{"before_sent": " Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data.", "before_sent_with_intent": " Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data.", "after_sent": " The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data. Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data.", "labels": "meaning-changed", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data. This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "before_sent_with_intent": " Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and natural data. This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "after_sent": " Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "before_sent_with_intent": " This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "after_sent": " This improves standard back-translation and also enabling the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "labels": "fluency", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "before_sent_with_intent": " This improves standard back-translation and also enables the use of iterative back-translation on language pairs that underperformed using standard back-translation.", "after_sent": " This improves standard back-translation and also enables the use of iterative back-translation on language pairs that under-performed using standard back-translation.", "labels": "fluency", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " This work presents a simplified approach of differentiating between the two data using pretraining and finetuning .", "before_sent_with_intent": " This work presents a simplified approach of differentiating between the two data using pretraining and finetuning .", "after_sent": " This work presents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data using pretraining and finetuning .", "labels": "coherence", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " This work presents a simplified approach of differentiating between the two data using pretraining and finetuning .", "before_sent_with_intent": " This work presents a simplified approach of differentiating between the two data using pretraining and finetuning .", "after_sent": " This work presents a simplified approach of differentiating between the two data .", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the natural data.", "before_sent_with_intent": " The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the natural data.", "after_sent": " The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the natural data.", "labels": "fluency", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the natural data. Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "before_sent_with_intent": " The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the natural data. Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "after_sent": " The approach - tag-less back-translation - trains the model on the synthetic data and finetunes it on the authentic data. Experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "before_sent_with_intent": " Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "after_sent": " Preliminary experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese neural machine translation .", "labels": "meaning-changed", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "before_sent_with_intent": " Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese neural machine translation .", "after_sent": " Preliminary experiments have shown the approach to continuously outperform the tagging approach on low resource English-Vietnamese NMT .", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "before_sent_with_intent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "after_sent": " While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation approach by an average of 0.4 BLEU .", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "before_sent_with_intent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "after_sent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation by 0.4 BLEU .", "labels": "clarity", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "before_sent_with_intent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU .", "after_sent": " While the need for tagging (noising) the dataset has been removed, the approach outperformed the tagged back-translation approach by an average of 0.4 BLEU . The approach reached the best scores in less training time than the standard and tagged back-translation approaches .", "labels": "meaning-changed", "doc_id": "1912.10514", "revision_depth": 1}
{"before_sent": "With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training .", "before_sent_with_intent": " With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training .", "after_sent": "With the recent success of the pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training .", "labels": "coherence", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": "With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training .", "before_sent_with_intent": " With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training .", "after_sent": "With the recent success of pre-training technique for NLP and image-linguistic tasks, some video-linguistic pre-training .", "labels": "clarity", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": "With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training . Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "before_sent_with_intent": " With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training . Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "after_sent": "With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training works are gradually developed to improve video-text related downstream tasks. However , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "labels": "meaning-changed", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "before_sent_with_intent": " Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "after_sent": " Besides , most of the existing multimodal models are pre-trained for understanding tasks, leading to a pretrain-finetune discrepency for generation tasks.", "labels": "fluency", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "before_sent_with_intent": " Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks.", "after_sent": " Besides , most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepancy for generation tasks.", "labels": "fluency", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " In this paper , we propose UniViLM : a Unified Video and Language pre-training Model for both multimodal understanding and generation.", "before_sent_with_intent": " In this paper , we propose UniViLM : a Unified Video and Language pre-training Model for both multimodal understanding and generation.", "after_sent": " This paper proposes UniVL : a Unified Video and Language pre-training Model for both multimodal understanding and generation.", "labels": "clarity", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " In this paper , we propose UniViLM : a Unified Video and Language pre-training Model for both multimodal understanding and generation.", "before_sent_with_intent": " In this paper , we propose UniViLM : a Unified Video and Language pre-training Model for both multimodal understanding and generation.", "after_sent": " In this paper , we propose UniViLM : a Unified Video and Language pre-training model for both multimodal understanding and generation.", "labels": "fluency", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone.", "before_sent_with_intent": " Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone.", "after_sent": " It comprises four components, including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone.", "labels": "clarity", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone.", "before_sent_with_intent": " Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone.", "after_sent": " Our model comprises of 4 components including two single-modal encoders, a cross encoder , and a decoder with the Transformer backbone.", "labels": "fluency", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset.", "before_sent_with_intent": " We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset.", "after_sent": " Five objectives, including video-text joint, conditioned masked language model (CMLM), conditioned masked frame model (CMFM), video-text alignment, and language reconstruction, are designed to train each of the components. We further develop two pre-training strategies, stage by stage pre-training (StagedP) and enhanced video representation (EnhancedV), to make the training process of the UniVL more effective. The pre-train our model to learn the universal representation for both video and language on a large instructional video dataset.", "labels": "meaning-changed", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results .", "before_sent_with_intent": " We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results .", "after_sent": " We first pre-train is carried out on a sizeable instructional video dataset HowTo100M. Experimental results demonstrate that the state-of-the art results .", "labels": "clarity", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": " Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results .", "before_sent_with_intent": " Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results .", "after_sent": " Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the UniVL can learn strong video-text representation and achieves state-of-the-art results on five downstream tasks .", "labels": "meaning-changed", "doc_id": "2002.06353", "revision_depth": 2}
{"before_sent": "Following each patient visit, physicians must draft detailed clinical summaries called SOAP notes .", "before_sent_with_intent": " Following each patient visit, physicians must draft detailed clinical summaries called SOAP notes .", "after_sent": "Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note .", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " For all the benefits of this documentation the process remains onerous , contributing to increasing physician burnout.", "before_sent_with_intent": " For all the benefits of this documentation the process remains onerous , contributing to increasing physician burnout.", "after_sent": " Despite the benefits of this documentation the process remains onerous , contributing to increasing physician burnout.", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " For all the benefits of this documentation the process remains onerous , contributing to increasing physician burnout.", "before_sent_with_intent": " For all the benefits of this documentation the process remains onerous , contributing to increasing physician burnout.", "after_sent": " For all the benefits of this documentation , their creation remains an onerous process , contributing to increasing physician burnout.", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " In a parallel development, patients increasingly record audio from their visits (with consent), often through dedicated apps. In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "before_sent_with_intent": " In a parallel development, patients increasingly record audio from their visits (with consent), often through dedicated apps. In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "after_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "before_sent_with_intent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "after_sent": " In this paper, we present the first study to evaluate complete pipelines to generate these notes .", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "before_sent_with_intent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes .", "after_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to train summarization models to generate these notes .", "labels": "meaning-changed", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes . We first describe a unique dataset of patient visit records, consisting of transcripts , paired SOAP notes, and annotations marking noteworthy utterances that support each summary sentence.", "before_sent_with_intent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes . We first describe a unique dataset of patient visit records, consisting of transcripts , paired SOAP notes, and annotations marking noteworthy utterances that support each summary sentence.", "after_sent": " In this paper, we present the first study to evaluate complete pipelines for leveraging these transcripts to train machine learning model to generate these notes from conversations between physicians and patients. We benefit from a dataset that, along with transcripts and paired SOAP notes, and annotations marking noteworthy utterances that support each summary sentence.", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " We first describe a unique dataset of patient visit records, consisting of transcripts , paired SOAP notes, and annotations marking noteworthy utterances that support each summary sentence.", "before_sent_with_intent": " We first describe a unique dataset of patient visit records, consisting of transcripts , paired SOAP notes, and annotations marking noteworthy utterances that support each summary sentence.", "after_sent": " We first describe a unique dataset of patient visit records, consisting of transcripts , paired SOAP notes, consists of annotations marking noteworthy utterances that support each summary sentence.", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " Our best performing method first (i) extracts noteworthy utterances via multi-label classification assigns them to summary section(s);", "before_sent_with_intent": " Our best performing method first (i) extracts noteworthy utterances via multi-label classification assigns them to summary section(s);", "after_sent": " We observe that the performance improves constantly as the extractive subtask is made more complex - an observation that we also replicate on the well-known AMI meeting summarization dataset. Our best performing method first (i) extracts noteworthy utterances via multi-label classification assigns them to summary section(s);", "labels": "meaning-changed", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " Our best performing method first (i) extracts noteworthy utterances via multi-label classification assigns them to summary section(s);", "before_sent_with_intent": " Our best performing method first (i) extracts noteworthy utterances via multi-label classification assigns them to summary section(s);", "after_sent": " Our best performing method first (i) extracts noteworthy utterances via multi-label classification , assigning each to summary section(s);", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by 7 ROUGE-1 points .", "before_sent_with_intent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by 7 ROUGE-1 points .", "after_sent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .", "labels": "meaning-changed", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by 7 ROUGE-1 points . Oracle experiments indicate that fixing our generative capabilities, improvements in extraction alone could provide (up to) a further 9 ROUGE point gain .", "before_sent_with_intent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by 7 ROUGE-1 points . Oracle experiments indicate that fixing our generative capabilities, improvements in extraction alone could provide (up to) a further 9 ROUGE point gain .", "after_sent": " Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by 7 ROUGE-1 points .", "labels": "clarity", "doc_id": "2005.01795", "revision_depth": 1}
{"before_sent": "This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "before_sent_with_intent": " This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "after_sent": "We present a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": "This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "before_sent_with_intent": " This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "after_sent": "This paper presents a new method SOLOIST that uses transfer learning to efficiently build task-oriented dialog systems at scale.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": "This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "before_sent_with_intent": " This paper presents a new method SOLOIST , which uses transfer learning to efficiently build task-oriented dialog systems at scale.", "after_sent": "This paper presents a new method SOLOIST , which uses transfer learning and machine teaching to build task bots at scale.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model.", "before_sent_with_intent": " We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model.", "after_sent": " We parameterize classical modular task-oriented dialog systems using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model.", "before_sent_with_intent": " We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator) into a single neural model.", "after_sent": " We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules into a single neural model.", "labels": "coherence", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "before_sent_with_intent": " We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "after_sent": " We pre-train, on heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "before_sent_with_intent": " We pre-train, on large heterogeneous dialog corpora, a large-scale Transformer model which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "after_sent": " We pre-train, on large heterogeneous dialog corpora, a task-grounded response generation model, which can generate dialog responses grounded in user goals and real-world knowledge for task completion.", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching .", "before_sent_with_intent": " The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching .", "after_sent": " The pre-trained model can be efficiently adapted to accomplish new tasks with a handful of task-specific dialogs via machine teaching .", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching . Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "before_sent_with_intent": " The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching . Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "after_sent": " The pre-trained model can be efficiently adapted to accomplish a new dialog task with a handful of task-specific dialogs via machine teaching , where training samples are generated by human teachers interacting with the system. Experiments show that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "labels": "meaning-changed", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "before_sent_with_intent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "after_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks, including CamRest676 and MultiWOZ; (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "before_sent_with_intent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "after_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot fine-tuning settings, SOLOIST significantly outperforms existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost .", "labels": "clarity", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost . We will release our code and pre-trained models for reproducible research.", "before_sent_with_intent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost . We will release our code and pre-trained models for reproducible research.", "after_sent": " Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost of fine-tuning. The pre-trained models for reproducible research.", "labels": "coherence", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": " We will release our code and pre-trained models for reproducible research.", "before_sent_with_intent": " We will release our code and pre-trained models for reproducible research.", "after_sent": " We will release our code and pre-trained models and codes are available at URL", "labels": "meaning-changed", "doc_id": "2005.05298", "revision_depth": 2}
{"before_sent": "Stance detection on social media is an emerging opinion mining paradigm for various social and political applications where sentiment analysis might be sub-optimal.", "before_sent_with_intent": " Stance detection on social media is an emerging opinion mining paradigm for various social and political applications where sentiment analysis might be sub-optimal.", "after_sent": "Stance detection on social media is an emerging opinion mining paradigm for various social and political applications in which sentiment analysis may be sub-optimal.", "labels": "clarity", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.", "before_sent_with_intent": " This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.", "after_sent": " There has been a growing research interest for developing effective methods for stance detection methods varying among multiple communities including natural language processing, web science, and social computing. This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.", "labels": "meaning-changed", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.", "before_sent_with_intent": " This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.", "after_sent": " This paper surveys the work on stance detection within those communities and situates its usage within current opinion mining techniques in social media.", "labels": "meaning-changed", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "before_sent_with_intent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "after_sent": " It presents an exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "labels": "coherence", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "before_sent_with_intent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "after_sent": " An exhaustive review of stance detection techniques on social media , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "labels": "coherence", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "before_sent_with_intent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "after_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "labels": "fluency", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "before_sent_with_intent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "after_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, features set used, and the various machine learning approaches applied.", "labels": "clarity", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "before_sent_with_intent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.", "after_sent": " An exhaustive review of stance detection techniques on social media is presented , including the task definition, the different types of targets in stance detection, the features set used, and various machine learning approaches applied.", "labels": "fluency", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " The survey reports the state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches.", "before_sent_with_intent": " The survey reports the state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches.", "after_sent": " The survey reports state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches.", "labels": "fluency", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " In addition, this study explores the emerging trends and the different applications of stance detection on social media.", "before_sent_with_intent": " In addition, this study explores the emerging trends and the different applications of stance detection on social media.", "after_sent": " In addition, this study explores the emerging trends and different applications of stance detection on social media.", "labels": "clarity", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible future directions for stance detection on social media.", "before_sent_with_intent": " The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible future directions for stance detection on social media.", "after_sent": " The study concludes by discussing the gaps in the current existing research and highlighting the possible future directions for stance detection on social media.", "labels": "clarity", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": " The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible future directions for stance detection on social media.", "before_sent_with_intent": " The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible future directions for stance detection on social media.", "after_sent": " The study concludes by providing discussion of the gaps in the current existing research and highlights the possible future directions for stance detection on social media.", "labels": "fluency", "doc_id": "2006.03644", "revision_depth": 2}
{"before_sent": "Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "before_sent_with_intent": " Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "after_sent": "Recently, knowledge graph (KG) augmented models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "labels": "meaning-changed", "doc_id": "2010.12873", "revision_depth": 2}
{"before_sent": "Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "before_sent_with_intent": " Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "after_sent": "Recently, neural-symbolic models have achieved noteworthy success on various commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "labels": "clarity", "doc_id": "2010.12873", "revision_depth": 2}
{"before_sent": "Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "before_sent_with_intent": " Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "after_sent": "Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "labels": "clarity", "doc_id": "2010.12873", "revision_depth": 2}
{"before_sent": "Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks , like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge .", "before_sent_with_intent": "
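The records above follow a fixed schema: each is a JSON object with the fields "before_sent", "before_sent_with_intent", "after_sent", "labels" (the edit intention, e.g. clarity, fluency, coherence, meaning-changed), "doc_id" (the source abstract, given as an arXiv identifier), and "revision_depth". A minimal Python sketch of how such a file can be loaded and tallied by label follows; the file name revisions.jsonl is a placeholder for wherever these records are stored, and the decode-error guard simply skips incomplete lines such as the truncated final record above.

import json
from collections import Counter

def load_records(path):
    """Parse one JSON revision record per line, skipping blank or truncated lines."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # e.g. a record cut off mid-string
    return records

if __name__ == "__main__":
    records = load_records("revisions.jsonl")  # placeholder path
    print(Counter(r["labels"] for r in records))  # distribution of edit intentions
    print(Counter(r["doc_id"] for r in records))  # number of edits per source abstract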