Dataset columns:
  _id   — string, length 1–5
  task  — string, one of 6 values (every row in this excerpt: "clarity")
  src   — string, length 22–884 (edit instruction plus the original text)
  tgt   — string, length 1–697 (the revised text)

Each record below spans four lines, in column order: _id, task, src, tgt.
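The column metadata above implies a simple four-field record schema. A minimal sketch in Python — the `ClarityRecord` name is illustrative, and the example values are truncated from the first row below:

```python
from dataclasses import dataclass

@dataclass
class ClarityRecord:
    """One row of the dump: an edit instruction paired with its human revision."""
    _id: str   # row identifier, 1-5 characters (e.g. "68101")
    task: str  # edit-intent label; every row in this excerpt is "clarity"
    src: str   # instruction plus the original text, 22-884 characters
    tgt: str   # the revised text, 1-697 characters

# Illustrative record built from the first row of the dump (text truncated).
rec = ClarityRecord(
    _id="68101",
    task="clarity",
    src="Clarify this paragraph: constructs the storage model of information, ...",
    tgt="constructs the storage model of information, and simulate ...",
)
print(rec.task)
```

This mirrors how each four-line group in the dump would deserialize; a loader would simply consume the lines in groups of four, in column order.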
68101
clarity
Clarify this paragraph: constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes, reclassifies the sentences types from the perspective of task types and data reading modes. Then, simulated the understanding process (the information processing process) on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives out the definition of understanding from a personal point of view. The study in this paper provides a practical, theoretical basis and research methods for NLU.It also can be applied in large-scale, multi-type information processing in the artificial intelligence (AI) area.
constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences...
68102
clarity
Write a readable version of the sentence: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
Knowledge graphs (KGs) have helped neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
68103
clarity
Rewrite this sentence clearly: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners.
Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and item recommendation. By using attention over the KG, such models can also " insights " to practitioners.
68104
clarity
Clarify: Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners.
Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " explain " to practitioners.
68105
clarity
Rewrite this sentence clearly: In this paper, we question the faithfulness of such symbolic explanations.
In this paper, we question whether these models are really behaving as we expect.
68106
clarity
Clarify this text: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics.
We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics.
68107
clarity
Clarify this sentence: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics.
We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original structure while significantly deviating from the original semantics.
68108
clarity
Make the sentence clear: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics. In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge.
We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics and structure. Our findings raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge.
68109
clarity
Rewrite the sentence more clearly: These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge.
These results raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
68110
clarity
Make the text more understandable: Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
68111
clarity
Rewrite the sentence more clearly: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
68112
clarity
Make the text more understandable: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure.
68113
clarity
Write a better readable version of the sentence: Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and provide plausible explanations.
68114
clarity
Rewrite this sentence for clarity: Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
Our findings raise doubts about KG-augmented models' ability to leverage KG information and give sensible explanations.
68115
clarity
Rewrite this sentence for clarity: Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.
Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.
68116
clarity
Write a readable version of the sentence: Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.
Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) for commonsense reasoning tasks, like question answering (QA).
68117
clarity
Write a readable version of the sentence: However, these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limited expressiveness of its relations, and retrieved facts irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG), determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge. To address these issues, we propose Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG), determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
68118
clarity
Clarify: We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies.
We show marked improvement on three commonsense reasoning benchmarks and a user study of fact validness and helpfulness.
68119
clarity
Make this easier to read: It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
It has a long history in the field of natural language processing (NLP), and recently has re-gained significant attention thanks to the promising performance brought by deep learning models.
68120
clarity
Make this sentence better readable: It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep neural models.
68121
clarity
Clarify this text: Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
We discuss the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
68122
clarity
Clarify this paragraph: Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
Overall, we have covered the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data.
68123
clarity
Change to clearer wording: We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field.
We also provide discussions a variety of important topics regarding the future development of TST.
68124
clarity
Clarify: Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements over strong baseline methods.
68125
clarity
Make this easier to read: We also release our annotated data and code for future researchat URL
We will release our annotated data and code for future researchat URL
68126
clarity
Clarify this sentence: Recently, contextualized word embeddings outperform static word embeddings on many NLP tasks. However, we still do not know much about the mechanism inside these representations. Do they have any common patterns? If so, where do these patterns come from? We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern.
In this work, we demonstrate that the contextualized word vectors of BERT and RoBERTa have a commonpattern.
68127
clarity
Write a better readable version of the sentence: For BERT, the 557^{th neuron-level method to analyze where these "tails" come from. We find that these "tails" are closely related to the positional information.
For BERT, the 557^{th neuron-level analysis method, which reveals that the outliers are closely related to the positional information.
68128
clarity
Clarify this paragraph: We find that these "tails" are closely related to the positional information.
We find that these "tails" are closely related to information captured by positional embeddings.
68129
clarity
Clarify this text: Our theory provides precise reasons answering why or why not a triple is correct.
Our theory provides precise reasons explaining why or why not a triple is correct.
68130
clarity
Write a better readable version of the sentence: Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
68131
clarity
Write a clarified version of the sentence: Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion, including achieving a Hits@10 score of 80.38 percent on WN18RR.
68132
clarity
Clarify the sentence: Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion, including achieving a Hits@10 score of 96.6 percent on WN18RR.
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion. Especially, our model achieves a Hits@10 score of 96.6 percent on WN18RR.
68133
clarity
Write a clearer version for the sentence: Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score.
Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows for a unified control flow for the host program to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score.
68134
clarity
Clarify this paragraph: Many online comments on social media platforms are hateful, humorous, or sarcastic.
Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic, humorous, or sarcastic.
68135
clarity
Make the sentence clear: Many online comments on social media platforms are hateful, humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models.
Many online comments on social media platforms are hateful, humorous, or hateful. This sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models.
68136
clarity
Write a clearer version for the sentence: The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter).
The proposed phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter).
68137
clarity
Use clearer wording: The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context.
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset.
68138
clarity
Clarify this text: Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic, humorous, or hateful.
Many online comments on social media platforms are hateful, humorous, or hateful.
68139
clarity
Make this sentence better readable: This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done in this field.
This sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done in this field.
68140
clarity
Write a better readable version of the sentence: Several research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm. The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information.
Several research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not the intra sentence contextual information.
68141
clarity
Change to clearer wording: So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism. The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset.
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words.
68142
clarity
Clarification: Moreover, we propose an alignment strategy to tackle the label inconsistency during clustering assignments.
Moreover, we propose an alignment strategy to tackle the label inconsistency problem during clustering assignments.
68143
clarity
Make this sentence readable: ( Code available at URL
( The code is available at URL
68144
clarity
Make this sentence better readable: Natural Language Processing (NLP) systems learn harmful societal biases that cause them to extend and proliferate inequality widely, as they are deployed in more and more situations.
Natural Language Processing (NLP) systems learn harmful societal biases that cause them to widely proliferate inequality as they are deployed in more and more situations.
68145
clarity
Rewrite this sentence clearly: To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models, which are used to monitor model behaviour and to guide efforts at debiasing.
To address and combat this, the NLP community relies on a variety of metrics to identify and quantify bias in black-box models, which are used to monitor model behaviour and to guide efforts at debiasing.
68146
clarity
Change to clearer wording: This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias).
This research examines whether easy-to-measure intrinsic metrics correlate well to extrinsic metrics (which reflect real world bias).
68147
clarity
Clarification: This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias).
This research examines whether intrinsic metrics (which are easy to measure) correlate well to real world extrinsic metrics.
68148
clarity
Make the text more understandable: We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in more than extremely specific settings.
We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in all scenarios across tasks and languages.
68149
clarity
Make the sentence clear: We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier.
We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community increase effort into making downstream measurement simpler and easier.
68150
clarity
Make the sentence clear: However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center.
However, their memory footprint, inference latency, and power consumption are prohibitive efficient inference at the edge, and even at the data center.
68151
clarity
Make this easier to read: The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the speech.
The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the speech.
68152
clarity
Rewrite this sentence for clarity: The encoder extracts high-level representations from the speech. The PDS uses positional encodings corresponding to tokensto convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance.
The encoder extracts high-level representations from the whole speech signal rather than autoregressive modeling on tokens. Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance.
68153
clarity
Use clearer wording: As a result, these models could potentially fail to generalize to real-world out-of-distribution scenarios.
As a result, these models fail to generalize to real-world out-of-distribution scenarios.
68154
clarity
Rewrite this sentence for readability: As a result, these models could potentially fail to generalize to real-world out-of-distribution scenarios.
As a result, these models could potentially fail to generalize to real-world out-of-distribution data.
68155
clarity
Use clearer wording: In this work, we show that the shortcut learning behavior can be explained by the long-tailed phenomenon.
In this work, we show that the words in the NLU training set can be modeled as a long-tailed phenomenon.
68156
clarity
Clarify this paragraph: In this work, we show that the shortcut learning behavior can be explained by the long-tailed phenomenon.
In this work, we show that the shortcut learning behavior can be explained by the long-tailed distribution.
68157
clarity
Write a clarified version of the sentence: There are two findings: 1) Trained NLU models have strong preference for features located at the head of the long-tailed distribution, and 2) Shortcut features are picked up during very early few iterations of the model training.
There are two findings: 1) NLU models have strong preference for features located at the head of the long-tailed distribution, and 2) Shortcut features are picked up during very early few iterations of the model training.
68158
clarity
Make this sentence more readable: Experimental analysis further indicates that our method can improve the generalization accuracy on OOD data, while preserving the accuracy on in distribution test data.
Experimental analysis further indicates that LGTR can improve the generalization accuracy on OOD data, while preserving the accuracy on in distribution test data.
68159
clarity
Clarify the sentence: Experimental analysis further indicates that our method can improve the generalization accuracy on OOD data, while preserving the accuracy on in distribution test data.
Experimental analysis further indicates that our method can improve the generalization accuracy on OOD data, while preserving the accuracy on in-distribution data.
68160
clarity
Use clearer wording: Previous works on expressive text-to-speech (TTS) have a limitation on robustness and speed when training and inferring. Such drawbacks mostly come from autoregressive decoding, which makes the succeeding step vulnerable to preceding error. To overcome this weakness, we propose STYLER, a novel expressive text-to-speech model with parallelized architecture.
Previous works on expressive text-to-speech (TTS) have been tackled on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability. Although several approaches resolve some limitations, none of them has resolved all weaknesses at once. In this paper, we propose STYLER, a novel expressive text-to-speech model with parallelized architecture.
68161
clarity
Clarification: To overcome this weakness, we propose STYLER, a novel expressive text-to-speech model with parallelized architecture.
To overcome this weakness, we propose STYLER, an expressive and controllable text-to-speech model with parallelized architecture.
68162
clarity
Clarify: To overcome this weakness, we propose STYLER, a novel expressive text-to-speech model with parallelized architecture. Expelling autoregressive decoding and introducing speech decomposition for encoding enables speech synthesis more robust even with high style transfer performance.
To overcome this weakness, we propose STYLER, a novel expressive text-to-speech model with robust speech synthesis and high speed. Excluding autoregressive decoding and introducing speech decomposition for encoding enables speech synthesis more robust even with high style transfer performance.
68163
clarity
Make the sentence clear: Expelling autoregressive decoding and introducing speech decomposition for encoding enables speech synthesis more robust even with high style transfer performance.
Expelling autoregressive decoding and introducing speech decomposition for encoding enables speech synthesis more robust on long, unseen data. Disentangled style factor modeling under supervision enlarges the controllability of synthesizing speech with fruitful expressivity.
68164
clarity
Rewrite this sentence for clarity: Moreover, our novel noise modeling approach from audio using domain adversarial training and Residual Decoding enabled style transferwithout transferring noise.
Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding enabled style transferwithout transferring noise.
68165
clarity
Clarify: Previous works on neural text-to-speech (TTS) have been tackled on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability.
Previous works on neural text-to-speech (TTS) have been addressed on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability.
68166
clarity
Clarify this sentence: Although several approaches resolve some limitations, none of them has resolved all weaknesses at once.
Although several approaches resolve some limitations, there has been no attempt to solve all weaknesses at once.
68167
clarity
Change to clearer wording: In this paper, we propose STYLER, an expressive and controllable text-to-speech model with robust speech synthesis and high speed. Excluding autoregressive decoding and introducing a novel audio-text aligning method called Mel Calibrator leads speech synthesis more robust on long, unseen data.
In this paper, we propose STYLER, an expressive and controllable TTS framework with high-speed and robust synthesis. Our novel audio-text aligning method called Mel Calibrator leads speech synthesis more robust on long, unseen data.
68168
clarity
Improve this sentence for readability: Excluding autoregressive decoding and introducing a novel audio-text aligning method called Mel Calibrator leads speech synthesis more robust on long, unseen data.
A novel audio-text aligning method called Mel Calibrator and excluding autoregressive decoding enable rapid training and inference and robust synthesis on unseen data.
68169
clarity
Use clearer wording: Disentangled style factor modeling under supervision enlarges the controllability of synthesizing speech with fruitful expressivity. Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding enables noise-robust style transfer, decomposing the noise without any additional label.
Disentangled style factor modeling under supervision enlarges the controllability in synthesizing process leading to expressive TTS. On top of it, a novel noise modeling pipeline using domain adversarial training and Residual Decoding enables noise-robust style transfer, decomposing the noise without any additional label.
68170
clarity
Make the sentence clear: Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding enables noise-robust style transfer, decomposing the noise without any additional label.
Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding empowers noise-robust style transfer, decomposing the noise without any additional label.
68171
clarity
Rewrite this sentence clearly: Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed, robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies. Synthesis samples of our model and experiment results are provided via our demo page.
Various experiments demonstrate that STYLER is more effective in speed and robustness than expressive TTS with autoregressive decoding and more expressive and controllable than reading style non-autoregressive TTS. Synthesis samples of our model and experiment results are provided via our demo page.
68172
clarity
Rewrite this sentence for clarity: Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed, robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies. Synthesis samples of our model and experiment results are provided via our demo page.
Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed, robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies. Synthesis samples and experiment results are provided via our demo page.
68173
clarity
Improve this sentence for readability: In this paper, we describe a corpus annotation process, which was guided by a linguist, and a hate speech skilled to support the identification of hate speech and offensive language on social media.
In this paper, we describe a corpus annotation process proposed by a linguist, and a hate speech skilled to support the identification of hate speech and offensive language on social media.
68174
clarity
Make this sentence readable: In addition, we provide the first robust corpus of this kind for the Brazilian Portuguese language.
In addition, we provide the first robust dataset of this kind for the Brazilian Portuguese language.
68175
clarity
Rewrite this sentence clearly: This paper presents a new approach for offensive language and hate speech detection on social media.
This paper provides a new approach for offensive language and hate speech detection on social media.
68176
clarity
Make this sentence better readable: In this paper, we present a new Vietnamese corpus for conversational machine reading comprehension (ViCoQA ), consisting of 10,000 questions with answers over 2,000 conversations about health news articles.
In this paper, we present a new Vietnamese corpus for conversational machine reading comprehension (UIT-ViCoQA ), consisting of 10,000 questions with answers over 2,000 conversations about health news articles.
68177
clarity
Clarify this text: We analyze ViCoQA in depth with different linguistic aspects.
We analyze UIT-ViCoQA in depth with different linguistic aspects.
68178
clarity
Change to clearer wording: Then, we evaluate several baseline models about dialogue and reading comprehension on the ViCoQA corpus.
Then, we evaluate several baseline models about dialogue and reading comprehension on the UIT-ViCoQA corpus.
68179
clarity
Rewrite this sentence for readability: Machine reading comprehension (MRC) is a sub-field in natural language processing or computational linguistics. MRC aims to help computers understand unstructured texts and then answer questions related to them.
Machine reading comprehension (MRC) is a sub-field in natural language processing which aims to help computers understand unstructured texts and then answer questions related to them.
68180
clarity
Clarification: Then, we evaluate several baseline models about dialogue and reading comprehension on the UIT-ViCoQA corpus.
Then, we evaluate several baseline approaches for conversational machine comprehension on the UIT-ViCoQA corpus.
68181
clarity
Make the sentence clear: He also reaffirmed his stance on participation of Serbs in Kosovo's institutions: "For now, there is no room for Serbian representatives in institutions of Kosovo."
Koštunica also reaffirmed his stance on participation of Serbs in Kosovo's institutions: "For now, there is no room for Serbian representatives in institutions of Kosovo."
68182
clarity
Make this easier to read: With smaller size than movies and broadband access, popular shows often appear within hours of airing on TV.
With the increasing ubiquity of broadband Internet access, popular shows often appear within hours of airing on TV.
68183
clarity
Clarify this sentence: CEO of the MPAA, Dan Glickman told the BBC "Since we began shutting these sites down, the time that it takes to download a file on BitTorrent has increased exponentially which means the experience of downloading copyrighted films and TV shows is not what it used to be.
MPAA CEO Dan Glickman told the BBC "Since we began shutting these sites down, the time that it takes to download a file on BitTorrent has increased exponentially which means the experience of downloading copyrighted films and TV shows is not what it used to be.
68184
clarity
Write a better readable version of the sentence: According to GONG, an NGO observing the elections, Milan Bandić will be able to form a government in the city of Zagreb, since his list has won around 46\%.
According to GONG, an NGO observing the elections, Milan Bandić can form a government in the city of Zagreb, since his list has won around 46\%.
68185
clarity
Clarify the sentence: This would give them 27 of 51 seats in the capital city.
This would give the coalition 27 of 51 seats in the capital city.
68186
clarity
Make this sentence readable: The twelfth-annual Asia Pacific Regional Internet Conference on Operational Technologies (a.k.a APRICOT), returned to Taiwan this year at Taipei Howard Plaza Hotel ;
The twelfth-annual Asia Pacific Regional Internet Conference on Operational Technologies (a.k.a APRICOT), returned to Taiwan this year at the Taipei Howard Plaza Hotel ;
68187
clarity
Make the sentence clear: As Wikinews Journalist Rico Shen reported on the recent "Edison Chen photo scandal" incident, he commented: Workshops with varied topics and different technology levels took place from February 20 to 24, while several main seminars and speeches for industry, governmental, and academic executives ran from February 25 to 29.
When Wikinews journalist Rico Shen reported on the recent "Edison Chen photo scandal" incident, he commented: Workshops with varied topics and different technology levels took place from February 20 to 24, while several main seminars and speeches for industry, governmental, and academic executives ran from February 25 to 29.
68188
clarity
Clarify this text: Several industry experts such as Wilfred Kwan (Chief Technology Officer of AsiaNetCom), Chung-laung Liu (ISOC Taiwan Chapter Chair ), and Maemura Akinori (EC Chair of APNIC) will give several speeches related to the Internet industry at the conference.
Several industry experts such as Wilfred Kwan (Chief Technology Officer of AsiaNetCom), Chung-laung Liu (Taiwan Chapter Chair ), and Maemura Akinori (EC Chair of APNIC) will give several speeches related to the Internet industry at the conference.
68189
clarity
Write a readable version of the sentence: ZipcodeZoo offers over 3 million web pages describing species of plants and animals. Pages contain 258,753 photos taken by 1,369 photographers, 1,104 sound recordings, and definitions of 234,888 terms.
ZipcodeZoo offers over 3 million species of plants and animals. Pages contain 258,753 photos taken by 1,369 photographers, 1,104 sound recordings, and definitions of 234,888 terms.
68190
clarity
Rewrite the sentence more clearly: Oil in production Crude oil prices in New York rose to a new record of $102.59 per barrel on Thursday, although the figure increased even more during after hours trading.
An oil refinery Crude oil prices in New York rose to a new record of $102.59 per barrel on Thursday, although the figure increased even more during after hours trading.
68191
clarity
Make this sentence readable: In less than a month, prices have risen $10, leading the figures to above the record highs set during the 1980s (taking inflation into account).
In less than a month, prices have risen $10, leading the inflation adjusted prices above the record highs set during the 1980s (taking inflation into account).
68192
clarity
Write a better readable version of the sentence: In less than a month, prices have risen $10, leading the figures to above the record highs set during the 1980s (taking inflation into account).
In less than a month, prices have risen $10, leading the figures to above the record highs set during the 1980s.
68193
clarity
Clarify the sentence: The weak dollar is seen as a major cause of this rise. Congressman Ron Paul of Texas, pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the price of oil's rapid ascent, it had remained flat when compared to the price of gold.
While the weak dollar is seen as a major cause of this rise, Congressman Ron Paul of Texas pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the price of oil's rapid ascent, it had remained flat when compared to the price of gold.
68194
clarity
Write a readable version of the sentence: The weak dollar is seen as a major cause of this rise. Congressman Ron Paul of Texas, pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the price of oil's rapid ascent, it had remained flat when compared to the price of gold.
The weak dollar is seen as a major cause of this rise. Congressman Ron Paul of Texas, pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the price of oil, it had remained flat when compared to the price of gold.
68195
clarity
Make this sentence readable: A graph to show the increase in gasoline prices. There have also been suggestions that reports of a fire at a National Gas Terminal may have contributed to the rising oil price.
There have also been suggestions that reports of a fire at a National Gas Terminal may have contributed to the rising oil price.
68196
clarity
Clarify this sentence: Time Evans from Citigroup Futures has stated he believes that this fire at the UK natural gas terminal is creating a strong push in the European market, and that is translating here [the US]."
Time Evans from Citigroup Futures said he believes "that this fire at the UK natural gas terminal is creating a strong push in the European market, and that is translating here [the US]."
68197
clarity
Clarify this sentence: 10 other cases of the virus have appeared in dead birds, all Mute Swans from the same area.
Ten other cases of the virus have appeared in dead birds, all Mute Swans from the same area.
68198
clarity
Write a clarified version of the sentence: Interpol has released an "Orange Notice" for the Southeast Asian Jemaah Islamiyah terrorist group leader Mas Selamat bin Kastari, who escaped from a detention center Wednesday.
Interpol has issued an "Orange Notice" for the Southeast Asian Jemaah Islamiyah terrorist group leader Mas Selamat bin Kastari, who escaped from a detention center Wednesday.
68199
clarity
Rewrite this sentence clearly: Interpol has released an "Orange Notice" for the Southeast Asian Jemaah Islamiyah terrorist group leader Mas Selamat bin Kastari, who escaped from a detention center Wednesday.
Interpol has released an "Orange Notice" for the leader of the southeast Asian Jemaah Islamiyah terrorist group, Mas Selamat bin Kastari, who escaped from a detention center Wednesday.
68200
clarity
Make this easier to read: Not only professional runners from Asian European countries participated in this race, several enterprises, governmental and academic teams all supported this race by participating in 6K Fun Run classes.
Not only professional runners from Asian European countries participated in this race, several enterprises, governmental and academic teams also supported this race by participating in 6K Fun Run classes.