_id
stringlengths
1
5
task
stringclasses
6 values
src
stringlengths
22
884
tgt
stringlengths
1
697
67901
clarity
Make the sentence clear: To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding.
To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding.
67902
clarity
Rewrite this sentence for clarity: Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas.
Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations.
67903
clarity
Improve this sentence for readability: Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues.
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues.
67904
clarity
Clarify this paragraph: Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues.
Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations.
67905
clarity
Write a better readable version of the sentence: In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding.
In this paper, we propose a novel approach for detecting humor in short texts using BERT sentence embedding.
67906
clarity
Make this sentence readable: Our proposed model uses BERT to generate tokens and sentence embedding for texts.
Our proposed method uses BERT to generate tokens and sentence embedding for texts.
67907
clarity
Make the text more understandable: Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts the target value.
Our proposed model uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value.
67908
clarity
Write a better readable version of the sentence: For evaluation, we created a new dataset for humor detection consisting of 200k formal short texts (100k positive, 100k negative).
For evaluation purposes, we created a new dataset for humor detection consisting of 200k formal short texts (100k positive, 100k negative).
67909
clarity
Use clearer wording: Experimental results show an accuracy of 98.1 percent for the proposed method, 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percentbetter than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model.
Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models.
67910
clarity
Make this easier to read: Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER), Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis.
Multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis.
67911
clarity
Clarify: In this work, we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks.
In this paper, we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks.
67912
clarity
Make this sentence more readable: Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL
Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at URL
67913
clarity
Write a better readable version of the sentence: We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning.
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models through transfer learning.
67914
clarity
Rewrite this sentence for clarity: We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
67915
clarity
Improve this sentence for readability: We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic data and evaluate their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
67916
clarity
Make the sentence clearer: We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic, structured data and test their performance on natural language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
67917
clarity
Rewrite this sentence for readability: We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic, structured data and test their performance on human language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language.
67918
clarity
Clarify the sentence: Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
Further experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
67919
clarity
Make this sentence better readable: Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap. This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies.
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, suggesting that representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies.
67920
clarity
Rewrite the sentence more clearly: Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kinds of structural biases that give learners the ability to model language.
67921
clarity
Make this sentence readable: Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
Our results provide insights into how neural networks represent linguistic structure, and also about the kind of structural inductive biases which a learner needs to model language.
67922
clarity
Write a readable version of the sentence: Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
67923
clarity
Clarify this paragraph: Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology.
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language. Further experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology.
67924
clarity
Clarify this sentence: Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology.
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties.
67925
clarity
Make this sentence better readable: We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset, and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations.
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset, and our intervention experiments bolster this, showing that the causal dynamics of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations.
67926
clarity
Make this sentence better readable: We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.
67927
clarity
Clarify this paragraph: Our dataset will be made publicly available at URL
Our results and experiments strongly suggest that our new task and data will support significant future research efforts.
67928
clarity
Rewrite this sentence for readability: One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment.
One key principle for assessing textual similarity is measuring the degree of semantic overlap of them by considering word-by-word alignment.
67929
clarity
Rewrite the sentence more clearly: One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
67930
clarity
Make this sentence better readable: To solve this, we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction, then compute the alignment-based similarity with the help of earth mover's distance.
To solve this, we propose to decouple word vectors into their norm and direction, then compute the alignment-based similarity with the help of earth mover's distance.
67931
clarity
Clarification: To solve this, we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction, then compute the alignment-based similarity with the help of earth mover's distance.
To solve this, we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction then computing the alignment-based similarity with the help of earth mover's distance.
67932
clarity
Rewrite this sentence clearly: To solve this, we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction, then compute the alignment-based similarity with the help of earth mover's distance.
To solve this, we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction, then compute the alignment-based similarity using earth mover's distance.
67933
clarity
Write a readable version of the sentence: We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark.
We call the method word rotator's distance. Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
67934
clarity
Change to clearer wording: One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment.
A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment.
67935
clarity
Make this sentence more readable: Such alignment-based approaches are both intuitive and interpretable;
Such alignment-based approaches are intuitive and interpretable;
67936
clarity
Write a better readable version of the sentence: To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To address this issue, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
67937
clarity
Make this easier to read: To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To remedy this, we focus on and demonstrate the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
67938
clarity
Make this sentence better readable: To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity.
67939
clarity
Clarify this paragraph: Alignment-based approaches do not distinguish the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance.
Alignment-based approaches do not distinguish them, whereas sentence-vector approaches automatically use the norm as the word importance.
67940
clarity
Clarify this sentence: Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance.
Accordingly, we propose a method that first decouples word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance.
67941
clarity
Make this sentence more readable: Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance.
Accordingly, we propose to decouple word vectors into their norm and direction, and then computes alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance.
67942
clarity
Write a better readable version of the sentence: this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method.
this is a new systematic approach derived from the sentence-vector estimation methods.
67943
clarity
Rewrite this sentence for readability: On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
On several textual similarity datasets, the combination of these simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
67944
clarity
Make the sentence clearer: We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related tasks of extreme summarization and title generation, which outperforms strong extractive and abstractive summarization baselines.
We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines.
67945
clarity
Write a readable version of the sentence: We introduce TLDR generation for scientific papers, a new automatic summarizationtask with high source compression, requiring expert background knowledge and complex language understanding.
We introduce TLDR generation, a new automatic summarizationtask with high source compression, requiring expert background knowledge and complex language understanding.
67946
clarity
Use clearer wording: We introduce TLDR generation for scientific papers, a new automatic summarizationtask with high source compression, requiring expert background knowledge and complex language understanding.
We introduce TLDR generation for scientific papers, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression, requiring expert background knowledge and complex language understanding.
67947
clarity
Write a readable version of the sentence: To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
To facilitate study on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
67948
clarity
Rewrite this sentence for clarity: Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations.
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations.
67949
clarity
Clarify this sentence: Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness.
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness.
67950
clarity
Make the sentence clear: Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness.
Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness.
67951
clarity
Make the sentence clearer: We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities, while also providing a probing set to test robustness under several linguistic variations--code and data will be released.
We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning), are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs' inference abilities, while also providing a probing set to test robustness under several linguistic variations--code and data will be released.
67952
clarity
Change to clearer wording: Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to communicate with humans is fiercely debated.
67953
clarity
Make the text more understandable: Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humansrequires inferences based on implicit commonsense relationships, and robustness despite paraphrasing.
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.
67954
clarity
Make this sentence readable: In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure to probe PTLMs across three different evaluation settings.
In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates robust commonsense inference despite textual perturbations. To generate data for this challenge, we develop a systematic procedure to probe PTLMs across three different evaluation settings.
67955
clarity
Clarification: Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning), are heavily impacted by statistical biases, and are not robust to perturbation attacks.
Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing on the zero-shot setting, are heavily impacted by statistical biases, and are not robust to perturbation attacks.
67956
clarity
Use clearer wording: Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing physician burnout.
Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes. While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout.
67957
clarity
Use clearer wording: In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
In this paper, we introduce the first complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
67958
clarity
Clarify the sentence: In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes based on transcripts of conversations between physicians and patients.
67959
clarity
Write a better readable version of the sentence: Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
Our best performing method first (i) extracts important utterances relevant to each summary section ;
67960
clarity
Make this sentence readable: (ii) clusters noteworthy utteranceson a per-section basis; and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated.
(ii) clusters together related utterances; and then (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated.
67961
clarity
Make the sentence clear: To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
67962
clarity
Change to clearer wording: To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
The outbreak of COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
67963
clarity
Improve this sentence for readability: To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses open-domain question answering (QA) techniques combined with summarization for mining the available scientific literature.
67964
clarity
Rewrite this sentence for readability: Our system leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from the existing literature given a query.
It leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from the existing literature given a query.
67965
clarity
Clarify the sentence: To bootstrap the further study, the code for our system is available at URL
To bootstrap the further study, the code for our system is also open-sourced to bootstrap further study.
67966
clarity
Write a readable version of the sentence: While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based system that uses open-domain question answering (QA) techniques combined with summarization techniques for mining the available scientific literature. It leverages the Information Retrieval (IR) system and QA models to extract relevant snippets from existing literature given a query.
While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the community and summarizing salient question-related information. It combines information extraction with state-of-the-art QA and query-focused multi-document summarization techniques, selecting and highlighting evidence snippets from existing literature given a query.
67967
clarity
Make the sentence clearer: The code for our system is also open-sourced to bootstrap further study.
The code for our system, to bootstrap further study.
67968
clarity
Write a clarified version of the sentence: Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost.
Our experiments demonstrate that (i) SOLOIST creates new state-of-the-art results on two well-known benchmarks, CamRest and MultiWOZ, (ii) in the few-shot learning setting, the dialog systems developed by SOLOIST significantly outperform those developed by existing methods, and (iii) the use of machine teaching substantially reduces the labeling cost.
67969
clarity
Rewrite the sentence more clearly: ROUGE is the de facto criterion for summarization research. However, its two major drawbackslimit the research and application of automated summarization systems.
Canonical automatic summary evaluation metrics, such as ROUGE, suffer from two drawbacks.
67970
clarity
Write a clearer version for the sentence: First, ROUGE favors lexical similarity instead of semantic similarity, making it especially unfit for abstractive summarization.
First, semantic similarity and linguistic quality are not captured well.
67971
clarity
Make this sentence readable: Therefore, we introduce a new end-to-end metric system for summary quality assessment by leveraging the semantic similarities of words and/or sentences in deep learning.
Therefore, we introduce an end-to-end metric system for summary quality assessment by leveraging the semantic similarities of words and/or sentences in deep learning.
67972
clarity
Improve this sentence for readability: Therefore, we introduce a new end-to-end metric system for summary quality assessment by leveraging the semantic similarities of words and/or sentences in deep learning.
Therefore, we introduce a new end-to-end approach for summary quality assessment by leveraging the semantic similarities of words and/or sentences in deep learning.
67973
clarity
Improve this sentence for readability: The proposed approach exhibits very promising results on gold-standard datasets and suggests its great potential to future summarization research.
The proposed approach exhibits promising results on gold-standard datasets and suggests its great potential to future summarization research.
67974
clarity
Rewrite this sentence clearly: Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in biomedical text-mining tasks and enabled us to extract valuable information from biomedical literature.
Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature.
67975
clarity
Clarify the sentence: Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in biomedical text-mining tasks and enabled us to extract valuable information from biomedical literature. However, we benefitted only in English because of the significant scarcity of high-quality medical documents, such as PubMed, in each language.
Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in biomedical text-mining tasks and enabled us to extract valuable information from biomedical literature; however, only English speakers benefit due to the significant scarcity of high-quality medical documents, such as PubMed, in each language.
67976
clarity
Rewrite this sentence for clarity: Therefore, we propose a method that realizes a high-performance BERT model by using a small corpus.
Therefore, we propose a method to train a high-performance BERT model by using a small corpus.
67977
clarity
Make this sentence more readable: We introduce the method to train a BERT model on a small medical corpus both in English and Japanese, respectively, and then we evaluate each of them in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical-document-classification task in Japanese, respectively.
We introduce the method to train a BERT model on a small medical corpus both in English and in Japanese, and we present the evaluation of each model in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical-document-classification task in Japanese, respectively.
67978
clarity
Clarify this paragraph: We introduce the method to train a BERT model on a small medical corpus both in English and Japanese, respectively, and then we evaluate each of them in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical-document-classification task in Japanese, respectively.
We introduce the method to train a BERT model on a small medical corpus both in English and Japanese, respectively, and then we evaluate each of them in terms of the biomedical language understanding evaluation (BLUE) benchmark and the medical document classification task in Japanese, respectively.
67979
clarity
Write a better readable version of the sentence: After confirming their satisfactory performances, we apply our method to develop a model that outperforms the pre-existing models.
After confirming their satisfactory performances, we apply our method to develop a model comparable to the publicly available models. OuBioBERT, short for
67980
clarity
Make this sentence better readable: Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University (ouBioBERT) achieves the best scores on 7 of the 10 datasets in terms of the BLUE benchmark.
Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University, achieved the best score in terms of the BLUE benchmark.
67981
clarity
Make the text more understandable: Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties, such as BioBERT and clinicalBERT, have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature ;
Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties, such as bidirectional encoder representations from transformers (BERT), the performance of information extraction from a free text by NLP has significantly improved for both the general domain and medical domain ;
67982
clarity
Write a better readable version of the sentence: OuBioBERT, short for Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University, achieved the best score in terms of the BLUE benchmark. The total score is 1.1 points above that of BioBERT and 0.3 points above that of the ablated model trained without our proposed method.
OuBioBERT, short for Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University, achieved the best score in terms of the BLUE benchmark were 0.3 points above that of the ablated model trained without our proposed method.
67983
clarity
Clarify the sentence: This proposed technique is an effective approach to develop localized medical BERT models and to enhance domain-specific models in the biomedical domain.
Well-balanced pre-training by up-sampling instances derived from a corpus appropriate for the target task allows us to construct a high-performance BERT model.
67984
clarity
Write a readable version of the sentence: Word embeddings represent words in a numeric space in such a way that semantic relations between words are encoded as distances and directions in the vector space.
Word embeddings represent words in a numeric space so that semantic relations between words are encoded as distances and directions in the vector space.
67985
clarity
Clarification: Word embeddings represent words in a numeric space in such a way that semantic relations between words are encoded as distances and directions in the vector space.
Word embeddings represent words in a numeric space in such a way that semantic relations between words are represented as distances and directions in the vector space.
67986
clarity
Rewrite this sentence clearly: Cross-lingual word embeddings map words from one language to the vector space of another language, or words from multiple languages to the same vector space where similar words are aligned.
Cross-lingual word embeddings transform vector spaces of different languages so that similar words are aligned.
67987
clarity
Make this easier to read: Cross-lingual embeddings can be used to transfer machine learning models between languages and thereby compensate for insufficient data in less-resourced languages.
Cross-lingual embeddings can be used to transfer machine learning models between languages, thereby compensating for insufficient data in less-resourced languages.
67988
clarity
Rewrite this sentence for clarity: We evaluate systems on two benchmark datasets.
We experiment on two benchmark datasets.
67989
clarity
Make this sentence readable: These evaluation metricsare used to determine a stable system. Only robust systems in all evaluation metrics are suitable for use in real applications.
It is difficult to select the best system using only these automatic metrics, but it is possible to select stable systems. We consider only robust systems in all evaluation metrics are suitable for use in real applications.
67990
clarity
Use clearer wording: Only robust systems in all evaluation metrics are suitable for use in real applications.
Only robust systems in all automatic evaluation metrics to be the minimum conditions that can be used in real applications.
67991
clarity
Rewrite this sentence for clarity: Many previous systems are difficult to use in certain situations because they are unstable in some evaluation metrics.
Many previous systems are difficult to use in certain situations because performance is significantly lower in several evaluation metrics.
67992
clarity
Make the sentence clear: In this paper, we design a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer (Abend and Rappoport, 2013).
This paper proposes a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer (Abend and Rappoport, 2013).
67993
clarity
Change to clearer wording: In this paper, we design a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer (Abend and Rappoport, 2013). Our design aligns with O'Gorman (2019)'s implicit role interpretation in a linguistic and computational model.
In this paper, we design a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive Annotation's foundational layer.
67994
clarity
Rewrite this sentence clearly: We corroborate the theory by reviewing and refining part of the UCCA EWT corpus and providing a new dataset alongside comparative analysis with other schemes.
We exemplify our design by revisiting part of the UCCA EWT corpus and providing a new dataset alongside comparative analysis with other schemes.
67995
clarity
Rewrite the sentence more clearly: We corroborate the theory by reviewing and refining part of the UCCA EWT corpus and providing a new dataset alongside comparative analysis with other schemes. It is anticipated that our study will inspire tailored design of implicit role annotation in other meaning representation frameworks, and stimulate research in relevant fields, such as coreference resolution and question answering.
We corroborate the theory by reviewing and refining part of the UCCA EWT corpus and providing a new dataset alongside comparative analysis with other schemes.
67996
clarity
Clarify the sentence: The benefits of MaChAmp are its flexible configuration options, and the support of a variety of NLP tasks in a uniform toolkit, from text classification to sequence labeling and dependency parsing.
The benefits of MaChAmp are its flexible configuration options, and the support of a variety of natural language processing tasks in a uniform toolkit, from text classification to sequence labeling and dependency parsing.
67997
clarity
Clarify: One account of the informativity effect on duration is that the acoustic details of word reduction are stored as part of a word's representation.
One account of the informativity effect on duration is that the acoustic details of word reduction are stored as part of a word's mental representation.
67998
clarity
Make the sentence clear: Extending this interpretation, these results suggest that informativity is closely linked to prosodic prominence, and that lexical representation of a word includes phonetic details associated with its prosodic prominence.
Extending this interpretation, these results suggest that predictability is closely linked to prosodic prominence, and that lexical representation of a word includes phonetic details associated with its prosodic prominence.
67999
clarity
Make this sentence more readable: We present ExplainIt, a review summarization system centered around opinion explainability: the simple notion of high-level opinions (e.g. "noisy room") being explainable by lower-level ones (e.g., "loud fridge"). ExplainIt utilizes a combination of supervised and unsupervised components to mine the opinion phrasesfrom reviews URLanize them in an Opinion Causality Graph (OCG), a novel semi-structured representation which summarizes causal relations. To construct an OCG, we cluster semantically similar opinions in single nodes, thus canonicalizing opinion paraphrases, and draw directed edges between node pairs that are likely connected by a causal relation.
We present ExplainIt, a system that extracts URLanizes opinions into an opinion graph, which are useful for downstream applications such as generating explainable review summaries and facilitating search over opinion phrases. In such graphs, a node represents a set of semantically similar opinions in single nodes, thus canonicalizing opinion paraphrases, and draw directed edges between node pairs that are likely connected by a causal relation.
68000
clarity
Rewrite the sentence more clearly: For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
For each of them, such as approaches to mention encoding based on the self-attention architecture.