Column                     Type           Range
before_sent                stringlengths  13–1.44k
before_sent_with_intent    stringlengths  25–1.45k
after_sent                 stringlengths  0–1.41k
labels                     stringclasses  6 values
doc_id                     stringlengths  4–10
revision_depth             int64          1–4
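The records below appear flattened, one field value per line, in the column order listed in the schema above. As a hedged illustration only (the field order is inferred from that schema, and the class and function names here are hypothetical, not part of the dataset's tooling), a minimal Python sketch for regrouping such a flattened dump back into records:

```python
# Minimal sketch, assuming each record occupies exactly six consecutive lines
# in the column order: before_sent, before_sent_with_intent, after_sent,
# labels, doc_id, revision_depth. Names below are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class RevisionExample:
    before_sent: str
    before_sent_with_intent: str
    after_sent: str
    labels: str
    doc_id: str
    revision_depth: int


def parse_rows(lines: List[str]) -> List[RevisionExample]:
    """Group a flat list of field values (six per record) into records."""
    records = []
    # Ignore any trailing partial record (the dump may be truncated).
    for i in range(0, len(lines) - len(lines) % 6, 6):
        before, intent, after, label, doc_id, depth = lines[i:i + 6]
        records.append(RevisionExample(before, intent, after, label, doc_id, int(depth)))
    return records
```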
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
<clarity> We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models . We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
clarity
2004.14601
1
We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
<clarity> We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic data and evaluate their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
clarity
2004.14601
1
We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
<clarity> We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic , structured data and test their performance on natural language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
clarity
2004.14601
1
We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
<clarity> We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language.
We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language.
clarity
2004.14601
1
We find that models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
<meaning-changed> We find that models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
meaning-changed
2004.14601
1
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
<clarity> Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
Further experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
clarity
2004.14601
1
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
<meaning-changed> Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap.
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, even after removing any vocabulary overlap.
meaning-changed
2004.14601
1
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap. This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies .
<clarity> Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap. This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies .
Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, suggesting that representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies .
clarity
2004.14601
1
This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies .
<style> This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies .
This suggests that the internal representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
style
2004.14601
1
Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
<clarity> Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kinds of structural biases that give learners the ability to model language.
clarity
2004.14601
1
Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
<clarity> Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
Our results provide insights into how neural networks represent linguistic structure, and also about the kind of structural inductive biases which a learner needs to model language.
clarity
2004.14601
1
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
<clarity> Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
clarity
2004.14601
2
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
<meaning-changed> Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training a model on either of these artificial languages leads to the same substantial gains when testing on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
meaning-changed
2004.14601
2
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
<coherence> Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do.
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on natural language as well as recursive languages do.
coherence
2004.14601
2
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
<clarity> Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language . Further experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
clarity
2004.14601
2
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
<meaning-changed> Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
Experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
meaning-changed
2004.14601
2
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
<meaning-changed> Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced by pre-training correspond to the cross-linguistic syntactic properties studied in linguistic typology .
meaning-changed
2004.14601
2
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
<clarity> Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology .
Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties .
clarity
2004.14601
2
Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language .
<style> Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language .
Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural language acquisition .
style
2004.14601
2
In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation.
<meaning-changed> In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation.
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation.
meaning-changed
2004.14623
2
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
<meaning-changed> We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
meaning-changed
2004.14623
2
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
<clarity> We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our intervention experiments bolster this, showing that the causal dynamics of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
clarity
2004.14623
2
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
<meaning-changed> We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment relations .
meaning-changed
2004.14623
2
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
<meaning-changed> We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment and negation at an algorithmic level .
meaning-changed
2004.14623
2
We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
<clarity> We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
clarity
2004.14974
1
For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
<style> For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
For this task, we construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
style
2004.14974
1
For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
<coherence> For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales.
For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales.
coherence
2004.14974
1
We present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify claims relevant to COVID-19 on the CORD-19 corpus.
<meaning-changed> We present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify claims relevant to COVID-19 on the CORD-19 corpus.
We develop baseline models for SciFact, and demonstrate that these models benefit from combined training on a large dataset of claims about Wikipedia articles, together with the new SciFact data. We show that our claim verification system is able to identify plausible evidence for 23 / 36 claims relevant to COVID-19 on the CORD-19 corpus.
meaning-changed
2004.14974
1
Our dataset will be made publicly available at URL
<clarity> Our dataset will be made publicly available at URL
Our results and experiments strongly suggest that our new task and data will support significant future research efforts.
clarity
2004.14974
1
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment.
<clarity> One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment.
One key principle for assessing textual similarity is measuring the degree of semantic overlap of them by considering word-by-word alignment.
clarity
2004.15003
1
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
<clarity> One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
clarity
2004.15003
1
However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
<meaning-changed> However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance.
However, alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the generic sentence vectorsin terms of performance.
meaning-changed
2004.15003
1
However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning.
<meaning-changed> However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning.
However, alignment-based approaches are inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish word importance and word meaning.
meaning-changed
2004.15003
1
We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning. To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
<meaning-changed> We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning. To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
meaning-changed
2004.15003
1
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
<clarity> To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
To solve this , we propose to decouple word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
clarity
2004.15003
1
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
<clarity> To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction then computing the alignment-based similarity with the help of earth mover's distance .
clarity
2004.15003
1
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
<clarity> To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance .
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity using earth mover's distance .
clarity
2004.15003
1
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance . We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere.
<meaning-changed> To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance . We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere.
To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance (optimal transport cost), which we refer to as word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere.
meaning-changed
2004.15003
1
We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark .
<clarity> We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark .
We call the method word rotator's distance . Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines .
clarity
2004.15003
1
One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment.
<clarity> One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment.
A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment.
clarity
2004.15003
2
Such alignment-based approaches are both intuitive and interpretable;
<clarity> Such alignment-based approaches are both intuitive and interpretable;
Such alignment-based approaches are intuitive and interpretable;
clarity
2004.15003
2
To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
<clarity> To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To address this issue , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
clarity
2004.15003
2
To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
<clarity> To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To remedy this , we focus on and demonstrate the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
clarity
2004.15003
2
To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
<clarity> To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity.
To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity.
clarity
2004.15003
2
Alignment-based approaches do not distinguish the norm and direction , whereas sentence-vector approaches automatically use the norm as the word importance.
<clarity> Alignment-based approaches do not distinguish the norm and direction , whereas sentence-vector approaches automatically use the norm as the word importance.
Alignment-based approaches do not distinguish them , whereas sentence-vector approaches automatically use the norm as the word importance.
clarity
2004.15003
2
Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
<clarity> Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
Accordingly, we propose a method that first decouples word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
clarity
2004.15003
2
Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
<clarity> Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
Accordingly, we propose to decouple word vectors into their norm and direction , and then computes alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
clarity
2004.15003
2
Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
<coherence> Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance.
Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( i.e., optimal transport cost), which we refer to as word rotator's distance.
coherence
2004.15003
2
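The records above (doc_id 2004.15003) describe word rotator's distance: decouple each word vector into its norm (word importance) and direction (word meaning), then compute an alignment-based similarity via earth mover's distance. The following is a minimal sketch of that idea, not the authors' reference implementation; it assumes pre-computed word vectors and the POT library (`pip install pot`) for the optimal-transport step.

```python
# Illustrative sketch of the norm/direction decomposition plus optimal
# transport described above; not the paper's official code.
import numpy as np
import ot  # Python Optimal Transport (POT)


def word_rotators_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: arrays of shape (n_words, dim) holding word vectors of two sentences."""
    eps = 1e-12  # guard against zero-norm vectors
    norm_x = np.linalg.norm(X, axis=1) + eps
    norm_y = np.linalg.norm(Y, axis=1) + eps
    # Norms act as word-importance mass, normalized to probability distributions.
    a = norm_x / norm_x.sum()
    b = norm_y / norm_y.sum()
    # Directions (unit vectors) carry word meaning; cost = cosine distance between them.
    dir_x = X / norm_x[:, None]
    dir_y = Y / norm_y[:, None]
    cost = 1.0 - dir_x @ dir_y.T
    # Earth mover's distance (optimal transport cost) under that mass and cost matrix.
    return ot.emd2(a, b, cost)
```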
Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ;
<coherence> Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ;
Besides, we find how to grow the norm and direction of word vectors (vector converter) ;
coherence
2004.15003
2
Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ; this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
<coherence> Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ; this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) , which is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
coherence
2004.15003
2
this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
<fluency> this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
this is a new systematic approach derived from sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
fluency
2004.15003
2
this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
<clarity> this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method .
this is a new systematic approach derived from the sentence-vector estimation methods .
clarity
2004.15003
2
On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
<clarity> On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
On several textual similarity datasets, the combination of these simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
clarity
2004.15003
2
On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
<meaning-changed> On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines. The source code is available at URL
meaning-changed
2004.15003
2
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression requiring expert background knowledge and complex language understanding.
<fluency> We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression requiring expert background knowledge and complex language understanding.
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression , requiring expert background knowledge and complex language understanding.
fluency
2004.15011
1
We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related tasks of extreme summarization and title generation, which outperforms strong extractive and abstractive summarization baselines.
<clarity> We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related tasks of extreme summarization and title generation, which outperforms strong extractive and abstractive summarization baselines.
We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines.
clarity
2004.15011
1
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
<clarity> We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
We introduce TLDR generation , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
clarity
2004.15011
2
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
<clarity> We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
We introduce TLDR generation for scientific papers , a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression , requiring expert background knowledge and complex language understanding .
clarity
2004.15011
2
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
<fluency> We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression and requires expert background knowledge and complex language understanding .
fluency
2004.15011
2
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
<meaning-changed> We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding .
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and understanding of complex domain-specific language .
meaning-changed
2004.15011
2
To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
<clarity> To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
To facilitate study on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
clarity
2004.15011
2
To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
<meaning-changed> To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
To facilitate research on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments.
meaning-changed
2004.15011
2
To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .
<meaning-changed> To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .
To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at URL
meaning-changed
2004.15011
2
Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments .
<coherence> Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments .
Especially, widely used n-gram similarity metrics do not correlate with human judgments .
coherence
2005.00192
2
Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments .
<meaning-changed> Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments .
Using our human-evaluation datasets, we show that widely used n-gram similarity metrics often fail to discriminate the incorrect answers since they equally consider all of the tokens .
meaning-changed
2005.00192
2
To alleviate this problem, we propose a new metric for evaluating the correctness of GenQA.
<meaning-changed> To alleviate this problem, we propose a new metric for evaluating the correctness of GenQA.
To alleviate this problem, we propose KPQA-metric, a new metric for evaluating the correctness of GenQA.
meaning-changed
2005.00192
2
Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
<meaning-changed> Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets. Using our human-evaluation datasets, we show that our proposed metric has a significantly higher correlation with human judgments than existing metrics in various datasets.
meaning-changed
2005.00192
2
Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
<meaning-changed> Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
Our proposed metric shows a significantly higher correlation with human judgments than existing metrics . The code is available at URL
meaning-changed
2005.00192
2
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations.
<clarity> Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations.
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations.
clarity
2005.00782
1
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness .
<clarity> Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness .
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness .
clarity
2005.00782
1
Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings.
<meaning-changed> Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings.
Prior studies of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing. In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure to probe PTLMs in three task settings.
meaning-changed
2005.00782
1
Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness.
<clarity> Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness.
Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness.
clarity
2005.00782
1
We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
<clarity> We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
clarity
2005.00782
1
We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
<meaning-changed> We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
meaning-changed
2005.00782
1
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
<style> Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
Pre-trained language models ( PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
style
2005.00782
2
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
<clarity> Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated.
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to communicate with humans is fiercely debated.
clarity
2005.00782
2
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing .
<clarity> Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing .
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated .
clarity
2005.00782
2
In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations.
<meaning-changed> In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations.
In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA : Robust Inference capability based on Commonsense Axioms , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations.
meaning-changed
2005.00782
2
In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
<clarity> In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates robust commonsense inference despite textual perturbations. To generate data for this challenge , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
clarity
2005.00782
2
In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
<meaning-changed> In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
In our work , we develop a systematic and scalable procedure using commonsense knowledge bases and probe PTLMs across three different evaluation settings.
meaning-changed
2005.00782
2
In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
<meaning-changed> In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings.
In our work , we develop a systematic procedure to probe PTLMs across two different evaluation settings.
meaning-changed
2005.00782
2
Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
<meaning-changed> Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
Extensive experiments on our generated probe sets with more than 10k statements show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
meaning-changed
2005.00782
2
Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
<clarity> Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing on the zero-shot setting , are heavily impacted by statistical biases, and are not robust to perturbation attacks.
clarity
2005.00782
2
Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
<meaning-changed> Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
We also find that fine-tuning on similar statements offer limited gains, as PTLMs still fail to generalize to unseen inferences. Our new large-scale benchmark exposes a significant gap between PTLMs and human-level language understanding and offers a new challenge for PTLMs to demonstrate commonsense .
meaning-changed
2005.00782
2
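The records above (doc 2005.00782) describe probing pre-trained language models with sets of logically-equivalent but differently-worded statements, each containing a masked slot whose two candidate fillers flip the meaning. As a rough illustration of what such a zero-shot probe looks like in practice, the sketch below uses the standard HuggingFace fill-mask pipeline; the probe sentences, filler pairs, and model checkpoint are placeholders chosen here, not items from the RICA benchmark, and whether the model actually prefers the correct filler on every paraphrase is exactly the empirical question the records discuss.

```python
# Zero-shot masked-slot probing with contrastive fillers (illustrative only).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill.tokenizer.mask_token  # "[MASK]" for BERT

# Two logically-equivalent renderings of one made-up axiom; the first filler
# preserves the meaning, the second flips it.
probes = [
    (f"A whale is larger than a goldfish, so a whale is {MASK} likely to fit in a bowl.", "less", "more"),
    (f"Because a whale is larger than a goldfish, it is {MASK} plausible that the whale fits in the bowl.", "less", "more"),
]

def prefers_correct(sentence: str, good: str, bad: str) -> bool:
    """True if the model puts more probability mass on the correct filler."""
    results = fill(sentence, targets=[good, bad])
    scores = {r["token_str"].strip(): r["score"] for r in results}
    return scores.get(good, 0.0) > scores.get(bad, 0.0)

for sentence, good, bad in probes:
    verdict = "consistent" if prefers_correct(sentence, good, bad) else "inconsistent"
    print(f"{verdict}: {sentence}")
```

A model that encodes the axiom robustly should print "consistent" for every paraphrase of the same set; near-chance or flip-flopping behaviour across paraphrases is the inference-robustness gap the records report.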
Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing physician burnout.
<clarity> Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing physician burnout.
Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes. While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout.
clarity
2005.01795
2
In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
<clarity> In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
In this paper, we introduce the first complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
clarity
2005.01795
2
In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
<style> In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
In this paper, we present the first study to evaluate complete pipelines to leverage deep summarization models to generate these notes from conversations between physicians and patients.
style
2005.01795
2
In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
<clarity> In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes from conversations between physicians and patients.
In this paper, we present the first study to evaluate complete pipelines to train summarization models to generate these notes based on transcripts of conversations between physicians and patients.
clarity
2005.01795
2
We benefit from a dataset that, along with transcripts and paired SOAP notes, consists of annotations marking noteworthy utterances that support each summary sentence. We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of approaches according to how much they demand from each component.
<coherence> We benefit from a dataset that, along with transcripts and paired SOAP notes, consists of annotations marking noteworthy utterances that support each summary sentence. We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of approaches according to how much they demand from each component.
After exploring a spectrum of approaches according to how much they demand from each component.
coherence
2005.01795
2
We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of approaches according to how much they demand from each component. We observe that the performance improves constantly as the extractive subtask is made more complex - an observation that we also replicate on the well-known AMI meeting summarization dataset. Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
<meaning-changed> We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of approaches according to how much they demand from each component. We observe that the performance improves constantly as the extractive subtask is made more complex - an observation that we also replicate on the well-known AMI meeting summarization dataset. Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
We decompose the problem into extractive and abstractive subtasks, exploring a spectrum of methods across the extractive-abstractive spectrum, we propose Cluster2Sent, an algorithm that (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
meaning-changed
2005.01795
2
Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
<clarity> Our best performing method first (i) extracts noteworthy utterances via multi-label classification, assigning each to summary section(s) ;
Our best performing method first (i) extracts important utterances relevant to each summary section ;
clarity
2005.01795
2
(ii) clusters noteworthy utterances on a per-section basis; and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated.
<clarity> (ii) clusters noteworthy utterances on a per-section basis; and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated.
(ii) clusters together related utterances; and then (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated.
clarity
2005.01795
2
and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated. Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .
<meaning-changed> and (iii) generates the summary sentences by conditioning on the corresponding cluster and the subsection of the SOAP sentence to be generated. Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .
and (iii) generates one summary sentence per cluster. Cluster2Sent outperforms its purely abstractive counterpart by 8 ROUGE-1 points .
meaning-changed
2005.01795
2
Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .
<meaning-changed> Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points .
Compared to an end-to-end approach that generates the full SOAP note from the full conversation, our approach improves by around 8 ROUGE-1 points, and produces significantly more factual and coherent sentences as assessed by expert human evaluators. For reproducibility, we demonstrate similar benefits on the publicly available AMI dataset. Our results speak to the benefits of structuring summaries into sections and annotating supporting evidence when constructing summarization corpora .
meaning-changed
2005.01795
2
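The records above (doc 2005.01795) spell out a three-stage shape for conversation-to-SOAP-note summarization: (i) extract noteworthy utterances and assign them to summary sections, (ii) cluster related utterances, and (iii) generate one summary sentence per cluster. The skeleton below shows only that control flow; every component is a deliberately crude stand-in (keyword rules instead of a multi-label classifier, adjacency grouping instead of learned clustering, templates instead of a conditioned generator), and the section names and cue words are invented for the example.

```python
from typing import Dict, List

# Toy section lexicon standing in for a trained multi-label classifier.
SECTIONS = {"subjective": ["feel", "pain", "since"], "assessment": ["diagnos", "likely"]}

def extract(utterances: List[str]) -> Dict[str, List[int]]:
    """Stage (i): tag each utterance with the summary section(s) it supports."""
    noteworthy: Dict[str, List[int]] = {s: [] for s in SECTIONS}
    for i, utt in enumerate(utterances):
        for section, cues in SECTIONS.items():
            if any(cue in utt.lower() for cue in cues):
                noteworthy[section].append(i)
    return noteworthy

def cluster(indices: List[int], gap: int = 2) -> List[List[int]]:
    """Stage (ii): group supporting utterances that sit close together,
    a crude proxy for 'related utterances'."""
    groups: List[List[int]] = []
    for i in indices:
        if groups and i - groups[-1][-1] <= gap:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

def generate(section: str, evidence: List[str]) -> str:
    """Stage (iii): one summary sentence per cluster, conditioned on the
    cluster and its section (templated here instead of a seq2seq model)."""
    return f"[{section}] " + " ".join(evidence)

transcript = [
    "I've had chest pain since Tuesday.",
    "It feels worse when I climb stairs.",
    "Let's talk about your medication.",
    "This is most likely costochondritis.",
]
for section, idxs in extract(transcript).items():
    for group in cluster(idxs):
        print(generate(section, [transcript[i] for i in group]))
```

The point of the structure, as the records note, is that each generated sentence conditions on a small evidence cluster and a target section rather than on the full conversation, which is what the purely end-to-end baseline lacks.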
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
<clarity> To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
clarity
2005.03954
1
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
<fluency> To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for a pair of a recommendation seeker (user) and a recommender (bot).
To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), where there are multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot).
fluency
2005.03954
1
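The two records above (doc 2005.03954) describe the shape of DuRecDial: every seeker-bot pair is associated with multiple sequential dialogs. A minimal container for that structure might look as follows; the class and field names are invented here for illustration, not taken from the dataset release.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dialog:
    goal: str                                        # e.g. "movie recommendation"
    utterances: List[str] = field(default_factory=list)

@dataclass
class SeekerBotPair:
    seeker_id: str
    bot_id: str
    dialogs: List[Dialog] = field(default_factory=list)  # sequential sessions

pair = SeekerBotPair("user_042", "bot_007")
pair.dialogs.append(Dialog("chitchat", ["Hi!", "Hello, how was your day?"]))
pair.dialogs.append(Dialog("movie recommendation",
                           ["Any film ideas?", "You might enjoy The Wandering Earth."]))
print(len(pair.dialogs), "sequential dialogs for one seeker-bot pair")
```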
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
<clarity> To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
The outbreak of COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
clarity
2005.03975
1
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
<meaning-changed> To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
To address the need for refined information in COVID-19 raises attention from the researchers from various communities. While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current time-critical race to treat patients and to find a cure for the virus. To address the requests, we propose our CAiRE-COVID, a neural-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
meaning-changed
2005.03975
1
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
<clarity> To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses open-domain question answering (QA) techniques combined with summarization for mining the available scientific literature.
clarity
2005.03975
1
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
<meaning-changed> To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature.
To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization techniques for mining the available scientific literature.
meaning-changed
2005.03975
1
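The records above (doc 2005.03975) describe a system that couples open-domain question answering with summarization to mine scientific literature. The sketch below mirrors that retrieve-answer-condense flow at toy scale: retrieval is plain word overlap, answer spans come from a generic extractive-QA pipeline, and the "summary" is just the most confident spans joined together, so it should be read as the shape of such a pipeline rather than the CAiRE-COVID system itself.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def retrieve(question: str, passages: list, k: int = 2) -> list:
    """Rank passages by crude word overlap with the question."""
    q = set(question.lower().split())
    return sorted(passages, key=lambda p: -len(q & set(p.lower().split())))[:k]

def answer_and_condense(question: str, passages: list) -> str:
    """Extract an answer span from each retrieved passage, then keep the most
    confident spans as a rough stand-in for the summarization stage."""
    spans = []
    for passage in retrieve(question, passages):
        result = qa(question=question, context=passage)
        spans.append((result["score"], result["answer"]))
    spans.sort(reverse=True)
    return " / ".join(answer for _, answer in spans)

papers = [
    "Fever and dry cough are the most commonly reported symptoms of COVID-19.",
    "Hand washing reduces transmission of many respiratory viruses.",
]
print(answer_and_condense("What are common symptoms of COVID-19?", papers))
```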