Dataset schema (column name, type, observed value range)

before_sent               stringlengths   13 to 1.44k
before_sent_with_intent   stringlengths   25 to 1.45k
after_sent                stringlengths   0 to 1.41k
labels                    stringclasses   6 values
doc_id                    stringlengths   4 to 10
revision_depth            int64           1 to 4
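Each record below lists its values in this column order: the original sentence (before_sent), the same sentence prefixed with its edit-intent tag (before_sent_with_intent), the revised sentence (after_sent), the intent label, the source document ID, and the revision depth. As a minimal sketch of how such records might be handled once parsed into dictionaries keyed by the column names above, the following Python snippet rebuilds the intent-tagged input from before_sent and labels and groups records by label; the helper names are hypothetical and the string values are abbreviated from the first record in this dump.

# One record from this dump, keyed by the schema's column names.
# String values are abbreviated here for readability.
record = {
    "before_sent": "In this survey, we provide a comprehensive description "
                   "of recent neural entity linking (EL) systems .",
    "after_sent": "In this survey, we provide a comprehensive description "
                  "of recent neural entity linking (EL) systems developed "
                  "since 2015 ...",
    "labels": "meaning-changed",
    "doc_id": "2006.00575",
    "revision_depth": 1,
}

def with_intent_tag(rec):
    # Rebuild the before_sent_with_intent field from before_sent and labels,
    # following the pattern visible in the records: "<label> before_sent".
    return f"<{rec['labels']}> {rec['before_sent']}"

def group_by_label(records):
    # Bucket records by their edit-intent label (6 classes in this dump).
    buckets = {}
    for rec in records:
        buckets.setdefault(rec["labels"], []).append(rec)
    return buckets

print(with_intent_tag(record))
print({label: len(recs) for label, recs in group_by_label([record]).items()})
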
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems . We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
<meaning-changed> In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems . We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems developed since 2015 as a result of the "deep learning revolution" in NLP. Our goal is to systemize design features of neural entity linking systems and compare their performances to the best classic methods on the common benchmarks. We distill generic architectural components of a neural EL system, like candidate generation and entity ranking summarizing the prominent methods for each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
meaning-changed
2006.00575
1
For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
<clarity> For each of them, we summarize the prominent methods and models, including approaches to mention encoding based on the self-attention architecture.
For each of them, such as approaches to mention encoding based on the self-attention architecture.
clarity
2006.00575
1
Since many EL models take advantage of entity embeddings to improve their generalization capabilities, we provide an overview of the widely-used entity embedding techniques. We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
<clarity> Since many EL models take advantage of entity embeddings to improve their generalization capabilities, we provide an overview of the widely-used entity embedding techniques. We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
The vast variety of modifications of this general neural entity linking architecture are grouped by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
clarity
2006.00575
1
We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
<clarity> We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
We group the variety of EL approaches by several common themes : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
clarity
2006.00575
1
We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
<clarity> We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global EL , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
We group the variety of EL approaches by several common research directions : joint entity recognition and linking, models for global linking , domain-independent techniques including zero-shot and distant supervision methods, and cross-lingual approaches.
clarity
2006.00575
1
We also discuss the novel application of EL for enhancing word representation models like BERT. We systemize the critical design features of EL systems and provide their reported evaluation results .
<meaning-changed> We also discuss the novel application of EL for enhancing word representation models like BERT. We systemize the critical design features of EL systems and provide their reported evaluation results .
Since many neural models take advantage of pre-trained entity embeddings to improve their generalization capabilities, we provide an overview of popular entity embedding techniques. Finally, we briefly discuss applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models such as BERT .
meaning-changed
2006.00575
1
CoAID includes 1,896 news, 183,564 related user engagements, 516 social platform posts about COVID-19, and ground truth labels.
<meaning-changed> CoAID includes 1,896 news, 183,564 related user engagements, 516 social platform posts about COVID-19, and ground truth labels.
CoAID includes 3,235 news, 294,692 related user engagements, 516 social platform posts about COVID-19, and ground truth labels.
meaning-changed
2006.00885
1
CoAID includes 1,896 news, 183,564 related user engagements, 516 social platform posts about COVID-19, and ground truth labels.
<meaning-changed> CoAID includes 1,896 news, 183,564 related user engagements, 516 social platform posts about COVID-19, and ground truth labels.
CoAID includes 1,896 news, 183,564 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
meaning-changed
2006.00885
1
CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
<meaning-changed> CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
CoAID includes 4,251 news, 296,000 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
meaning-changed
2006.00885
2
CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
<meaning-changed> CoAID includes 3,235 news, 294,692 related user engagements, 851 social platform posts about COVID-19, and ground truth labels.
CoAID includes 3,235 news, 294,692 related user engagements, 926 social platform posts about COVID-19, and ground truth labels.
meaning-changed
2006.00885
2
In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method which is focused on how the information is being used, rather than on what information is encoded.
<style> In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method which is focused on how the information is being used, rather than on what information is encoded.
In this work, we point out the inability to infer behavioral conclusions from probing results, and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
style
2006.00995
1
Equipped with this new analysis tool, we can now ask questions that were not possible before, e.g. is part-of-speech information important for word prediction?
<clarity> Equipped with this new analysis tool, we can now ask questions that were not possible before, e.g. is part-of-speech information important for word prediction?
Equipped with this new analysis tool, we can ask questions that were not possible before, e.g. is part-of-speech information important for word prediction?
clarity
2006.00995
1
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes.
<coherence> A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes.
A growing body of work makes use of probing to investigate the working of neural models, often considered black boxes.
coherence
2006.00995
2
In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
<fluency> In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
fluency
2006.00995
2
In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
<clarity> In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method which focuses on how the information is being used, rather than on what information is encoded.
In this work, we point out the inability to infer behavioral conclusions from probing results , and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
clarity
2006.00995
2
Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention which removes it from the representation.
<fluency> Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention which removes it from the representation.
Our method, Amnesic Probing, follows the intuition that the utility of a property for a given task can be assessed by measuring the influence of a causal intervention that removes it from the representation.
fluency
2006.00995
2
Artificial neural networks ( ANNS have shown much empirical success in solving perceptual tasks across various cognitive modalities.
<fluency> Artificial neural networks ( ANNS have shown much empirical success in solving perceptual tasks across various cognitive modalities.
Artificial neural networks ( ANNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
fluency
2006.01095
1
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNS and neural populations in the brain.
<fluency> While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNS and neural populations in the brain.
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain.
fluency
2006.01095
1
ANNS have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
<fluency> ANNS have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
fluency
2006.01095
1
Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
<meaning-changed> Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
Deep neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
meaning-changed
2006.01095
2
Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
<meaning-changed> Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
Artificial neural networks ( DNNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
meaning-changed
2006.01095
2
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain.
<fluency> While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain.
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representations extracted from task-optimized ANNs and neural populations in the brain.
fluency
2006.01095
2
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain.
<meaning-changed> While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populations in the brain.
While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized DNNs and neural populations in the brain.
meaning-changed
2006.01095
2
ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
<meaning-changed> ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
DNNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
meaning-changed
2006.01095
2
ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
<fluency> ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
ANNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions, and in turn , they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations.
fluency
2006.01095
2
In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models.
<meaning-changed> In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models.
In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects geometry of feature representations with linear separability of classes , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models.
meaning-changed
2006.01095
2
In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models.
<coherence> In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze the high dimensional geometry of language representations from large-scale contextual embedding models.
In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience , to analyze language representations from large-scale contextual embedding models.
coherence
2006.01095
2
We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
<meaning-changed> We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
We explore representations from different model families (BERT, RoBERTa, GPT , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
meaning-changed
2006.01095
2
We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
<fluency> We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifolds across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags).
fluency
2006.01095
2
We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network .
<meaning-changed> We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech and combinatory categorical grammar tags). We further observe that different encoding schemes used to obtain the representations lead to differences in whether these linguistic manifolds emerge in earlier or later layers of the network .
We explore representations from different model families (BERT, RoBERTa, GPT-2 , etc.) and find evidence for emergence of linguistic manifold across layer depth (e.g., manifolds for part-of-speech tags), especially in ambiguous data (i.e, words with multiple part-of-speech tags, or part-of-speech classes including many words) .
meaning-changed
2006.01095
2
In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds radius, dimensionality and inter-manifold correlations.
<fluency> In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds radius, dimensionality and inter-manifold correlations.
In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of manifolds ' radius, dimensionality and inter-manifold correlations.
fluency
2006.01095
2
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD). The method trains multiple UMT agents and then translates monolingual data back and forth using non-duplicative agents to acquire synthetic parallel data for supervised MT. MACD is applicable to all previous UMT approaches.
<meaning-changed> Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD). The method trains multiple UMT agents and then translates monolingual data back and forth using non-duplicative agents to acquire synthetic parallel data for supervised MT. MACD is applicable to all previous UMT approaches.
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes has seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), that is aimed to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches.
meaning-changed
2006.02163
1
In our experiments, the technique boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU.
<clarity> In our experiments, the technique boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU.
In our experiments, it boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU.
clarity
2006.02163
1
In our experiments, the technique boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU.
<clarity> In our experiments, the technique boosts the performance for some commonly used UMT methods by 1.5-2.0 BLEU.
In our experiments, the technique boosts the performance of the standard UMT methods by 1.5-2.0 BLEU.
clarity
2006.02163
1
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively.
<meaning-changed> In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively.
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, CBD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively.
meaning-changed
2006.02163
1
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively.
<meaning-changed> In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model pretraining by 2.3, 2.2 and 1.6 BLEU, respectively.
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian, MACD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU, respectively.
meaning-changed
2006.02163
1
It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German translation tasks.
<meaning-changed> It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German translation tasks.
It also yields 1.5 --3.3 BLEU improvements in IWSLT English-French and English-German translation tasks.
meaning-changed
2006.02163
1
It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German translation tasks.
<clarity> It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German translation tasks.
It also yields 1.5 -3.3 BLEU improvements in IWSLT English-French and English-German tasks.
clarity
2006.02163
1
Through extensive experimental analyses, we show that MACD is effective because it embraces data diversity while other similar variants do not.
<meaning-changed> Through extensive experimental analyses, we show that MACD is effective because it embraces data diversity while other similar variants do not.
Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
meaning-changed
2006.02163
1
In our experiments, it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
<meaning-changed> In our experiments, it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
In our experiments, CBD achieves the state of the art in the WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
meaning-changed
2006.02163
2
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
<fluency> In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
In particular, in WMT'14 English-French, WMT'16 English-German and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
fluency
2006.02163
2
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
<meaning-changed> In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian , CBD outperforms cross-lingual masked language model (XLM) by 2.3, 2.2 and 1.6 BLEU , respectively.
In particular, in WMT'14 English-French, WMT'16 German-English and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively.
meaning-changed
2006.02163
2
The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model.
<clarity> The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model.
The target-side side monolingual data has been used in the back-translation - has been shown in many studies to affect the performance of the final NMT model.
clarity
2006.02876
1
The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model .
<meaning-changed> The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model .
The quality of the backward system - which is trained on the available parallel data and used for the back-translation approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic .
meaning-changed
2006.02876
1
This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique.
<meaning-changed> This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique.
This work proposes the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data to improve the model itself through the forward translation technique.
meaning-changed
2006.02876
1
This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
<clarity> This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
This work proposes a self-training strategy where the output of the backward model is used to improve the backward model through forward translation and the forward model through back-translation. Experimental results on English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
clarity
2006.02876
1
The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
<clarity> The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
The technique was shown to improve baseline low resource IWSLT'14 English-German and English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively.
clarity
2006.02876
1
The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
<meaning-changed> The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation by 2.7 BLEU .
meaning-changed
2006.02876
1
The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
<clarity> The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation by 2.7 BLEU .
The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation method .
clarity
2006.02876
1
The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model.
<meaning-changed> The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model.
The quality of the backward system - which is trained on the available parallel data and used for the back-translation approach to improve the forward (target) translation model.
meaning-changed
2006.02876
2
The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic .
<coherence> The target-side side monolingual data has been used in the back-translation approach to improve the forward (target) translation model. Whereas the success of the approach heavily relies on the additional parallel data generating model -- the backward model -- the aim of the approach is only targeted at improving the forward model. The back-translation approach was designed primarily to benefit from an additional data whose source-side is synthetic. But research works have shown that translation models can also benefit from additional data whose target-side is synthetic .
The target-side side monolingual data has been used in the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model .
coherence
2006.02876
2
This work proposes the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation.
<clarity> This work proposes the use of the target-side data throughout the back-translation approach to improve both the backward and forward models. We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation.
This work proposes a self-training strategy where the output of the backward model is used to improve the backward model through forward translation and the forward model through back-translation.
clarity
2006.02876
2
We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation. Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
<meaning-changed> We explored using only the target-side monolingual data to improve the backward model through forward translation and the forward model through back-translation. Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
We explored using only the target-side monolingual data to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
meaning-changed
2006.02876
2
Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
<meaning-changed> Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
Experimental results on English-German and IWSLT'15 English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
meaning-changed
2006.02876
2
Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
<meaning-changed> Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
Experimental results on English-German and English-Vietnamese backward translation models by 11.06 and 1.5 BLEUs respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which out-performed another forward model trained using standard back-translation method .
meaning-changed
2006.02876
2
Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
<meaning-changed> Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation method .
Experimental results on English-German and English-Vietnamese low resource neural machine translation showed that the proposed approach outperforms baselines that use the traditional back-translation by 2.7 BLEU .
meaning-changed
2006.02876
2
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal.
<fluency> Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal.
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications where sentiment analysis might be seen sub-optimal.
fluency
2006.03644
1
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal.
<clarity> Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal.
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be sub-optimal.
clarity
2006.03644
1
This paper surveys the work on stance detection and situates its usage withincurrent opinion mining techniques in social media.
<fluency> This paper surveys the work on stance detection and situates its usage withincurrent opinion mining techniques in social media.
This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media.
fluency
2006.03644
1
An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied.
<fluency> An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied.
An exhaustive review of stance detection techniques on social media is presented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied.
fluency
2006.03644
1
An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied.
<fluency> An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the variousmachine learning approaches applied.
An exhaustive review of stance detection techniques on social media ispresented ,including the task definition, the different types of targets in stance detection, the features set used, and the various machine learning approaches applied.
fluency
2006.03644
1
The survey reports the state-of-the-art results on the existing benchmark datasets onstance detection, and discusses the most effective approaches.
<fluency> The survey reports the state-of-the-art results on the existing benchmark datasets onstance detection, and discusses the most effective approaches.
The survey reports the state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches.
fluency
2006.03644
1
The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
<fluency> The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
The study concludes by providing discussion of the gaps in the current existing research and highlighting the possible futuredirections for stance detection on social media
fluency
2006.03644
1
The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
<fluency> The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible future directions for stance detection on social media
fluency
2006.03644
1
The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
<fluency> The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media
The study concludes by providing discussion of the gabs in the current existing research and highlighting the possible futuredirections for stance detection on social media .
fluency
2006.03644
1
In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
<others> In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
In this paper wepropose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
others
2006.03654
1
In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
<fluency> In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with dis-entangled attention) that improves the BERT and RoBERTa models using two novel techniques.
fluency
2006.03654
1
Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining .
<meaning-changed> Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining .
Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens for model pretraining .
meaning-changed
2006.03654
1
Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining .
<clarity> Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining .
Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens in model pre-training .
clarity
2006.03654
1
We show that these two techniques significantly improve the efficiency of model pre-training and performance of downstream tasks.
<meaning-changed> We show that these two techniques significantly improve the efficiency of model pre-training and performance of downstream tasks.
We show that these two techniques significantly improve the efficiency of model pre-training and the performance of both natural languageunderstand (NLU) and natural langauge generation (NLG) tasks.
meaning-changed
2006.03654
1
The DeBERTa code and pre-trained models will be made publicly available at URL
<meaning-changed> The DeBERTa code and pre-trained models will be made publicly available at URL
Notably, we scale up DeBERTa to 1.5 billion parameters and it substantially outperforms Google's T5 with 11 billionparameters on the SuperGLUE benchmark (Wang et al., 2019a) and, for the first time, surpasses the human performance (89.9 vs. 89.8).
meaning-changed
2006.03654
1
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
<clarity> Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
Recent VQA models may tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
clarity
2006.04315
1
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
<clarity> Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
Visual Question Answering (VQA ) models tend to rely on language bias as a shortcut and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
clarity
2006.04315
1
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
<meaning-changed> Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA .
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to sufficiently learn the multi-modal knowledge from both vision and language .
meaning-changed
2006.04315
1
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
<meaning-changed> In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framework, which enables us to capture the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
meaning-changed
2006.04315
1
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
<clarity> In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
In this paper, we propose a novel cause-effect look at the language bias as the direct effect of question on answer from the view of causal inference.
clarity
2006.04315
1
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
<meaning-changed> In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference.
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct causal effect of questions on answers and reduce the language bias by subtracting the direct language effect from the total causal effect of question on answer from the view of causal inference.
meaning-changed
2006.04315
1
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference. The effectcan be captured by counterfactual VQA, where the image had not existed in an imagined scenario. Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
<coherence> In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of question on answer from the view of causal inference. The effectcan be captured by counterfactual VQA, where the image had not existed in an imagined scenario. Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect . Experiments demonstrate that our proposed counterfactual inference framework 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
coherence
2006.04315
1
Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
<meaning-changed> Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
Our proposed cause-effect look 1) is general to various VQA backbones and fusion strategies , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
meaning-changed
2006.04315
1
Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
<clarity> Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
clarity
2006.04315
1
Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
<meaning-changed> Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset , and 3) fills the theoretical gap in recent language prior based works .
Our proposed cause-effect look 1) is general to any baseline VQA architecture , 2) achieves significant improvement on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset .
meaning-changed
2006.04315
1
before_sent: Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.
before_sent_with_intent: <clarity> Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.
after_sent: VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.
labels: clarity
doc_id: 2006.04315
revision_depth: 2

before_sent: In this paper, we investigate how to capture and mitigate language bias in VQA.
before_sent_with_intent: <meaning-changed> In this paper, we investigate how to capture and mitigate language bias in VQA.
after_sent: Recent debiasing methods proposed to exclude the language prior during inference. However, they fail to disentangle the "good" language context and "bad" language bias from the whole. In this paper, we investigate how to capture and mitigate language bias in VQA.
labels: meaning-changed
doc_id: 2006.04315
revision_depth: 2

before_sent: In this paper, we investigate how to capture and mitigate language bias in VQA.
before_sent_with_intent: <coherence> In this paper, we investigate how to capture and mitigate language bias in VQA.
after_sent: In this paper, we investigate how to mitigate language bias in VQA.
labels: coherence
doc_id: 2006.04315
revision_depth: 2

before_sent: Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset .
before_sent_with_intent: <meaning-changed> Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset .
after_sent: Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, 2) achieves competitive performance on the language-bias sensitive VQA-CP dataset while performs robustly on the balanced VQA v2 dataset without any augmented data. The code is available at URL
labels: meaning-changed
doc_id: 2006.04315
revision_depth: 2

before_sent: In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ;
before_sent_with_intent: <others> In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ;
after_sent: In o gur work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ;
labels: others
doc_id: 2006.06814
revision_depth: 1

before_sent: In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ;
before_sent_with_intent: <meaning-changed> In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO ;
after_sent: In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO , where the latent dialogue act is applied to avoid designing specific dialogue act representations ;
labels: meaning-changed
doc_id: 2006.06814
revision_depth: 1

before_sent: (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
before_sent_with_intent: <coherence> (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
after_sent: (2) train HDNO via hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
labels: coherence
doc_id: 2006.06814
revision_depth: 1

before_sent: (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
before_sent_with_intent: <style> (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
after_sent: (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest the asynchronous updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
labels: style
doc_id: 2006.06814
revision_depth: 1

before_sent: (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
before_sent_with_intent: <clarity> (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during HRL inspired by fictitious play, to preserve the comprehensibility of generated system utterances while improving fulfilling user requests ;
after_sent: (2) train HDNO with hierarchical reinforcement learning (HRL), as well as suggest alternating updates between dialogue policy and NLG during training to theoretically guarantee their convergence to a local maximizer ;
labels: clarity
doc_id: 2006.06814
revision_depth: 1

before_sent: We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing a significant improvement on the total performance evaluated with automatic metrics .
before_sent_with_intent: <meaning-changed> We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing a significant improvement on the total performance evaluated with automatic metrics .
after_sent: We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in comparison with word-level E2E model trained with RL, LaRL and HDSA, showing improvements on the performance evaluated by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation .
labels: meaning-changed
doc_id: 2006.06814
revision_depth: 1

before_sent: In o gur work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations;
before_sent_with_intent: <fluency> In o gur work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations;
after_sent: In our work, we (1) propose modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, where the latent dialogue act is applied to avoid designing specific dialogue act representations;
labels: fluency
doc_id: 2006.06814
revision_depth: 2

before_sent: Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation .
before_sent_with_intent: <clarity> Finally, we demonstrate the semantic meanings of latent dialogue acts to show the ability of explanation .
after_sent: Finally, we demonstrate the semantic meanings of latent dialogue acts to show the explanability for HDNO .
labels: clarity
doc_id: 2006.06814
revision_depth: 2

before_sent: We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks.
before_sent_with_intent: <meaning-changed> We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks.
after_sent: Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks.
labels: meaning-changed
doc_id: 2006.10598
revision_depth: 1

before_sent: We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks. Our approach is based on the observation that many neural networks are severely overparameterized, resulting in significant waste in computational resources as well as being susceptible to overfitting. SSNs address this by learning where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting.
before_sent_with_intent: <meaning-changed> We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks. Our approach is based on the observation that many neural networks are severely overparameterized, resulting in significant waste in computational resources as well as being susceptible to overfitting. SSNs address this by learning where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting.
after_sent: We present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters. In SSNs each layer obtains weights from a parameter store that decides where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting.
labels: meaning-changed
doc_id: 2006.10598
revision_depth: 1

before_sent: SSNs address this by learning where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting. Specifically, we automatically construct parameter groups that identify where parameter sharing is most beneficial. Then, we map each group's weights to construct layerswith learned combinations of candidates from a shared parameter pool. SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
before_sent_with_intent: <clarity> SSNs address this by learning where and how to share parameters between layers in a neural network while avoiding degenerate solutions that result in underfitting. Specifically, we automatically construct parameter groups that identify where parameter sharing is most beneficial. Then, we map each group's weights to construct layerswith learned combinations of candidates from a shared parameter pool. SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
after_sent: SSNs address this by learning where and how to allocate parameters to layers. This can result in sharing parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
labels: clarity
doc_id: 2006.10598
revision_depth: 1

before_sent: SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
before_sent_with_intent: <fluency> SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
after_sent: SSNs can share parameters across layers even when they have different sizes or perform different operations , and/or operate on features from different modalities .
labels: fluency
doc_id: 2006.10598
revision_depth: 1

before_sent: SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
before_sent_with_intent: <meaning-changed> SSNs can share parameters across layers even when they have different sizes , perform different operations , and/or operate on features from different modalities .
after_sent: SSNs can share parameters across layers even when they have different sizes , perform different operations . SSNs do not require any modifications to a model's loss function or architecture, making them easy to use. Our approach can create parameter efficient networks by using a relatively small number of weights, or can improve a model's performance by adding additional model capacity during training without affecting the computational resources required at test time .
labels: meaning-changed
doc_id: 2006.10598
revision_depth: 1

before_sent: We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters .
before_sent_with_intent: <meaning-changed> We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters .
after_sent: We evaluate SSNs using seven network architectures across diverse tasks that include image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters .
labels: meaning-changed
doc_id: 2006.10598
revision_depth: 1

before_sent: We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters . We also apply SSNs to knowledge distillation, where we obtain state-of-the-art results when combined with traditional distillation methods .
before_sent_with_intent: <meaning-changed> We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters . We also apply SSNs to knowledge distillation, where we obtain state-of-the-art results when combined with traditional distillation methods .
after_sent: We evaluate our approach on a diverse set of tasks , including image classification, bidirectional image-sentence retrieval, and phrase grounding, creating high performing models even when using as little as 1\% of the parameters .
labels: meaning-changed
doc_id: 2006.10598
revision_depth: 1
