Columns:
- before_sent: string (length 13 to 1.44k)
- before_sent_with_intent: string (length 25 to 1.45k)
- after_sent: string (length 0 to 1.41k)
- labels: string class (6 values)
- doc_id: string (length 4 to 10)
- revision_depth: int64 (range 1 to 4)

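Judging from the rows below, `before_sent_with_intent` appears to be just `before_sent` prefixed with the revision-intent label in angle brackets. The helper below is a minimal illustrative sketch of that relationship (`add_intent_tag` and the sample row are this note's own constructions, not part of the dataset's tooling; the sample values are copied from one of the rows below).

```python
def add_intent_tag(before_sent: str, label: str) -> str:
    """Prefix a source sentence with its revision-intent tag."""
    return f"<{label}> {before_sent}"

# Sample row copied from the dump below (doc_id 2010.12872, depth 1).
row = {
    "before_sent": "In this paper, we question the faithfulness of such symbolic explanations .",
    "labels": "clarity",
    "doc_id": "2010.12872",
    "revision_depth": 1,
}

tagged = add_intent_tag(row["before_sent"], row["labels"])
```

Here `tagged` reproduces the row's `before_sent_with_intent` field exactly.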
before_sent: analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world ,
before_sent_with_intent: <fluency> analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world ,
after_sent: analyzes the classification coding system of attribute information , and the abstraction relation between attribute information and entities in the real world ,
labels: fluency
doc_id: 2010.12789
revision_depth: 1

before_sent: analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world ,
before_sent_with_intent: <fluency> analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world ,
after_sent: analyzes the classification coding system of attribute information and the abstraction relations between attribute information and entities in the real world ,
labels: fluency
doc_id: 2010.12789
revision_depth: 1

before_sent: analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world , constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
before_sent_with_intent: <clarity> analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world , constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
after_sent: analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world . To have a clear and better discussion, the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
labels: clarity
doc_id: 2010.12789
revision_depth: 1

before_sent: constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
before_sent_with_intent: <meaning-changed> constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
after_sent: constructs the storage model of information, and simulate the attribute information precessing process in one of the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes.
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 1

before_sent: constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes. Then, simulated the understanding process (the information processing process) on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives out the definition of understanding from a personal point of view. The study in this paper provides a practical, theoretical basis and research methods for NLU.It also can be applied in large-scale, multi-type information processing in the artificial intelligence (AI) area .
before_sent_with_intent: <clarity> constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes. Then, simulated the understanding process (the information processing process) on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives out the definition of understanding from a personal point of view. The study in this paper provides a practical, theoretical basis and research methods for NLU.It also can be applied in large-scale, multi-type information processing in the artificial intelligence (AI) area .
after_sent: constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences .. .
labels: clarity
doc_id: 2010.12789
revision_depth: 1

before_sent: First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks.
before_sent_with_intent: <meaning-changed> First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks.
after_sent: We must recognize that natural language is a way of information encoding, and it encodes not only the information but also the procedures for how information is processed. To understand natural language, the same as we conceive and design computer languages, the first step is to separate information (or data) and the processing procedures of information (or data). In natural language, some processing procedures of data are encoded directly as the structure chunk and the pointer chunk (this paper has reclassified lexical chunks into data chunks, structure chunks, and pointer chunks.
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 2

before_sent: This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks. Almost all data chunks are information sets.
before_sent_with_intent: <meaning-changed> This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks. Almost all data chunks are information sets.
after_sent: This paper reclassified lexical chunks as the data chunk, structure chunk, and the pointer chunk); some processing procedures of data imply in sentences structures; some requests of processing procedures are expressed by information senders and processed by information receivers. For the data parts, the classification encoding system of attribute information and the URLanization architecture (including constitutional structures of information sets.
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 2

before_sent: Almost all data chunks are information sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information .
before_sent_with_intent: <meaning-changed> Almost all data chunks are information sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information .
after_sent: Almost all data chunks are information sets and the hierarchy between the information sets) were discussed. In section 2, the theoretical part elaborated in section 2 has been verified in examples and proofed that the studies in this paper have achieved the goal of enabling machines to understand the information .
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 2

before_sent: The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information . Therefore the study begins with disassembling the information represented by natural language and then discovered the classification coding system of attribute information, and the abstraction relations between attribute information and entities in the real world. To have a clear and better discussion , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs.
before_sent_with_intent: <meaning-changed> The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information . Therefore the study begins with disassembling the information represented by natural language and then discovered the classification coding system of attribute information, and the abstraction relations between attribute information and entities in the real world. To have a clear and better discussion , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs.
after_sent: The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information conveyed in the dialogue. In section 4 , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs.
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 2

before_sent: To have a clear and better discussion , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences .. .
before_sent_with_intent: <meaning-changed> To have a clear and better discussion , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences .. .
after_sent: To have a clear and better discussion , the author summarizes the basic conditions of "Understanding", rethinks what "Understanding" is and how to proceed. The study in this paper provides a practical, theoretical basis and research methods for NLU. It also can be applied in large-scale and multi-type information processing in the artificial intelligence (AI) area .
labels: meaning-changed
doc_id: 2010.12789
revision_depth: 2

before_sent: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
before_sent_with_intent: <clarity> Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
after_sent: Knowledge graphs (KGs) have helped neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
before_sent_with_intent: <meaning-changed> Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems.
after_sent: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and recommender systems.
labels: meaning-changed
doc_id: 2010.12872
revision_depth: 1

before_sent: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
before_sent_with_intent: <clarity> Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
after_sent: Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and item recommendation. By using attention over the KG, such models can also " insights " to practitioners .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
before_sent_with_intent: <clarity> Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
after_sent: Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " explain " to practitioners .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
before_sent_with_intent: <meaning-changed> Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners .
after_sent: Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " which KG information was most relevant for making a given prediction .
labels: meaning-changed
doc_id: 2010.12872
revision_depth: 1

before_sent: In this paper, we question the faithfulness of such symbolic explanations .
before_sent_with_intent: <clarity> In this paper, we question the faithfulness of such symbolic explanations .
after_sent: In this paper, we question whether these models are really behaving as we expect .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
before_sent_with_intent: <clarity> We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
after_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
before_sent_with_intent: <clarity> We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
after_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
before_sent_with_intent: <meaning-changed> We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics .
after_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original KG while significantly deviating from the original semantics .
labels: meaning-changed
doc_id: 2010.12872
revision_depth: 1

before_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics . In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
before_sent_with_intent: <clarity> We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics . In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
after_sent: We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics and structure. Our findings raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
before_sent_with_intent: <clarity> These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
after_sent: These results raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations .
labels: clarity
doc_id: 2010.12872
revision_depth: 1

before_sent: Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
before_sent_with_intent: <clarity> Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
after_sent: Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
labels: clarity
doc_id: 2010.12872
revision_depth: 2

before_sent: By using attention over the KG, such models can also "explain" which KG information was most relevant for making a given prediction.
before_sent_with_intent: <meaning-changed> By using attention over the KG, such models can also "explain" which KG information was most relevant for making a given prediction.
after_sent: By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction.
labels: meaning-changed
doc_id: 2010.12872
revision_depth: 2

before_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
before_sent_with_intent: <clarity> We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
after_sent: We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
labels: clarity
doc_id: 2010.12872
revision_depth: 2

before_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
before_sent_with_intent: <fluency> We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
after_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs , which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
labels: fluency
doc_id: 2010.12872
revision_depth: 2

before_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
before_sent_with_intent: <clarity> We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
after_sent: We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure.
labels: clarity
doc_id: 2010.12872
revision_depth: 2

before_sent: Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
before_sent_with_intent: <clarity> Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
after_sent: Our findings raise doubts about KG-augmented models' ability to reason about KG information and provide plausible explanations.
labels: clarity
doc_id: 2010.12872
revision_depth: 2

before_sent: Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
before_sent_with_intent: <clarity> Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
after_sent: Our findings raise doubts about KG-augmented models' ability to leverage KG information and give sensible explanations.
labels: clarity
doc_id: 2010.12872
revision_depth: 2

before_sent: Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference .
before_sent_with_intent: <clarity> Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference .
after_sent: Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference .
labels: clarity
doc_id: 2010.12873
revision_depth: 1

before_sent: Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference .
before_sent_with_intent: <clarity> Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference .
after_sent: Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) for commonsense reasoning tasks, like question answering (QA) .
labels: clarity
doc_id: 2010.12873
revision_depth: 1

before_sent: However, these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limited expressiveness of its relations, and retrieved facts irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
before_sent_with_intent: <clarity> However, these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limited expressiveness of its relations, and retrieved facts irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
after_sent: However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge. To address these issues, we propose Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
labels: clarity
doc_id: 2010.12873
revision_depth: 1

before_sent: In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information. Our model learns a compact graph structure(comprising both extracted and generated edges) through filtering edges that are unhelpful to the reasoning process. We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
before_sent_with_intent: <meaning-changed> In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information. Our model learns a compact graph structure(comprising both extracted and generated edges) through filtering edges that are unhelpful to the reasoning process. We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
after_sent: In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), a neural-symbolic model that reasons over both extracted (human-labeled) and generated facts within the same learned graph structure. Given a KG subgraph of extracted facts, HGN is jointly trained to generate complementary facts, encode relational information in the resulting "hybrid" subgraph, and filter out task-irrelevant facts. We demonstrate HGN's ability to produce contextually pertinent subgraphs by showing considerable performance gains across four commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
labels: meaning-changed
doc_id: 2010.12873
revision_depth: 1

We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
<clarity> We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
We show marked improvement on three commonsense reasoning benchmarks and a user study of fact validness and helpfulness .
clarity
2010.12873
1
It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
<clarity> It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
It has a long history in the field of natural language processing (NLP), and recently has re-gained significant attention thanks to the promising performance brought by deep learning models.
clarity
2011.00416
2
It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
<clarity> It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models.
It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep neural models.
clarity
2011.00416
2
In this paper, we present a systematic survey of the research done on neural text style transfer .
<fluency> In this paper, we present a systematic survey of the research done on neural text style transfer .
In this paper, we present a systematic survey of the research on neural text style transfer .
fluency
2011.00416
2
In this paper, we present a systematic survey of the research done on neural text style transfer . We have collected, summarized, and discussed nearly 70 representative articles since the first neural text style transfer work in 2017.
<meaning-changed> In this paper, we present a systematic survey of the research done on neural text style transfer . We have collected, summarized, and discussed nearly 70 representative articles since the first neural text style transfer work in 2017.
In this paper, we present a systematic survey of the research done on neural text style transfer , spanning over 100 representative articles since the first neural text style transfer work in 2017.
meaning-changed
2011.00416
2
Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
<clarity> Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
We discuss the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
clarity
2011.00416
2
Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
<clarity> Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data.
Overall, we have covered the task formulation, existing datasets and subtasks, evaluation , as well as the rich methodologies in the presence of parallel and non-parallel data.
clarity
2011.00416
2
We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field .
<fluency> We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field .
We also provide discussions on a variety of important topics regarding TST, which can shed light on new development in this field .
fluency
2011.00416
2
We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field .
<clarity> We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field .
We also provide discussions a variety of important topics regarding the future development of TST .
clarity
2011.00416
2
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
<meaning-changed> Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 60 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
meaning-changed
2011.02593
1
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
<meaning-changed> Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets . Furthermore, we demonstrate how to use the token-level hallucination labels to define a fine-grained loss over the target sequence in the low-resource machine translation and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
meaning-changed
2011.02593
1
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
<clarity> Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods.
Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements over strong baseline methods.
clarity
2011.02593
1
We also release our annotated data and code for future researchat URL
<clarity> We also release our annotated data and code for future researchat URL
We will release our annotated data and code for future researchat URL
clarity
2011.02593
1
We also release our annotated data and code for future researchat URL
<coherence> We also release our annotated data and code for future researchat URL
We also release our annotated data and code to support future research.
coherence
2011.02593
1
Recently, contextualized word embeddings outperform static word embeddings on many NLP tasks. However, we still do not know much about the mechanism inside these representations. Do they have any common patterns? If so, where do these patterns come from? We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern.
<clarity> Recently, contextualized word embeddings outperform static word embeddings on many NLP tasks. However, we still do not know much about the mechanism inside these representations. Do they have any common patterns? If so, where do these patterns come from? We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern.
In this work, we demonstrate that the contextualized word vectors of BERT and RoBERTa have a commonpattern.
clarity
2011.04393
2
We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern. For BERT, the 557^{th neuron-level method to analyze where these "tails" come from.
<coherence> We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern. For BERT, the 557^{th neuron-level method to analyze where these "tails" come from.
We find that almost all the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers. Namely, we find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors that consistently bear the smallest or largest values in said vectors. In an attempt to investigate the source of this information, we introduce a neuron-level method to analyze where these "tails" come from.
coherence
2011.04393
2
For BERT, the 557^{th neuron-level method to analyze where these "tails" come from. We find that these "tails" are closely related to the positional information .
<clarity> For BERT, the 557^{th neuron-level method to analyze where these "tails" come from. We find that these "tails" are closely related to the positional information .
For BERT, the 557^{th neuron-level analysis method, which reveals that the outliers are closely related to the positional information .
clarity
2011.04393
2
We find that these "tails" are closely related to the positional information .
<clarity> We find that these "tails" are closely related to the positional information .
We find that these "tails" are closely related to information captured by positional embeddings .
clarity
2011.04393
2
We also investigate what will happen if we "cutting the tails" (zero-out). Our results show that "tails" are the major cause of anisotropy of vector space.
<meaning-changed> We also investigate what will happen if we "cutting the tails" (zero-out). Our results show that "tails" are the major cause of anisotropy of vector space.
We also pre-train the RoBERTa-base models from scratch and find that the outliers disappear without using positional embeddings. These outliers, we find, are the major cause of anisotropy of vector space.
meaning-changed
2011.04393
2
Our results show that "tails" are the major cause of anisotropy of vector space. After "cutting the tails", a word's different vectors are more similar to each other. The internal representations have a better ability to distinguish a word 's different senseswith the word-in-context (WiC) dataset. The performance on the word sense disambiguation task is better for BERT and unchanged for RoBERTa. We can also better induce phrase grammar from the vector space. These suggest that "tails" are less related to the sense and syntax information in vectors. These findings provide insights into the inner workings of contextualized word vectors .
<coherence> Our results show that "tails" are the major cause of anisotropy of vector space. After "cutting the tails", a word's different vectors are more similar to each other. The internal representations have a better ability to distinguish a word 's different senseswith the word-in-context (WiC) dataset. The performance on the word sense disambiguation task is better for BERT and unchanged for RoBERTa. We can also better induce phrase grammar from the vector space. These suggest that "tails" are less related to the sense and syntax information in vectors. These findings provide insights into the inner workings of contextualized word vectors .
Our results show that "tails" are the major cause of anisotropy of encoders' raw vector spaces, and clipping them leads to increased similarity across vectors. We demonstrate this in practice by showing that clipped vectors can more accurately distinguish word senses, as well as lead to better sentence embeddings when mean pooling. In three supervised tasks, we find that clipping does not affect the performance .
coherence
2011.04393
2
In this paper, we provide a fundamental theory for knowledge graph reasoning based on ending anchored rules.
<fluency> In this paper, we provide a fundamental theory for knowledge graph reasoning based on ending anchored rules.
In this paper, we provide a fundamental theory for knowledge graph reasoning based on the ending anchored rules.
fluency
2011.06174
1
Our theory provides precise reasons answering why or why not a triple is correct.
<clarity> Our theory provides precise reasons answering why or why not a triple is correct.
Our theory provides precise reasons explaining why or why not a triple is correct.
clarity
2011.06174
1
Then, we implement our theory by what we called the EARDict model.
<fluency> Then, we implement our theory by what we called the EARDict model.
Then, we implement our theory by what we call the EARDict model.
fluency
2011.06174
1
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
<clarity> Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
clarity
2011.06174
1
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
<clarity> Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion , including achieving a Hits@10 score of 80.38 percent on WN18RR.
clarity
2011.06174
1
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
<meaning-changed> Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 96.6 percent on WN18RR.
meaning-changed
2011.06174
1
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
<meaning-changed> Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
Results show that our EARDict model significantly outperforms all the benchmark models on three large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
meaning-changed
2011.06174
2
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
<clarity> Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion . Especially, our model achieves a Hits@10 score of 96.6 percent on WN18RR.
clarity
2011.06174
2
In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources.
<fluency> In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources.
In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources.
fluency
2011.10896
1
In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources.
<meaning-changed> In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources.
In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application host codes across heterogeneous accelerator resources.
meaning-changed
2011.10896
1
Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
<clarity> Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows for a unified control flow for the host program to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
clarity
2011.10896
1
Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
<meaning-changed> Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score . of the documentation of their work .
meaning-changed
2011.10896
1
Many online comments on social media platforms are hateful , humorous, or sarcastic.
<clarity> Many online comments on social media platforms are hateful , humorous, or sarcastic.
Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic , humorous, or sarcastic.
clarity
2011.11465
1
Many online comments on social media platforms are hateful , humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models.
<clarity> Many online comments on social media platforms are hateful , humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models.
Many online comments on social media platforms are hateful , humorous, or hateful. This sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models.
clarity
2011.11465
1
The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same.
<meaning-changed> The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same.
The sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same.
meaning-changed
2011.11465
1
A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism(Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context .
<meaning-changed> A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism(Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context .
A lot of research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm. The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information. So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism .
meaning-changed
2011.11465
1
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter).
<clarity> The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter).
The proposed phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter).
clarity
2011.11465
1
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context .
<clarity> The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context .
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
clarity
2011.11465
1
Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic , humorous, or hateful.
<clarity> Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic , humorous, or hateful.
Many online comments on social media platforms are hateful , humorous, or hateful.
clarity
2011.11465
2
Many online reviews are sarcastic , humorous, or hateful. This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone.
<coherence> Many online reviews are sarcastic , humorous, or hateful. This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone.
Many online reviews are sarcastic , humorous, or sarcastic. The sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone.
coherence
2011.11465
2
This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done in this field.
<clarity> This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done in this field.
This sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done in this field.
clarity
2011.11465
2
Several research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm . The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information .
<clarity> Several research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm . The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information .
Several research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not the intra sentence contextual information .
clarity
2011.11465
2
These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information .
<meaning-changed> These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information .
These existing modules were able to solve the problem of capturing inter sentence contextual information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context .
meaning-changed
2011.11465
2
So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism . The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
<clarity> So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism . The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words .
clarity
2011.11465
2
The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
<meaning-changed> The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context .
meaning-changed
2011.11465
2
Discovering new intents is a crucial task in a dialogue system .
<style> Discovering new intents is a crucial task in a dialogue system .
Discovering new intents is a crucial task in dialogue systems .
style
2012.08987
1
Moreover, we propose an alignment strategy to tackle the label inconsistency during clustering assignments.
<clarity> Moreover, we propose an alignment strategy to tackle the label inconsistency during clustering assignments.
Moreover, we propose an alignment strategy to tackle the label inconsistency problem during clustering assignments.
clarity
2012.08987
1
( Code available at URL
<clarity> ( Code available at URL
( The code is available at URL
clarity
2012.08987
1
Natural Language Processing (NLP) systems learn harmful societal biases that cause them to extend and proliferate inequality widely, as they are deployed in more and more situations.
<clarity> Natural Language Processing (NLP) systems learn harmful societal biases that cause them to extend and proliferate inequality widely, as they are deployed in more and more situations.
Natural Language Processing (NLP) systems learn harmful societal biases that cause them to widely proliferate inequality as they are deployed in more and more situations.
clarity
2012.15859
1
To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing.
<clarity> To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing.
To address and combat this, the NLP community relies on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing.
clarity
2012.15859
1
To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing.
<coherence> To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing.
To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models and to guide efforts at debiasing.
coherence
2012.15859
1
This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias) .
<clarity> This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias) .
This research examines whether easy-to-measure intrinsic metrics correlate well to extrinsic metrics (which reflect real world bias) .
clarity
2012.15859
1
This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias) .
<clarity> This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias) .
This research examines whether intrinsic metrics (which are easy to measure) correlate well to real world extrinsic metrics .
clarity
2012.15859
1
We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in more than extremely specific settings .
<clarity> We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in more than extremely specific settings .
We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in all scenarios across tasks and languages .
clarity
2012.15859
1
We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier .
<clarity> We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier .
We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community increase effort into making downstream measurement simpler and easier .
clarity
2012.15859
1
We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier .
<meaning-changed> We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier .
We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement more feasible via creation of additional challenge sets and annotated test data. We additionally release code, a new intrinsic metric, and an annotated test set for gender bias for hatespeech .
meaning-changed
2012.15859
1
However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center.
<clarity> However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center.
However, their memory footprint, inference latency, and power consumption are prohibitive efficient inference at the edge, and even at the data center.
clarity
2101.01321
2
Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0 x for INT8 inference on a T4 GPU system as compared to FP32 inference.
<fluency> Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0 x for INT8 inference on a T4 GPU system as compared to FP32 inference.
Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 -4.0 x for INT8 inference on a T4 GPU system as compared to FP32 inference.
fluency
2101.01321
2
We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
<coherence> We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
In contrast, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
coherence
2102.07594
1
The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the speech.
<clarity> The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the speech.
The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the speech.
clarity
2102.07594
1
The encoder extracts high-level representations from the speech. The PDS uses positional encodings corresponding to tokens to convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
<clarity> The encoder extracts high-level representations from the speech. The PDS uses positional encodings corresponding to tokens to convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
The encoder extracts high-level representations from the whole speech signal rather than autoregressive modeling on tokens. Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
clarity
2102.07594
1
Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
<meaning-changed> Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
Further, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model .
meaning-changed
2102.07594
1
In contrast, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
<meaning-changed> In contrast, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
meaning-changed
2102.07594
2
The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speech signal rather than autoregressive modeling on tokens .
<meaning-changed> The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speech signal rather than autoregressive modeling on tokens .
The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the whole speech signal rather than autoregressive modeling on tokens .
meaning-changed
2102.07594
2
Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speech signal rather than autoregressive modeling on tokens . Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics.
<meaning-changed> Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speech signal rather than autoregressive modeling on tokens . Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics.
Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the speech. The PDS uses positional encodings corresponding to tokens to convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics.
meaning-changed
2102.07594
2
Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model .
<coherence> Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model .
Moreover, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
coherence
2102.07594
2
In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy with soft inductive biases in place of hard token boundaries .
<meaning-changed> In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy with soft inductive biases in place of hard token boundaries .
In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias .
meaning-changed
2103.06874
2
CANINE outperforms a comparable mBERT model by >= 1 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\% fewer model parameters.
<meaning-changed> CANINE outperforms a comparable mBERT model by >= 1 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\% fewer model parameters.
CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\% fewer model parameters.
meaning-changed
2103.06874
2