Columns:
  doc_id           string (length 4 to 10)
  revision_depth   int64 (values 1 to 4)
  before_revision  string (length 135 to 9.03k)
  after_revision   string (length 144 to 8.89k)
  edit_actions     list
  sents_char_pos   sequence
  domain           string (3 classes)
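Each row's edit_actions list encodes character-offset edits (type D = delete, A = add, R = replace) whose start_char_pos/end_char_pos appear to index into before_revision. Assuming that convention, after_revision can be reconstructed by applying the edits right-to-left so earlier offsets stay valid. This is a sketch with a synthetic example, not a row from the dataset:

```python
def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Rebuild the revised text from character-offset edit actions.

    Offsets index into `before`, so edits are applied in descending
    start_char_pos order to keep earlier offsets valid. For deletions
    (type "D") the "after" field is null/None, so it maps to "".
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

# Tiny synthetic example (invented offsets, not real dataset values):
before = "We test three rituals across three groups."
edits = [
    {"type": "R", "before": "three rituals", "after": "six rituals",
     "start_char_pos": 8, "end_char_pos": 21},
    {"type": "R", "before": "three groups", "after": "two groups",
     "start_char_pos": 29, "end_char_pos": 41},
]
print(apply_edit_actions(before, edits))  # We test six rituals across two groups.
```

Whether real rows round-trip exactly depends on whitespace handling around replaced spans, so treat this as an approximation to validate against a few rows first.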
doc_id: 2009.05426
revision_depth: 1
before_revision:
The second edition of "Semantic Relations Between Nominals" ( by Vivi Nastase, Stan Szpakowicz, Preslav Nakov and Diarmuid 'O S'eaghdha ) will be published by URLan & Claypool . A new Chapter 5 of the book discusses relation classification/extraction in the deep-learning paradigm which arose after the first edition appeared. This is a preview of Chapter 5, made public by the kind permission of URLan & Claypool.
The second edition of "Semantic Relations Between Nominals" by Vivi Nastase, Stan Szpakowicz, Preslav Nakov and Diarmuid 'O S'eaghdha will be published early in 2021 by URLan & Claypool URL A new Chapter 5 of the book , by Vivi Nastase and Stan Szpakowicz, discusses relation classification/extraction in the deep-learning paradigm which arose after the first edition appeared. This is Chapter 5, made public by the kind permission of URLan & Claypool.
[ { "type": "D", "before": "(", "after": null, "start_char_pos": 60, "end_char_pos": 61, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": ")", "after": null, "start_char_pos": 136, "end_char_pos": 137, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "early in 2021", "start_char_pos": 156, "end_char_pos": 156, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ".", "after": "URL", "start_char_pos": 177, "end_char_pos": 178, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ", by Vivi Nastase and Stan Szpakowicz,", "start_char_pos": 207, "end_char_pos": 207, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "a preview of", "after": null, "start_char_pos": 337, "end_char_pos": 349, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] } ]
sents_char_pos: [ 0, 178, 328 ]
domain: arxiv
doc_id: 2009.05664
revision_depth: 1
before_revision:
Existing commonsense reasoning datasets for AI and NLP tasks fail to address an important aspect of human life: cultural differences. In this work, we introduce an approach that extends prior work on crowdsourcing commonsense knowledge by incorporating differences in knowledge that are attributable to cultural or national groups. We demonstrate the technique by collecting commonsense knowledge that surrounds three fairly universal rituals---coming-of-age, marriage, and funerals---across three different national groups: the United States , India, and the Philippines. Our pilot study expands the different types of relationships identified by existing work in the field of commonsense reasoning for commonplace events, and uses these new types to gather information that distinguishes the knowledge of the different groups. It also moves us a step closer towards building a machine that doesn't assume a rigid framework of universal (and likely Western-biased) commonsense knowledge, but rather has the ability to reason in a contextually and culturally sensitive way. Our hope is that cultural knowledge of this sort will lead to more human-like performance in NLP tasks such as question answering (QA) and text understanding and generation.
Existing commonsense reasoning datasets for AI and NLP tasks fail to address an important aspect of human life: cultural differences. In this work, we introduce an approach that extends prior work on crowdsourcing commonsense knowledge by incorporating differences in knowledge that are attributable to cultural or national groups. We demonstrate the technique by collecting commonsense knowledge that surrounds six fairly universal rituals---birth, coming-of-age, marriage, funerals, new year, and birthdays---across two national groups: the United States and India. Our study expands the different types of relationships identified by existing work in the field of commonsense reasoning for commonplace events, and uses these new types to gather information that distinguishes the knowledge of the different groups. It also moves us a step closer towards building a machine that doesn't assume a rigid framework of universal (and likely Western-biased) commonsense knowledge, but rather has the ability to reason in a contextually and culturally sensitive way. Our hope is that cultural knowledge of this sort will lead to more human-like performance in NLP tasks such as question answering (QA) and text understanding and generation.
[ { "type": "R", "before": "three fairly universal rituals---coming-of-age, marriage, and funerals---across three different", "after": "six fairly universal rituals---birth, coming-of-age, marriage, funerals, new year, and birthdays---across two", "start_char_pos": 412, "end_char_pos": 507, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ", India, and the Philippines. Our pilot", "after": "and India. Our", "start_char_pos": 543, "end_char_pos": 582, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
sents_char_pos: [ 0, 133, 331, 572, 828, 1073 ]
domain: null
doc_id: 2009.08553
revision_depth: 1
before_revision:
Conventional sparse retrieval methods such as TF-IDF and BM25 are simple and efficient, but solely rely on lexical overlap and fail to conduct semantic matching. Recent dense retrieval methods learn latent representations to tackle the lexical mismatch problem, while being more computationally expensive and sometimes insufficient for exact matching as they embed the entire text sequence into a single vector with limited capacity. In this paper, we present Generation-Augmented Retrieval (GAR), a query expansion method that augments a query with relevant contexts through text generation. We demonstrate on open-domain question answering (QA) that the generated contexts significantly enrich the semantics of the queries and thus GAR with sparse representations (BM25) achieves comparable or better performance than the current state-of-the-art dense method DPR cite{karpukhin2020dense}. We show that generating various contexts of a query is beneficial as fusing their results consistently yields a better retrieval accuracy. Moreover, GAR achieves the state-of-the-art performance of extractive QA on the Natural Questions and TriviaQA datasets when equipped with an extractive reader .
Conventional sparse retrieval methods such as TF-IDF and BM25 are simple and efficient, but solely rely on lexical overlap without semantic matching. Recent dense retrieval methods learn latent representations to tackle the lexical mismatch problem, while being more computationally expensive and insufficient for exact matching as they embed the text sequence into a single vector with limited capacity. In this paper, we present Generation-Augmented Retrieval (GAR), a query expansion method that augments a query with relevant contexts through text generation. We demonstrate on open-domain question answering that the generated contexts significantly enrich the semantics of the queries and thus GAR with sparse representations (BM25) achieves comparable or better performance than the state-of-the-art dense methods such as DPR cite{karpukhin2020dense}. We show that generating various contexts of a query is beneficial as fusing their results consistently yields better retrieval accuracy. Moreover, as sparse and dense representations are often complementary, GAR can be easily combined with DPR to achieve even better performance. Furthermore, GAR achieves the state-of-the-art performance on the Natural Questions and TriviaQA datasets under the extractive setting when equipped with an extractive reader , and consistently outperforms other retrieval methods when the same generative reader is used .
[ { "type": "R", "before": "and fail to conduct", "after": "without", "start_char_pos": 123, "end_char_pos": 142, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "sometimes", "after": null, "start_char_pos": 309, "end_char_pos": 318, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "D", "before": "entire", "after": null, "start_char_pos": 369, "end_char_pos": 375, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "(QA)", "after": null, "start_char_pos": 642, "end_char_pos": 646, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "current", "after": null, "start_char_pos": 824, "end_char_pos": 831, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "method", "after": "methods such as", "start_char_pos": 855, "end_char_pos": 861, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "a", "after": null, "start_char_pos": 1002, "end_char_pos": 1003, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "GAR", "after": "as sparse and dense representations are often complementary, GAR can be easily combined with DPR to achieve even better performance. 
Furthermore, GAR", "start_char_pos": 1041, "end_char_pos": 1044, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "of extractive QA", "after": null, "start_char_pos": 1087, "end_char_pos": 1103, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "under the extractive setting", "start_char_pos": 1151, "end_char_pos": 1151, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ", and consistently outperforms other retrieval methods when the same generative reader is used", "start_char_pos": 1192, "end_char_pos": 1192, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
sents_char_pos: [ 0, 161, 433, 592, 891, 1030 ]
domain: arxiv
doc_id: 2009.09704
revision_depth: 2
before_revision:
An end-to-end speech-to-text translation (ST) takes audio in a source language and outputs the text in a target language. Inspired by neuroscience, humans have perception systems and cognitive systems to process different information , we propose LUT, Listen-Understand-Translate, a unified framework with triple supervision to decouple the end-to-end speech-to-text translation task. In addition to the target language sentence translation loss, LUT includes two auxiliary supervising signals to guide the acoustic encoder to extracts acoustic features from the input, and the semantic encoder to extract semantic features relevant to the source transcription text. We do experiments on English-French, English-German and English-Chinese speech translation benchmarks and the results demonstrate the reasonability of LUT . Our code is available at URL
An end-to-end speech-to-text translation (ST) takes audio in a source language and outputs the text in a target language. Existing methods are limited by the amount of parallel corpus. Can we build a system to fully utilize signals in a parallel ST corpus? We are inspired by human understanding system which is composed of auditory perception and cognitive processing. In this paper , we propose Listen-Understand-Translate, (LUT), a unified framework with triple supervision signals to decouple the end-to-end speech-to-text translation task. LUT is able to guide the acoustic encoder to extract as much information from the auditory input. In addition, LUT utilizes a pre-trained BERT model to enforce the upper encoder to produce as much semantic information as possible, without extra data. We perform experiments on a diverse set of speech translation benchmarks, including Librispeech English-French, IWSLT English-German and TED English-Chinese . Our results demonstrate LUT achieves the state-of-the-art performance, outperforming previous methods. The code is available at URL
[ { "type": "R", "before": "Inspired by neuroscience, humans have perception systems and cognitive systems to process different information", "after": "Existing methods are limited by the amount of parallel corpus. Can we build a system to fully utilize signals in a parallel ST corpus? We are inspired by human understanding system which is composed of auditory perception and cognitive processing. In this paper", "start_char_pos": 122, "end_char_pos": 233, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "LUT,", "after": null, "start_char_pos": 247, "end_char_pos": 251, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "(LUT),", "start_char_pos": 281, "end_char_pos": 281, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "signals", "start_char_pos": 326, "end_char_pos": 326, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "In addition to the target language sentence translation loss, LUT includes two auxiliary supervising signals", "after": "LUT is able", "start_char_pos": 387, "end_char_pos": 495, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "extracts acoustic features from the input, and the semantic encoder to extract semantic features relevant to the source transcription text. We do experiments on", "after": "extract as much information from the auditory input. In addition, LUT utilizes a pre-trained BERT model to enforce the upper encoder to produce as much semantic information as possible, without extra data. 
We perform experiments on a diverse set of speech translation benchmarks, including Librispeech", "start_char_pos": 529, "end_char_pos": 689, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "IWSLT", "start_char_pos": 706, "end_char_pos": 706, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "TED", "start_char_pos": 726, "end_char_pos": 726, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "speech translation benchmarks and the results demonstrate the reasonability of LUT . Our", "after": ". Our results demonstrate LUT achieves the state-of-the-art performance, outperforming previous methods. The", "start_char_pos": 743, "end_char_pos": 831, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] } ]
sents_char_pos: [ 0, 121, 386, 668 ]
domain: arxiv
doc_id: 2009.09737
revision_depth: 2
before_revision:
Speech-to-text translation (ST), which directly translates the source language speech to the target language text, has attracted intensive attention recently. However, the combination of speech recognition and machine translation in a single model poses a heavy burden on the direct cross-modal cross-lingual mapping. To reduce the learning difficulty, we propose COnSecutive Transcription and Translation (COSTT), an integral framework for speech-to-text translation. Our method is verified on three mainstream datasets, including Augmented LibriSpeech English-French dataset, TED English-German dataset, and TED English-Chinese dataset. Experiments show that our proposed COSTT outperforms the previous state-of-the-art methods. Our code is available at URL
Speech-to-text translation (ST), which directly translates the source language speech to the target language text, has attracted intensive attention recently. However, the combination of speech recognition and machine translation in a single model poses a heavy burden on the direct cross-modal cross-lingual mapping. To reduce the learning difficulty, we propose COnSecutive Transcription and Translation (COSTT), an integral approach for speech-to-text translation. The key idea is to generate source transcript and target translation text with a single decoder. It benefits the model training so that additional large parallel text corpus can be fully exploited to enhance the speech translation training. Our method is verified on three mainstream datasets, including Augmented LibriSpeech English-French dataset, TED English-German dataset, and TED English-Chinese dataset. Experiments show that our proposed COSTT outperforms the previous state-of-the-art methods. The code is available at URL
[ { "type": "R", "before": "framework", "after": "approach", "start_char_pos": 427, "end_char_pos": 436, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "The key idea is to generate source transcript and target translation text with a single decoder. It benefits the model training so that additional large parallel text corpus can be fully exploited to enhance the speech translation training.", "start_char_pos": 469, "end_char_pos": 469, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Our", "after": "The", "start_char_pos": 732, "end_char_pos": 735, "major_intent": "style", "raw_intents": [ "style", "clarity", "fluency" ] } ]
sents_char_pos: [ 0, 158, 317, 468, 639, 731 ]
domain: arxiv
doc_id: 2009.11616
revision_depth: 1
before_revision:
We introduce texttt N-LTP , an open-source Python Chinese natural language processing toolkit supporting five basic tasks: Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, and semantic dependency parsing. texttt N-LTP adopts the multi-task framework with the pre-trained model to capture the shared knowledge across all Chinese relevant tasks. In addition, we propose to use knowledge distillation where single-task models teach a multi-task model, helping the multi-task model surpass its single-task teachers. Finally, we provide fundamental tasks API and a visualization tool to make users easier to use and view the processing results directly. To the best of our knowledge, this is the first toolkit to support all Chinese NLP fundamental tasks. Source code, documentation, and pre-trained models are available at URL
We introduce N-LTP , an open-source Python Chinese natural language processing toolkit supporting five basic tasks: Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, and semantic dependency parsing. N-LTP adopts the multi-task framework with the pre-trained model to capture the shared knowledge across all Chinese relevant tasks. In addition, we propose to use knowledge distillation where single-task models teach a multi-task model, helping the multi-task model surpass its single-task teachers. Finally, we provide fundamental tasks API and a visualization tool to make users easier to use and view the processing results directly. To the best of our knowledge, this is the first toolkit to support all Chinese NLP fundamental tasks. Source code, documentation, and pre-trained models are available at URL
[ { "type": "D", "before": "texttt", "after": null, "start_char_pos": 13, "end_char_pos": 19, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "texttt", "after": null, "start_char_pos": 253, "end_char_pos": 259, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
sents_char_pos: [ 0, 391, 559, 696, 798 ]
domain: arxiv
doc_id: 2009.13267
revision_depth: 1
before_revision:
The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-based trained NMT support the desired distribution -- there are samples with much higher BLEU score comparing to the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which is resulted in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR). Our EBR consistently improves the performance of the Transformer-based NMT: +3 BLEU points on Sinhala-English and +2.0 BLEU points on IWSLT'17 French-English tasks.
The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-based trained NMT support the desired distribution -- there are samples with much higher BLEU score comparing to the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which is resulted in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR). Our EBR consistently improves the performance of the Transformer-based NMT: +3 BLEU points on Sinhala-English , +2.0 BLEU points on IWSLT'17 French-English , and +1.7 BLEU points on WMT'19 German-English tasks.
[ { "type": "R", "before": "and", "after": ",", "start_char_pos": 1104, "end_char_pos": 1107, "major_intent": "coherence", "raw_intents": [ "fluency", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": ", and +1.7 BLEU points on WMT'19 German-English", "start_char_pos": 1152, "end_char_pos": 1152, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
sents_char_pos: [ 0, 243, 265, 284, 302, 432, 687, 993 ]
domain: arxiv
doc_id: 2009.13267
revision_depth: 2
before_revision:
The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-based trained NMT support the desired distribution -- there are samples with much higher BLEU score comparing to the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which is resulted in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR). Our EBR consistently improves the performance of the Transformer-based NMT: + 3 BLEU points on Sinhala-English , + 2.0 BLEU points on IWSLT'17 French-English, and + 1.7 BLEU points on WMT' 19 German-English tasks.
The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability. Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-based trained NMT support the desired distribution -- there are samples with much higher BLEU score comparing to the beam decoding output. To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which is resulted in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR). We use both marginal energy models (over target sentence) and joint energy models (over both source and target sentences). Our EBR with the joint energy model consistently improves the performance of the Transformer-based NMT: + 4 BLEU points on IWSLT'14 German-English , + 3.0 BELU points on Sinhala-English, + 1.2 BLEU on WMT' 16 English-German tasks.
[ { "type": "R", "before": "Our EBR", "after": "We use both marginal energy models (over target sentence) and joint energy models (over both source and target sentences). Our EBR with the joint energy model", "start_char_pos": 994, "end_char_pos": 1001, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "3", "after": "4", "start_char_pos": 1072, "end_char_pos": 1073, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Sinhala-English", "after": "IWSLT'14 German-English", "start_char_pos": 1089, "end_char_pos": 1104, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "2.0 BLEU points on IWSLT'17 French-English, and", "after": "3.0 BELU points on Sinhala-English,", "start_char_pos": 1109, "end_char_pos": 1156, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "1.7 BLEU points", "after": "1.2 BLEU", "start_char_pos": 1159, "end_char_pos": 1174, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "19 German-English", "after": "16 English-German", "start_char_pos": 1183, "end_char_pos": 1200, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "fluency" ] } ]
sents_char_pos: [ 0, 243, 265, 284, 302, 432, 687, 993 ]
domain: arxiv
doc_id: 2010.01061
revision_depth: 1
before_revision:
For natural language processing (NLP) taskssuch as sentiment or topic classification, currently prevailing approaches heavily rely on pretraining large self-supervised models on massive external data resources. However, this methodology is being critiqued for: exceptional compute and pretraining data requirements ; diminishing returns on both large and small datasets; and importantly, favourable evaluation settings that overestimate performance differences. The core belief behind current methodology, coined `the bitter lesson' by R. Sutton, is that `compute scale-up beats data and compute-efficient algorithms', neglecting that progress in compute hardware scale-up is based almost entirely on the miniaturisation of resource consumption. We thus approach pretrainingfrom a miniaturisation perspective, such as not to require massive external data sources and models, or learned translations from continuous input embeddings to discrete labels. To minimise overly favourable evaluation, we examine learning on a long-tailed, low-resource, multi-label text classification dataset with noisy, highly sparse labels and many rare concepts. To this end, we propose a novel `dataset-internal' contrastive autoencoding approach to self-supervised pretraining and demonstrate marked improvements in zero-shot, few-shot and solely supervised learning performance; even under an unfavorable low-resource scenario, and without defaulting to large-scale external datasets for self-supervision. We also find empirical evidence that zero and few-shot learning markedly benefit from adding more ` dataset-internal' , self-supervised training signals, which is of practical importance when retrieving or computing on large external sources of such signals is infeasible .
For natural language processing 'text-to-text' tasks, the prevailing approaches heavily rely on pretraining large self-supervised models on massive external data sources, which incurs exceptional pretraining data requirements and a diminished ability to pretrain over small datasets. However, fundamental pretraining method capabilities like few to zero-shot learning or preserving minority concept (long-tail) prediction performance along with accordingly designed evaluation scenarios remain open challenges. We thus propose Contrastive Label-Embedding Self-Supervision (CLESS) pretraining, which enables pretraining from multiple magnitudes smaller, 'task internal' data only, while still strongly improving fully supervised, long-tail, few-shot and self-supervised zero-shot learning abilities. Accordingly, we analyse improvements in learning dynamics over baselines on a challenging long-tailed, low-resource, multi-label text classification scenario with noisy, highly sparse labels and many minority concepts. We find that long-tailed zero and few-shot learning markedly benefit from increasing ' dataset-internal' self-supervised pretraining signals, to help reduce the reliance on large external sources .
[ { "type": "R", "before": "(NLP) taskssuch as sentiment or topic classification, currently", "after": "'text-to-text' tasks, the", "start_char_pos": 32, "end_char_pos": 95, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "resources. However, this methodology is being critiqued for: exceptional compute and", "after": "sources, which incurs exceptional", "start_char_pos": 200, "end_char_pos": 284, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "; diminishing returns on both large and small datasets; and importantly, favourable evaluation settings that overestimate performance differences. The core belief behind current methodology, coined `the bitter lesson' by R. Sutton, is that `compute scale-up beats data and compute-efficient algorithms', neglecting that progress in compute hardware scale-up is based almost entirely on the miniaturisation of resource consumption. We thus approach pretrainingfrom a miniaturisation perspective, such as not to require massive external data sources and models, or learned translations from continuous input embeddings to discrete labels. To minimise overly favourable evaluation, we examine learning on a", "after": "and a diminished ability to pretrain over small datasets. However, fundamental pretraining method capabilities like few to zero-shot learning or preserving minority concept (long-tail) prediction performance along with accordingly designed evaluation scenarios remain open challenges. We thus propose Contrastive Label-Embedding Self-Supervision (CLESS) pretraining, which enables pretraining from multiple magnitudes smaller, 'task internal' data only, while still strongly improving fully supervised, long-tail, few-shot and self-supervised zero-shot learning abilities. 
Accordingly, we analyse improvements in learning dynamics over baselines on a challenging", "start_char_pos": 315, "end_char_pos": 1018, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "dataset", "after": "scenario", "start_char_pos": 1078, "end_char_pos": 1085, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "rare concepts. To this end, we propose a novel `dataset-internal' contrastive autoencoding approach to self-supervised pretraining and demonstrate marked improvements in zero-shot, few-shot and solely supervised learning performance; even under an unfavorable low-resource scenario, and without defaulting to large-scale external datasets for self-supervision. We also find empirical evidence that", "after": "minority concepts. We find that long-tailed", "start_char_pos": 1128, "end_char_pos": 1525, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "adding more `", "after": "increasing '", "start_char_pos": 1575, "end_char_pos": 1588, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1607, "end_char_pos": 1608, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "training signals, which is of practical importance when retrieving or computing", "after": "pretraining signals, to help reduce the reliance", "start_char_pos": 1625, "end_char_pos": 1704, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "of such signals is infeasible", "after": null, "start_char_pos": 1731, "end_char_pos": 1760, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 210, 316, 370, 461, 745, 951, 1142, 1361, 1488 ]
arxiv
2010.01061
2
For natural language processing ' text-to-text' tasks, the prevailing approaches heavily rely on pretraining large self-supervised models on massive external datasources, which incurs exceptional pretraining data requirements and a diminished ability to pretrain over small datasets. However, fundamental pretraining method capabilities like few to zero-shot learningor preserving minority concept ( long-tail ) prediction performance along with accordingly designed evaluation scenarios remain open challenges . We thus propose Contrastive Label-Embedding Self-Supervision (CLESS) pretraining , which enables pretraining from multiple magnitudes smaller, 'task internal' data only, while still strongly improving fully supervised, long-tail , few-shot and self-supervised zero-shot learning abilities. Accordingly, we analyse improvements in learning dynamics over baselines on a challenging long-tailed, low-resource, multi-label text classification scenario with noisy, highly sparse labels and many minority concepts . We find that long-tailed zero and few-shot learning markedly benefit from increasing 'dataset-internal' self-supervised pretraining signals, to help reduce the reliance on large external sources .
For natural language processing ` text-to-text' tasks, the prevailing approaches heavily rely on pretraining large self-supervised models on increasingly larger `task-external' data. Transfer learning from high-resource pretraining works well, but research has focused on settings with very large data and compute requirements, while the potential of efficient low-resource learning, without large `task-external' pretraining, remains under-explored. In this work, we evaluate against three core challenges for resource efficient learning. Namely, we analyze: (1) pretraining data (X) efficiency; (2) zero to few-shot label (Y) efficiency; and (3) long-tail generalization, since long-tail preservation has been linked to algorithmic fairness and because data in the tail is limited by definition. To address these challenges, we propose a data and compute efficient self-supervised, contrastive text encoder, pretrained on 60MB of `task-internal' text data, and compare it to RoBERTa, which was pretrained on 160GB of `task-external' text . We find our method outperforms RoBERTa, while pretraining and fine-tuning in a 1/5th of RoBERTa's fine-tuning time .
[ { "type": "R", "before": "'", "after": "`", "start_char_pos": 32, "end_char_pos": 33, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "massive external datasources, which incurs exceptional pretraining", "after": "increasingly larger `task-external' data. Transfer learning from high-resource pretraining works well, but research has focused on settings with very large data and compute requirements, while the potential of efficient low-resource learning, without large `task-external' pretraining, remains under-explored. In this work, we evaluate against three core challenges for resource efficient learning. Namely, we analyze: (1) pretraining", "start_char_pos": 141, "end_char_pos": 207, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "requirements and a diminished ability to pretrain over small datasets. However, fundamental pretraining method capabilities like few to zero-shot learningor preserving minority concept (", "after": "(X) efficiency; (2) zero to few-shot label (Y) efficiency; and (3)", "start_char_pos": 213, "end_char_pos": 399, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": ") prediction performance along with accordingly designed evaluation scenarios remain open challenges . We thus propose Contrastive Label-Embedding Self-Supervision (CLESS) pretraining , which enables pretraining from multiple magnitudes smaller, 'task internal' data only, while still strongly improving fully supervised,", "after": "generalization, since", "start_char_pos": 410, "end_char_pos": 731, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ", few-shot and self-supervised zero-shot learning abilities. 
Accordingly, we analyse improvements in learning dynamics over baselines on a challenging long-tailed, low-resource, multi-label text classification scenario with noisy, highly sparse labels and many minority concepts", "after": "preservation has been linked to algorithmic fairness and because data in the tail is limited by definition. To address these challenges, we propose a data and compute efficient self-supervised, contrastive text encoder, pretrained on 60MB of `task-internal' text data, and compare it to RoBERTa, which was pretrained on 160GB of `task-external' text", "start_char_pos": 742, "end_char_pos": 1020, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "that long-tailed zero and few-shot learning markedly benefit from increasing 'dataset-internal' self-supervised pretraining signals, to help reduce the reliance on large external sources", "after": "our method outperforms RoBERTa, while pretraining and fine-tuning in a 1/5th of RoBERTa's fine-tuning time", "start_char_pos": 1031, "end_char_pos": 1217, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] } ]
[ 0, 283, 512, 802, 1022 ]
arxiv
2010.12008
1
We propose a simple method to generate large amounts of multilingual question and answer pairs by a single generative model. These synthetic samples are then applied to augment the available gold multilingual ones to improve the performance of multilingual QA models on target languages. Our approach only requires existence of automatically translated samples from Englishto the target domain , thus removing the need for human annotations in the target languages . Experimental results show our proposed approach achieves significant gains in a number of multilingual datasets .
We propose a simple method to generate multilingual question and answer pairs on a large scale through the use of a single generative model. These synthetic samples can be used to improve the zero-shot performance of multilingual QA models on target languages. Our proposed multi-task training of the generative model only requires the training samples in English , thus removing the need for labeled samples in the target languages , making it applicable to far more languages than those with labeled data . Experimental results show our proposed approach achieves significant gains on several multilingual QA benchmarks, reducing the gap between zero-shot and supervised performance of QA models on various languages .
[ { "type": "D", "before": "large amounts of", "after": null, "start_char_pos": 39, "end_char_pos": 55, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "by a", "after": "on a large scale through the use of a", "start_char_pos": 95, "end_char_pos": 99, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "are then applied to augment the available gold multilingual ones to improve the", "after": "can be used to improve the zero-shot", "start_char_pos": 149, "end_char_pos": 228, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "approach only requires existence of automatically translated samples from Englishto the target domain", "after": "proposed multi-task training of the generative model only requires the training samples in English", "start_char_pos": 292, "end_char_pos": 393, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "human annotations", "after": "labeled samples", "start_char_pos": 423, "end_char_pos": 440, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": ", making it applicable to far more languages than those with labeled data", "start_char_pos": 465, "end_char_pos": 465, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in a number of multilingual datasets", "after": "on several multilingual QA benchmarks, reducing the gap between zero-shot and supervised performance of QA models on various languages", "start_char_pos": 543, "end_char_pos": 579, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] } ]
[ 0, 124, 287 ]
arxiv
2010.12008
2
We propose a simple method to generate multilingual question and answer pairs on a large scale through the use of a single generative model. These synthetic samples can be used to improve the zero-shot performance of multilingual QA models on target languages. Our proposed multi-task training of the generative model only requires the training samples in English, thus removing the need for labeled samples in the target languages, making it applicable to far more languages than those with labeled data. Experimental results show our proposed approach achieves significant gains on several multilingual QA benchmarks , reducing the gap between zero-shot and supervised performance of QA models on various languages.
We propose a simple method to generate multilingual question and answer pairs on a large scale through the use of a single generative model. These synthetic samples can be used to improve the zero-shot performance of multilingual QA models on target languages. Our proposed multi-task training of the generative model only requires the labeled training samples in English, thus removing the need for such samples in the target languages, making it applicable to far more languages than those with labeled data. Human evaluations indicate the majority of such samples are grammatically correct and sensible. Experimental results show our proposed approach can achieve large gains on the XQuAD dataset , reducing the gap between zero-shot and supervised performance of smaller QA models on various languages.
[ { "type": "A", "before": null, "after": "labeled", "start_char_pos": 336, "end_char_pos": 336, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": "labeled", "after": "such", "start_char_pos": 393, "end_char_pos": 400, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "Human evaluations indicate the majority of such samples are grammatically correct and sensible.", "start_char_pos": 507, "end_char_pos": 507, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "achieves significant gains on several multilingual QA benchmarks", "after": "can achieve large gains on the XQuAD dataset", "start_char_pos": 556, "end_char_pos": 620, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "smaller", "start_char_pos": 688, "end_char_pos": 688, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 140, 260, 506 ]
arxiv
2010.12789
1
Natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information. This paper disassembles the information represented by natural language , analyzes the classification coding system of attribute information and the abstraction relation between attribute information and entities in the real world , constructs the storage model of information, and simulate the attribute information precessing process in one of the attribute spaces, interprets how the relations which represented by "Be", "Of", "Have", and so on are embodied in the information storage data structures and the corresponding data reading modes , reclassifies the sentences types from the perspective of task types and data reading modes. Then, simulated the understanding process (the information processing process) on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives out the definition of understanding from a personal point of view. The study in this paper provides a practical, theoretical basis and research methods for NLU.It also can be applied in large-scale, multi-type information processing in the artificial intelligence (AI) area .
First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks. Almost all data chunks are information sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information. Therefore the study begins with disassembling the information represented by natural language and then discovered the classification coding system of attribute information , and the abstraction relations between attribute information and entities in the real world . To have a clear and better discussion, the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences .. .
[ { "type": "R", "before": "Natural", "after": "First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks. Almost all data chunks are information sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural", "start_char_pos": 0, "end_char_pos": 7, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "This paper disassembles the", "after": "Therefore the study begins with disassembling the", "start_char_pos": 124, "end_char_pos": 151, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ", analyzes", "after": "and then discovered", "start_char_pos": 196, "end_char_pos": 206, "major_intent": "style", "raw_intents": [ "style", "style", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 265, "end_char_pos": 265, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "relation", "after": "relations", "start_char_pos": 286, "end_char_pos": 294, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "style" ] }, { "type": "R", "before": ", constructs the storage model of information, and simulate the attribute information precessing process in one of", "after": ". 
To have a clear and better discussion,", "start_char_pos": 356, "end_char_pos": 470, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "attribute spaces, interprets how the relations which represented by \"Be\", \"Of\", \"Have\", and so on are embodied in the information storage data structures and the corresponding", "after": "author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above", "start_char_pos": 475, "end_char_pos": 650, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": ", reclassifies the sentences types from the perspective of task types and data reading modes. Then, simulated the understanding process (the information processing process) on a dialogue example. Finally, the author summarizes the basic conditions of understanding and gives out the definition of understanding from a personal point of view. The study in this paper provides a practical, theoretical basis and research methods for NLU.It also can be applied in large-scale, multi-type information processing in the artificial intelligence (AI) area", "after": "can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences ..", "start_char_pos": 670, "end_char_pos": 1218, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 123, 197, 357, 763, 865, 1011, 1105 ]
arxiv
2010.12789
2
First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper reclassified lexical chunks into data chunks, structure chunks, and pointer chunks. Almost all data chunks are information sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the information . Therefore the study begins with disassembling the information represented by natural language and then discovered the classification coding system of attribute information, and the abstraction relations between attribute information and entities in the real world. To have a clear and better discussion , the author constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. Sentences output by the above data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences .. .
We must recognize that natural language is a way of information encoding, and it encodes not only the information but also the procedures for how information is processed. To understand natural language, the same as we conceive and design computer languages, the first step is to separate information (or data) and the processing procedures of information (or data). In natural language, some processing procedures of data are encoded directly as the structure chunk and the pointer chunk (this paper has reclassified lexical chunks as the data chunk, structure chunk, and the pointer chunk); some processing procedures of data imply in sentences structures; some requests of processing procedures are expressed by information senders and processed by information receivers. For the data parts, the classification encoding system of attribute information and the URLanization architecture (including constitutional structures of information sets and the hierarchy between the information sets) were discussed. In section 2, the theoretical part elaborated in section 2 has been verified in examples and proofed that the studies in this paper have achieved the goal of enabling machines to understand the information conveyed in the dialogue. In section 4 , the author summarizes the basic conditions of "Understanding", rethinks what "Understanding" is and how to proceed. The study in this paper provides a practical, theoretical basis and research methods for NLU. It also can be applied in large-scale and multi-type information processing in the artificial intelligence (AI) area .
[ { "type": "R", "before": "First of all, please URLet all you knew about the lexical classification, then let's jump to the conclusion. This paper", "after": "We must recognize that natural language is a way of information encoding, and it encodes not only the information but also the procedures for how information is processed. To understand natural language, the same as we conceive and design computer languages, the first step is to separate information (or data) and the processing procedures of information (or data). In natural language, some processing procedures of data are encoded directly as the structure chunk and the pointer chunk (this paper has", "start_char_pos": 0, "end_char_pos": 119, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "into data chunks, structure chunks, and pointer chunks. Almost all data chunks are", "after": "as the data chunk, structure chunk, and the pointer chunk); some processing procedures of data imply in sentences structures; some requests of processing procedures are expressed by information senders and processed by information receivers. For the data parts, the classification encoding system of attribute information and the URLanization architecture (including constitutional structures of", "start_char_pos": 148, "end_char_pos": 230, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "sets. According to the difference of the set structures, data chunks can be further divided into attribute chunks and entity chunks. According to the different abstraction level and method, attribute chunks can be further divided into basic attribute chunks, extended attribute chunks, and advanced attribute chunks. 
All of the above classification principles are structural and functionalbased discrimination, instead of artificially divide lexical chunks into a noun, adjective, pronouns, and so on. Now, let's back to the normal study process. The author believes natural language is one of the ways information is encoded and it has highly abstracted and conceptualized the", "after": "sets and the hierarchy between the information sets) were discussed. In section 2, the theoretical part elaborated in section 2 has been verified in examples and proofed that the studies in this paper have achieved the goal of enabling machines to understand the", "start_char_pos": 243, "end_char_pos": 920, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ". Therefore the study begins with disassembling the information represented by natural language and then discovered the classification coding system of attribute information, and the abstraction relations between attribute information and entities in the real world. To have a clear and better discussion", "after": "conveyed in the dialogue. In section 4", "start_char_pos": 933, "end_char_pos": 1237, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "constructed corresponding data storage models, and extract three kinds of data reading modes on those data storage models, they are the defining reading mode which is driven by the structural word: be, the set reading mode which is driven by the structural word: have, and the process reading mode which is driven by verbs. 
Sentences output by the above data reading modes can be further divided into the data description task, the data verification task, and the data search task, according to task types represented by these sentences ..", "after": "summarizes the basic conditions of \"Understanding\", rethinks what \"Understanding\" is and how to proceed. The study in this paper provides a practical, theoretical basis and research methods for NLU. It also can be applied in large-scale and multi-type information processing in the artificial intelligence (AI) area", "start_char_pos": 1251, "end_char_pos": 1790, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 108, 203, 248, 375, 559, 744, 789, 934, 1199, 1574 ]
arxiv
2010.12872
1
Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of neural-symbolic models applied to machine learning tasks, such as question answering and recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide " insights " to practitioners . In this paper, we question the faithfulness of such symbolic explanations . We demonstrate that, through a learned strategy (or even simple heuristics), one can produce deceptively perturbed symbolic structures which maintain the downstream performance of the original structure while significantly deviating from the original semantics . In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results raise doubts about the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge .
Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such models can also " explain " which KG information was most relevant for making a given prediction . In this paper, we question whether these models are really behaving as we expect . We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure. Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations .
[ { "type": "R", "before": "Symbolic knowledge (e.g., entities, relations, and facts in a knowledge graph) has become an increasingly popular component of", "after": "Knowledge graphs (KGs) have helped", "start_char_pos": 0, "end_char_pos": 126, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "applied to machine learning tasks, such as", "after": "improve performance on various knowledge-intensive tasks, like", "start_char_pos": 150, "end_char_pos": 192, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": "recommender systems. Besides improving downstream performance, these symbolic structures (and their associated attention weights) are often used to help explain the model's predictions and provide", "after": "item recommendation. By using attention over the KG, such models can also", "start_char_pos": 216, "end_char_pos": 412, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "insights", "after": "explain", "start_char_pos": 415, "end_char_pos": 423, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "to practitioners", "after": "which KG information was most relevant for making a given prediction", "start_char_pos": 426, "end_char_pos": 442, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "the faithfulness of such symbolic explanations", "after": "whether these models are really behaving as we expect", "start_char_pos": 472, "end_char_pos": 518, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "learned strategy", "after": "reinforcement learning policy", "start_char_pos": 552, "end_char_pos": 568, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", 
"clarity" ] }, { "type": "R", "before": "symbolic structures", "after": "KGs", "start_char_pos": 636, "end_char_pos": 655, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "structure", "after": "KG", "start_char_pos": 714, "end_char_pos": 723, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "fluency" ] }, { "type": "R", "before": ". In particular, we train a reinforcement learning policy to manipulate relation types or edge connections in a knowledge graph, such that the resulting downstream performance is maximally preserved. Across multiple models and tasks, our approach drastically alters knowledge graphs with little to no drop in performance. These results", "after": "and structure. Our findings", "start_char_pos": 782, "end_char_pos": 1117, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the faithfulness of explanations provided by learned symbolic structures and the reliability of current neural-symbolic modelsin leveraging symbolic knowledge", "after": "KG-augmented models' ability to leverage KG information and provide plausible explanations", "start_char_pos": 1137, "end_char_pos": 1295, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 236, 444, 520, 783, 981, 1103 ]
arxiv
2010.12872
2
Knowledge graphs (KGs) have helped neural-symbolic models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We demonstrate that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure. Our findings raise doubts about KG-augmented models' ability to leverage KG information and provide plausible explanations.
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs , which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
[ { "type": "R", "before": "neural-symbolic", "after": "neural", "start_char_pos": 35, "end_char_pos": 50, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "KG-augmented", "start_char_pos": 202, "end_char_pos": 202, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "demonstrate", "after": "show", "start_char_pos": 384, "end_char_pos": 395, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 513, "end_char_pos": 513, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "KG's", "start_char_pos": 623, "end_char_pos": 623, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "leverage", "after": "reason about", "start_char_pos": 713, "end_char_pos": 721, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "provide plausible", "after": "give sensible", "start_char_pos": 741, "end_char_pos": 758, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] } ]
[ 0, 164, 298, 380, 648 ]
arxiv
2010.12873
1
Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference . However, these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limited expressiveness of its relations, and retrieved facts irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information. Our model learns a compact graph structure(comprising both extracted and generated edges) through filtering edges that are unhelpful to the reasoning process. We show marked improvement on three commonsense reasoning benchmarks and demonstrate the superiority of the learned graph structures with user studies .
Recently, neural-symbolic models have achieved noteworthy success in leveraging knowledge graphs (KGs) for commonsense reasoning tasks, like question answering (QA) . However, fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge. To address these issues, we propose Hybrid Graph Network (HGN), a neural-symbolic model that reasons over both extracted (human-labeled) and generated facts within the same learned graph structure. Given a KG subgraph of extracted facts, HGN is jointly trained to generate complementary facts, encode relational information in the resulting "hybrid" subgraph, and filter out task-irrelevant facts. We demonstrate HGN's ability to produce contextually pertinent subgraphs by showing considerable performance gains across four commonsense reasoning benchmarks and a user study of fact validness and helpfulness .
[ { "type": "R", "before": "architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external", "after": "models have achieved noteworthy success in leveraging", "start_char_pos": 26, "end_char_pos": 161, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "and obtained state-of-the-art results in tasks such as (commonsense ) question answering and natural language inference", "after": "for commonsense reasoning tasks, like question answering (QA)", "start_char_pos": 185, "end_char_pos": 304, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "these methods rely on quality and contextualized knowledge structures (i.e., fact triples) that are retrieved at the pre-processing stage but overlook challenges caused by incompleteness of a KG, limited expressiveness of its relations, and retrieved facts irrelevant to the reasoning context. In this paper, we present a novel neural-symbolic model, named", "after": "fact sparsity, inherent in human-annotated KGs, can hinder such models from retrieving task-relevant knowledge. To address these issues, we propose", "start_char_pos": 316, "end_char_pos": 672, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "which jointly generates feature representations for new triples (as a complement to existing edges in the KG) , determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information. Our model learns a compact graph structure(comprising both extracted and generated edges) through filtering edges that are unhelpful to the reasoning process. We show marked improvement on three", "after": "a neural-symbolic model that reasons over both extracted (human-labeled) and generated facts within the same learned graph structure. 
Given a KG subgraph of extracted facts, HGN is jointly trained to generate complementary facts, encode relational information in the resulting \"hybrid\" subgraph, and filter out task-irrelevant facts. We demonstrate HGN's ability to produce contextually pertinent subgraphs by showing considerable performance gains across four", "start_char_pos": 701, "end_char_pos": 1149, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "demonstrate the superiority of the learned graph structures with user studies", "after": "a user study of fact validness and helpfulness", "start_char_pos": 1187, "end_char_pos": 1264, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 306, 609, 954, 1113 ]
arxiv
2011.00416
2
Text style transfer (TST) is an important task in natural language generation (NLG), which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing (NLP), but recently it has gained significant attention thanks to the promising performance brought by deep learning models. In this paper, we present a systematic survey of the research done on neural text style transfer . We have collected, summarized, and discussed nearly 70 representative articles since the first neural text style transfer work in 2017. Overall, we have covered the task formulation, existing datasets and subtasks, evaluation metrics, and methods on parallel and non-parallel data. We also provide discussions a variety of important topics regarding TST, which can shed light on new development in this field . Our curated paper list is at URL
Text style transfer (TST) is an important task in natural language generation (NLG), which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others. It has a long history in the field of natural language processing (NLP), and recently has re-gained significant attention thanks to the promising performance brought by deep neural models. In this paper, we present a systematic survey of the research on neural text style transfer , spanning over 100 representative articles since the first neural text style transfer work in 2017. We discuss the task formulation, existing datasets and subtasks, evaluation , as well as the rich methodologies in the presence of parallel and non-parallel data. We also provide discussions on a variety of important topics regarding the future development of TST . Our curated paper list is at URL
[ { "type": "R", "before": "but recently it has gained", "after": "and recently has re-gained", "start_char_pos": 275, "end_char_pos": 301, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "learning", "after": "neural", "start_char_pos": 376, "end_char_pos": 384, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "done", "after": null, "start_char_pos": 455, "end_char_pos": 459, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": ". We have collected, summarized, and discussed nearly 70", "after": ", spanning over 100", "start_char_pos": 490, "end_char_pos": 546, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Overall, we have covered", "after": "We discuss", "start_char_pos": 628, "end_char_pos": 652, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "metrics, and methods on", "after": ", as well as the rich methodologies in the presence of", "start_char_pos": 718, "end_char_pos": 741, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "on", "start_char_pos": 802, "end_char_pos": 802, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "TST, which can shed light on new development in this field", "after": "the future development of TST", "start_char_pos": 843, "end_char_pos": 901, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 201, 392, 491, 627, 773, 903 ]
arxiv
2011.02593
1
Neural sequence models can generate highly fluent sentences but recent studies have also shown that they are also prone to hallucinate additional content not supported by the input, which can cause a lack of trust in the model. To better assess the faithfulness of the machine outputs, we propose a new task to predict whether each token in the output sequence is hallucinated conditioned on the source input, and collect new manually annotated evaluation sets for this task. We also introduce a novel method for learning to model hallucination detection, based on pretrained language models fine tuned on synthetic data that includes automatically inserted hallucinations. Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 0.6 across all the benchmark datasets and achieve significant improvements in sentence-level hallucination scoring compared to baseline methods. We also release our annotated data and code for future researchat URL
Neural sequence models can generate highly fluent sentences but recent studies have also shown that they are also prone to hallucinate additional content not supported by the input, which can cause a lack of trust in the model. To better assess the faithfulness of the machine outputs, we propose a new task to predict whether each token in the output sequence is hallucinated conditioned on the source input, and collect new manually annotated evaluation sets for this task. We also introduce a novel method for learning to model hallucination detection, based on pretrained language models fine tuned on synthetic data that includes automatically inserted hallucinations. Experiments on machine translation and abstract text summarization demonstrate the effectiveness of our proposed approach -- we obtain an average F1 of around 60 across all the benchmark datasets . Furthermore, we demonstrate how to use the token-level hallucination labels to define a fine-grained loss over the target sequence in the low-resource machine translation and achieve significant improvements over strong baseline methods. We will release our annotated data and code to support future research.
[ { "type": "R", "before": "0.6", "after": "60", "start_char_pos": 833, "end_char_pos": 836, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ". Furthermore, we demonstrate how to use the token-level hallucination labels to define a fine-grained loss over the target sequence in the low-resource machine translation", "start_char_pos": 871, "end_char_pos": 871, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in sentence-level hallucination scoring compared to", "after": "over strong", "start_char_pos": 909, "end_char_pos": 960, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "also", "after": "will", "start_char_pos": 982, "end_char_pos": 986, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "for future researchat URL", "after": "to support future research.", "start_char_pos": 1023, "end_char_pos": 1048, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] } ]
[ 0, 227, 475, 673, 978 ]
arxiv
2011.04393
2
Recently, contextualized word embeddings outperform static word embeddings on many NLP tasks. However, we still do not know much about the mechanism inside these representations. Do they have any common patterns? If so, where do these patterns come from? We find that almost all the contextualized word vectors of BERT and RoBERTa have a commonpattern. For BERT, the 557^{th neuron-level method to analyze where these "tails" come from. We find that these "tails" are closely related to the positional information . We also investigate what will happen if we "cutting the tails" (zero-out). Our results show that "tails" are the major cause of anisotropy of vector space. After "cutting the tails", a word's different vectors are more similar to each other. The internal representations have a better ability to distinguish a word 's different senseswith the word-in-context (WiC) dataset. The performance on the word sense disambiguation task is better for BERT and unchanged for RoBERTa. We can also better induce phrase grammar from the vector space. These suggest that "tails" are less related to the sense and syntax information in vectors. These findings provide insights into the inner workings of contextualized word vectors .
In this work, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers. Namely, we find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors that consistently bear the smallest or largest values in said vectors. In an attempt to investigate the source of this information, we introduce a neuron-level analysis method, which reveals that the outliers are closely related to information captured by positional embeddings . We also pre-train the RoBERTa-base models from scratch and find that the outliers disappear without using positional embeddings. These outliers, we find, are the major cause of anisotropy of encoders' raw vector spaces, and clipping them leads to increased similarity across vectors. We demonstrate this in practice by showing that clipped vectors can more accurately distinguish word senses, as well as lead to better sentence embeddings when mean pooling. In three supervised tasks, we find that clipping does not affect the performance .
[ { "type": "R", "before": "Recently, contextualized word embeddings outperform static word embeddings on many NLP tasks. However, we still do not know much about the mechanism inside these representations. Do they have any common patterns? If so, where do these patterns come from? We find that almost all", "after": "In this work, we demonstrate that", "start_char_pos": 0, "end_char_pos": 278, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "of BERT and RoBERTa have a commonpattern. For BERT, the 557^{th", "after": "derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers. Namely, we find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors that consistently bear the smallest or largest values in said vectors. In an attempt to investigate the source of this information, we introduce a", "start_char_pos": 311, "end_char_pos": 374, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "method to analyze where these \"tails\" come from. We find that these \"tails\"", "after": "analysis method, which reveals that the outliers", "start_char_pos": 388, "end_char_pos": 463, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the positional information", "after": "information captured by positional embeddings", "start_char_pos": 487, "end_char_pos": 513, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "investigate what will happen if we \"cutting the tails\" (zero-out). Our results show that \"tails\"", "after": "pre-train the RoBERTa-base models from scratch and find that the outliers disappear without using positional embeddings. 
These outliers, we find,", "start_char_pos": 524, "end_char_pos": 620, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "vector space. After \"cutting the tails\", a word's different vectors are more similar to each other. The internal representations have a better ability to distinguish a word 's different senseswith the word-in-context (WiC) dataset. The performance on the word sense disambiguation task is better for BERT and unchanged for RoBERTa. We can also better induce phrase grammar from the vector space. These suggest that \"tails\" are less related to the sense and syntax information in vectors. These findings provide insights into the inner workings of contextualized word vectors", "after": "encoders' raw vector spaces, and clipping them leads to increased similarity across vectors. We demonstrate this in practice by showing that clipped vectors can more accurately distinguish word senses, as well as lead to better sentence embeddings when mean pooling. In three supervised tasks, we find that clipping does not affect the performance", "start_char_pos": 658, "end_char_pos": 1232, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] } ]
[ 0, 93, 178, 212, 254, 352, 436, 515, 590, 671, 757, 889, 989, 1053, 1145 ]
null
2011.06174
1
Discovering precise and specific rules from knowledge graphs is regarded as an essential challenge, which can improve the performances of many downstream tasks and even provide new ways to approach some Natural Language Processing research topics. In this paper, we provide a fundamental theory for knowledge graph reasoning based on ending anchored rules. Our theory provides precise reasons answering why or why not a triple is correct. Then, we implement our theory by what we called the EARDict model. Results show that the EARDict model achieves new state-of-the-art performances on benchmark knowledge graph completion tasks, including a Hits@10 score of 80.38 percent on WN18RR.
Discovering precise and specific rules from knowledge graphs is regarded as an essential challenge, which can improve the performances of many downstream tasks and even provide new ways to approach some Natural Language Processing research topics. In this paper, we provide a fundamental theory for knowledge graph reasoning based on the ending anchored rules. Our theory provides precise reasons explaining why or why not a triple is correct. Then, we implement our theory by what we call the EARDict model. Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 334, "end_char_pos": 334, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "R", "before": "answering", "after": "explaining", "start_char_pos": 394, "end_char_pos": 403, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "called", "after": "call", "start_char_pos": 481, "end_char_pos": 487, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "the EARDict model achieves new state-of-the-art performances on benchmark", "after": "our EARDict model significantly outperforms all the benchmark models on two large datasets of", "start_char_pos": 525, "end_char_pos": 598, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "tasks, including", "after": ", including achieving", "start_char_pos": 626, "end_char_pos": 642, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "80.38", "after": "96.6", "start_char_pos": 662, "end_char_pos": 667, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 247, 357, 439, 506 ]
arxiv
2011.06174
2
Discovering precise and specific rules from knowledge graphs is regarded as an essential challenge, which can improve the performances of many downstream tasks and even provide new ways to approach some Natural Language Processing research topics. In this paper, we provide a fundamental theory for knowledge graph reasoning based on the ending anchored rules. Our theory provides precise reasons explaining why or why not a triple is correct. Then, we implement our theory by what we call the EARDict model. Results show that our EARDict model significantly outperforms all the benchmark models on two large datasets of knowledge graph completion , including achieving a Hits@10 score of 96.6 percent on WN18RR.
Discovering precise and specific rules from knowledge graphs is regarded as an essential challenge, which can improve the performances of many downstream tasks and even provide new ways to approach some Natural Language Processing research topics. In this paper, we provide a fundamental theory for knowledge graph reasoning based on the ending anchored rules. Our theory provides precise reasons explaining why or why not a triple is correct. Then, we implement our theory by what we call the EARDict model. Results show that our EARDict model significantly outperforms all the benchmark models on three large datasets of knowledge graph completion . Especially, our model achieves a Hits@10 score of 96.6 percent on WN18RR.
[ { "type": "R", "before": "two", "after": "three", "start_char_pos": 599, "end_char_pos": 602, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ", including achieving", "after": ". Especially, our model achieves", "start_char_pos": 648, "end_char_pos": 669, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 247, 360, 443, 508 ]
arxiv
2011.10896
1
Hardware-agnostic programming with high performance portability will be the bedrock for realizing the ubiquitous adoption of emerging accelerator technologies in future heterogeneous high-performance computing (HPC) systems, which is the key to achieving the next level of HPC performance on an expanding accelerator landscape. In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework , that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application codes across heterogeneous accelerator resources. The experiment results of evaluating eight widely used HPC subroutines based on Intel Xeon E5-2620 v4 CPUs, Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows the same hardware-agnostic application codes of the HPC kernels, without any change, to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score .
Hardware-agnostic programming with high performance portability will be the bedrock for realizing the ubiquitous adoption of emerging accelerator technologies in future heterogeneous high-performance computing (HPC) systems, which is the key to achieving the next level of HPC performance on an expanding accelerator landscape. In this paper, we present HALO 1.0, an open-ended extensible multi-agent software framework that implements a set of proposed hardware-agnostic accelerator orchestration (HALO) principles and a novel compute-centric message passing interface (C^2MPI) specification for enabling the portable and performance-optimized execution of hardware-agnostic application host codes across heterogeneous accelerator resources. The experiment results of evaluating eight widely used HPC subroutines based on Intel Xeon E5-2620 v4 CPUs, Intel Arria 10 GX FPGAs, and NVIDIA GeForce RTX 2080 Ti GPUs show that HALO 1.0 allows for a unified control flow for the host program to run across all the computing devices with a consistently maximum performance portability score of 1.0, which is 2x-861,883x higher than the OpenCL-based solution that suffers from an unstably low performance portability score . of the documentation of their work .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 420, "end_char_pos": 421, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "host", "start_char_pos": 690, "end_char_pos": 690, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "the same hardware-agnostic application codes of the HPC kernels, without any change,", "after": "for a unified control flow for the host program", "start_char_pos": 936, "end_char_pos": 1020, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ". of the documentation of their work", "start_char_pos": 1250, "end_char_pos": 1250, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] } ]
[ 0, 327, 740, 848 ]
arxiv
2011.11465
1
Many online comments on social media platforms are hateful , humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism(Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context . The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context .
Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic , humorous, or hateful. This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm. The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information. So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism . The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
[ { "type": "R", "before": "Many online comments on social media platforms are hateful", "after": "Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic", "start_char_pos": 0, "end_char_pos": 58, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "sarcastic. The", "after": "hateful. This", "start_char_pos": 74, "end_char_pos": 88, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "comments (especially the short ones) alters their actual implied sentiments , which leads to misinterpretations by the existing sentiment analysis models. A lot of", "after": "short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several", "start_char_pos": 115, "end_char_pos": 278, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism(Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context", "after": "in this field. This paper deals with sarcasm detection on reddit comments. 
Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm. The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information. So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism", "start_char_pos": 310, "end_char_pos": 787, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "D", "before": "deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words", "after": null, "start_char_pos": 803, "end_char_pos": 914, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "phrases responsible for invoking sarcasm . Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context", "after": "model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. 
This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset", "start_char_pos": 915, "end_char_pos": 1304, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 84, 269, 498, 789, 1091 ]
arxiv
2011.11465
2
Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic , humorous, or hateful. This sarcastic nature of these short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several research has already been done in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm . The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). These existing modules were able to solve the problem of capturing inter sentence contextual information but not the intra sentence contextual information . So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism . The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset .
Many online comments on social media platforms are hateful , humorous, or sarcastic. The sarcastic nature of these comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models. A lot of research has already been done to detect sarcasm in the text using user-based, topical, and conversational information but not much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context . The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context .
[ { "type": "R", "before": "Sentiment analysis of social media comments is very important for review analysis. Many online reviews are sarcastic", "after": "Many online comments on social media platforms are hateful", "start_char_pos": 0, "end_char_pos": 116, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "hateful. This", "after": "sarcastic. The", "start_char_pos": 132, "end_char_pos": 145, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "short texts change the actual sentiments of the review as predicted by a machine learning model that attempts to detect sentiment alone. Thus, having a model that is explicitly aware of these features should help it perform better on reviews that are characterized by them. Several", "after": "comments (especially the short ones) alters their actual implied sentiments, which leads to misinterpretations by the existing sentiment analysis models. A lot of", "start_char_pos": 172, "end_char_pos": 453, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "in this field. This paper deals with sarcasm detection on reddit comments. Several machine learning and deep learning algorithms have been applied for the same but each of these models only take into account the initial text instead of the conversation which serves as a better measure to determine sarcasm . The other shortcoming these papers have is they rely on word embedding for representing comments and thus do not take into account the problem of polysemy(A word can have multiple meanings based on the context in which it appears). 
These existing modules were able to solve the problem of capturing inter sentence contextual", "after": "to detect sarcasm in the text using user-based, topical, and conversational", "start_char_pos": 485, "end_char_pos": 1118, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "the intra sentence contextual information", "after": "much work has been done to use inter-sentence contextual information for detecting the same. This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA) to capture inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context", "start_char_pos": 1139, "end_char_pos": 1180, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "So we propose a novel architecture which solves the problem of sarcasm detection by capturing intra sentence contextual information using a novel contextual attention mechanism . The proposed model solves the problem of polysemy also by using context enriched language modules like ELMO and BERT in its first component. This model comprises a total of three major components which takes into account inter sentence, intra sentence contextual information and at last use a convolutional neural network for capturing global contextual information for sarcasm detection. 
The proposed model was able to generate decent results and cleared showed potential to perform state of the art if trained on a larger dataset", "after": "The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words", "start_char_pos": 1183, "end_char_pos": 1893, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "phrases responsible for invoking sarcasm. Bi-ISCA generates state-of-the-art results on two widely used benchmark datasets for the sarcasm detection task (Reddit and Twitter). To the best of our knowledge, none of the existing state-of-the-art models use an inter-sentence contextual attention mechanism to detect sarcasm in the user-generated short text using only conversational context", "start_char_pos": 1894, "end_char_pos": 1894, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 82, 140, 308, 445, 499, 559, 793, 1025, 1182, 1361, 1502, 1750 ]
arxiv
2012.08987
1
Discovering new intents is a crucial task in a dialogue system . Most existing methods are limited in transferring the prior knowledge from known intents to new intents. These methods also have difficulties in providing high-quality supervised signals to learn clustering-friendly features for grouping unlabeled intents. In this work, we propose an effective method (Deep Aligned Clustering) to discover new intents with the aid of limited known intent data. Firstly, we leverage a few labeled known intent samples as prior knowledge to pre-train the model. Then, we perform k-means to produce cluster assignments as pseudo-labels. Moreover, we propose an alignment strategy to tackle the label inconsistency during clustering assignments. Finally, we learn the intent representations under the supervision of the aligned pseudo-labels. With an unknown number of new intents, we predict the number of intent categories by eliminating low-confidence intent-wise clusters. Extensive experiments on two benchmark datasets show that our method is more robust and achieves substantial improvements over the state-of-the-art methods.( Code available at URL
Discovering new intents is a crucial task in dialogue systems . Most existing methods are limited in transferring the prior knowledge from known intents to new intents. These methods also have difficulties in providing high-quality supervised signals to learn clustering-friendly features for grouping unlabeled intents. In this work, we propose an effective method (Deep Aligned Clustering) to discover new intents with the aid of limited known intent data. Firstly, we leverage a few labeled known intent samples as prior knowledge to pre-train the model. Then, we perform k-means to produce cluster assignments as pseudo-labels. Moreover, we propose an alignment strategy to tackle the label inconsistency problem during clustering assignments. Finally, we learn the intent representations under the supervision of the aligned pseudo-labels. With an unknown number of new intents, we predict the number of intent categories by eliminating low-confidence intent-wise clusters. Extensive experiments on two benchmark datasets show that our method is more robust and achieves substantial improvements over the state-of-the-art methods.( The code is available at URL
[ { "type": "R", "before": "a dialogue system", "after": "dialogue systems", "start_char_pos": 45, "end_char_pos": 62, "major_intent": "style", "raw_intents": [ "style", "style", "fluency" ] }, { "type": "A", "before": null, "after": "problem", "start_char_pos": 710, "end_char_pos": 710, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Code", "after": "The code is", "start_char_pos": 1131, "end_char_pos": 1135, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 64, 169, 321, 459, 558, 632, 741, 838, 972, 1129 ]
arxiv
2012.15859
1
Natural Language Processing (NLP) systems learn harmful societal biases that cause them to extend and proliferate inequality widely, as they are deployed in more and more situations. To address and combat this, the NLP community has come to rely on a variety of metrics to identify and quantify bias in black-box models , which are used to monitor model behaviour and to guide efforts at debiasing. Some of these metrics are intrinsic, and are measured in word embedding spaces, and some are extrinsic, which measure the bias present downstream in the tasks that the word embeddings are plugged into. This research examines whether intrinsic metrics (which are easy to measure) correlate well to extrinsic metrics (which reflect real world bias) . We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in more than extremely specific settings . We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community direct more effort into making downstream measurement simpler and easier .
Natural Language Processing (NLP) systems learn harmful societal biases that cause them to widely proliferate inequality as they are deployed in more and more situations. To address and combat this, the NLP community relies on a variety of metrics to identify and quantify bias in black-box models and to guide efforts at debiasing. Some of these metrics are intrinsic, and are measured in word embedding spaces, and some are extrinsic, which measure the bias present downstream in the tasks that the word embeddings are plugged into. This research examines whether easy-to-measure intrinsic metrics correlate well to real world extrinsic metrics . We measure both intrinsic and extrinsic bias across hundreds of trained models covering different tasks and experimental conditions and find that there is no reliable correlation between these metrics that holds in all scenarios across tasks and languages . We advise that efforts to debias embedding spaces be always also paired with measurement of downstream model bias, and suggest that that community increase effort into making downstream measurement more feasible via creation of additional challenge sets and annotated test data. We additionally release code, a new intrinsic metric, and an annotated test set for gender bias for hatespeech .
[ { "type": "R", "before": "extend and proliferate inequality widely,", "after": "widely proliferate inequality", "start_char_pos": 91, "end_char_pos": 132, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "has come to rely", "after": "relies", "start_char_pos": 229, "end_char_pos": 245, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": ", which are used to monitor model behaviour", "after": null, "start_char_pos": 320, "end_char_pos": 363, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "intrinsic metrics (which are easy to measure)", "after": "easy-to-measure intrinsic metrics", "start_char_pos": 632, "end_char_pos": 677, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "extrinsic metrics (which reflect real world bias)", "after": "real world extrinsic metrics", "start_char_pos": 696, "end_char_pos": 745, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "more than extremely specific settings", "after": "all scenarios across tasks and languages", "start_char_pos": 963, "end_char_pos": 1000, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "direct more", "after": "increase", "start_char_pos": 1150, "end_char_pos": 1161, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "simpler and easier", "after": "more feasible via creation of additional challenge sets and annotated test data. We additionally release code, a new intrinsic metric, and an annotated test set for gender bias for hatespeech", "start_char_pos": 1204, "end_char_pos": 1222, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 182, 398, 600, 747, 1002 ]
arxiv
2101.01321
2
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0 x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 -4.0 x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.
[ { "type": "D", "before": "for", "after": null, "start_char_pos": 221, "end_char_pos": 224, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "fluency" ] }, { "type": "R", "before": "- 4.0", "after": "-4.0", "start_char_pos": 1249, "end_char_pos": 1254, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 130, 286, 585, 741, 967, 1043, 1171, 1325 ]
arxiv
2102.07594
1
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel. This makes the inference speed relatively slow. We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the speech. The PDS uses positional encodings corresponding to tokensto convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel. This makes the inference speed relatively slow. In contrast, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speech signal rather than autoregressive modeling on tokens. Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model .
[ { "type": "R", "before": "We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a", "after": "In contrast, we propose an end-to-end", "start_char_pos": 329, "end_char_pos": 700, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level", "after": "aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden", "start_char_pos": 805, "end_char_pos": 969, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "speech. The PDS uses positional encodings corresponding to tokensto convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further,", "after": "whole speech signal rather than autoregressive modeling on tokens. Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. 
Moreover,", "start_char_pos": 995, "end_char_pos": 1415, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "refine semantics from a large-scale pre-trained language model BERT for improving the performance", "after": "use a text-modal language model to improve the performance of speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model", "start_char_pos": 1469, "end_char_pos": 1566, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 103, 280, 328, 536, 667, 794, 882, 937, 1002, 1133, 1223, 1316, 1406 ]
null
2102.07594
2
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel. This makes the inference speed relatively slow. In contrast, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden representations from the whole speechsignal rather than autoregressive modeling on tokens . Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover, we propose a cross-modal transfer learning method to use a text-modal language model to improve the performanceof speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model .
Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel. This makes the inference speed relatively slow. We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level representations from the speech. The PDS uses positional encodings corresponding to tokens to convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. Further, we propose a cross-modal transfer learning method to refine semantics from a large-scale pre-trained language model BERT for improving the performance .
[ { "type": "R", "before": "In contrast, we propose an end-to-end", "after": "We believe that because the encoder already captures the whole speech utterance, which has the token-level relationship implicitly, we can predict a token without explicitly autoregressive language modeling. When the prediction of a token does not rely on other tokens, the parallel prediction of all tokens in the sequence is realizable. Based on this idea, we propose a", "start_char_pos": 329, "end_char_pos": 366, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "aggregates encoded speech features into the hidden representations corresponding to each token with attention mechanisms. Thus, the model can capture the token relations by self-attention on the aggregated hidden", "after": "consists of an encoder, a decoder, and a position dependent summarizer (PDS). The three modules are based on basic attention blocks. The encoder extracts high-level", "start_char_pos": 471, "end_char_pos": 683, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "whole speechsignal rather than autoregressive modeling on tokens . Without explicitly autoregressive language modeling, this model predicts all tokens in the sequence in parallel so that the inference is efficient. Moreover,", "after": "speech. The PDS uses positional encodings corresponding to tokens to convert the acoustic representations into token-level representations. The decoder further captures token-level relationships with the self-attention mechanism. At last, the probability distribution on the vocabulary is computed for each token position. Therefore, speech recognition is re-formulated as a position-wise classification problem. 
Further,", "start_char_pos": 709, "end_char_pos": 933, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "use a text-modal language model to improve the performanceof speech-modal LASO by aligning token semantics. We conduct experiments on two scales of public Chinese speech datasets AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50\\times and competitive performance, compared with the autoregressive transformer models. And the cross-modal knowledge transferring from the text-modal model can improve the performance of the speech-modal model", "after": "refine semantics from a large-scale pre-trained language model BERT for improving the performance", "start_char_pos": 987, "end_char_pos": 1482, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] } ]
[ 0, 103, 280, 328, 460, 592, 775, 923, 1094, 1190, 1359 ]
null
2103.06874
2
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy with soft inductive biases in place of hard token boundaries . To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by >= 1 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\% fewer model parameters.
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias . To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28\% fewer model parameters.
[ { "type": "R", "before": "with soft inductive biases in place of hard token boundaries", "after": "that operates either directly on characters or optionally uses subwords as a soft inductive bias", "start_char_pos": 596, "end_char_pos": 656, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ">= 1", "after": "2.8", "start_char_pos": 891, "end_char_pos": 895, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 162, 424, 658, 843 ]
arxiv
2103.06922
1
Recent studies indicate that NLU models are prone to rely on shortcut features for prediction . As a result, these models could potentially fail to generalize to real-world out-of-distribution scenarios . In this work, we show that the shortcut learning behavior can be explained by the long-tailed phenomenon . There are two findings: 1) Trained NLU models have strong preference for features located at the head of the long-tailed distribution, and 2) Shortcut features are picked up during very early few iterations of the model training. These two observations are further employed to formulate a measurement which can quantify the shortcut degree of each training sample. Based on this shortcut measurement, we propose a shortcut mitigation framework , to suppress the model from making overconfident predictions for samples with large shortcut degree. Experimental results on three NLU benchmarks demonstrate that our long-tailed distribution explanation accurately reflects the shortcut learning behavior of NLU models. Experimental analysis further indicates that our method can improve the generalization accuracy on OOD data, while preserving the accuracy on in distribution test data.
Recent studies indicate that NLU models are prone to rely on shortcut features for prediction , without achieving true language understanding . As a result, these models fail to generalize to real-world out-of-distribution data . In this work, we show that the words in the NLU training set can be modeled as a long-tailed distribution . There are two findings: 1) NLU models have strong preference for features located at the head of the long-tailed distribution, and 2) Shortcut features are picked up during very early few iterations of the model training. These two observations are further employed to formulate a measurement which can quantify the shortcut degree of each training sample. Based on this shortcut measurement, we propose a shortcut mitigation framework LGTR , to suppress the model from making overconfident predictions for samples with large shortcut degree. Experimental results on three NLU benchmarks demonstrate that our long-tailed distribution explanation accurately reflects the shortcut learning behavior of NLU models. Experimental analysis further indicates that LGTR can improve the generalization accuracy on OOD data, while preserving the accuracy on in-distribution data.
[ { "type": "A", "before": null, "after": ", without achieving true language understanding", "start_char_pos": 94, "end_char_pos": 94, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "D", "before": "could potentially", "after": null, "start_char_pos": 123, "end_char_pos": 140, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "scenarios", "after": "data", "start_char_pos": 194, "end_char_pos": 203, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "shortcut learning behavior can be explained by the", "after": "words in the NLU training set can be modeled as a", "start_char_pos": 237, "end_char_pos": 287, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "phenomenon", "after": "distribution", "start_char_pos": 300, "end_char_pos": 310, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "Trained", "after": null, "start_char_pos": 340, "end_char_pos": 347, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "A", "before": null, "after": "LGTR", "start_char_pos": 757, "end_char_pos": 757, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "our method", "after": "LGTR", "start_char_pos": 1074, "end_char_pos": 1084, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "in distribution test", "after": "in-distribution", "start_char_pos": 1171, "end_char_pos": 1191, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 96, 205, 312, 542, 677, 859, 1028 ]
arxiv
2103.09474
1
Previous works on expressive text-to-speech (TTS) have a limitation on robustness and speed when training and inferring. Such drawbacks mostly come from autoregressive decoding, which makes the succeeding step vulnerable to preceding error. To overcome this weakness , we propose STYLER, a novel expressive text-to-speech model with parallelized architecture. Expelling autoregressive decoding and introducing speech decomposition for encoding enables speech synthesis more robust even with high style transfer performance . Moreover, our novel noise modeling approach from audio using domain adversarial training and Residual Decoding enabled style transferwithout transferring noise . Our experiments prove the naturalness and expressiveness of our model from comparison with other parallel TTS models . Together we investigate our model 's robustness and speed by comparison with the expressive TTS model with autoregressive decoding .
Previous works on neural text-to-speech (TTS) have been tackled on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability. Although several approaches resolve some limitations, none of them has resolved all weaknesses at once. In this paper , we propose STYLER, an expressive and controllable text-to-speech model with robust speech synthesis and high speed. Excluding autoregressive decoding and introducing a novel audio-text aligning method called Mel Calibrator leads speech synthesis more robust on long, unseen data. Disentangled style factor modeling under supervision enlarges the controllability of synthesizing speech with fruitful expressivity . Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding enables noise-robust style transfer, decomposing the noise without any additional label. Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed, robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies. Synthesis samples of our model and experiment results are provided via our demo page .
[ { "type": "R", "before": "expressive", "after": "neural", "start_char_pos": 18, "end_char_pos": 28, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "a limitation on robustness and speed when training and inferring. Such drawbacks mostly come from autoregressive decoding, which makes the succeeding step vulnerable to preceding error. To overcome this weakness", "after": "been tackled on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability. Although several approaches resolve some limitations, none of them has resolved all weaknesses at once. In this paper", "start_char_pos": 55, "end_char_pos": 266, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "a novel expressive", "after": "an expressive and controllable", "start_char_pos": 288, "end_char_pos": 306, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "parallelized architecture. Expelling", "after": "robust speech synthesis and high speed. Excluding", "start_char_pos": 333, "end_char_pos": 369, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "speech decomposition for encoding enables speech", "after": "a novel audio-text aligning method called Mel Calibrator leads speech", "start_char_pos": 410, "end_char_pos": 458, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "even with high style transfer performance", "after": "on long, unseen data. 
Disentangled style factor modeling under supervision enlarges the controllability of synthesizing speech with fruitful expressivity", "start_char_pos": 481, "end_char_pos": 522, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "approach from audio", "after": "pipeline", "start_char_pos": 560, "end_char_pos": 579, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "enabled style transferwithout transferring noise . Our experiments prove the naturalness and expressiveness of our model from comparison with other parallel TTS models . Together we investigate our model 's robustness and speed by comparison with the expressive TTS model with autoregressive decoding", "after": "enables noise-robust style transfer, decomposing the noise without any additional label. Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed, robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies. Synthesis samples of our model and experiment results are provided via our demo page", "start_char_pos": 636, "end_char_pos": 936, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 120, 240, 359, 524, 686, 805 ]
arxiv
2103.09474
2
Previous works on neural text-to-speech (TTS) have been tackled on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability. Although several approaches resolve some limitations, none of them has resolved all weaknesses at once. In this paper, we propose STYLER, an expressive and controllable text-to-speech model with robust speech synthesisand high speed. Excluding autoregressive decoding and introducing a novel audio-text aligning method called Mel Calibrator leads speech synthesis more robust on long, unseen data. Disentangled style factor modeling under supervision enlarges the controllability of synthesizing speech with fruitful expressivity. Moreover, our novel noise modeling pipeline using domain adversarial training and Residual Decoding enables noise-robust style transfer, decomposing the noise without any additional label. Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed , robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies . Synthesis samples of our model and experiment results are provided via our demo page .
Previous works on neural text-to-speech (TTS) have been addressed on limited speed in training and inference time, robustness for difficult synthesis conditions, expressiveness, and controllability. Although several approaches resolve some limitations, there has been no attempt to solve all weaknesses at once. In this paper, we propose STYLER, an expressive and controllable TTS framework with high-speed and robust synthesis. Our novel audio-text aligning method called Mel Calibrator and excluding autoregressive decoding enable rapid training and inference and robust synthesis on unseen data. Also, disentangled style factor modeling under supervision enlarges the controllability in synthesizing process leading to expressive TTS. On top of it, a novel noise modeling pipeline using domain adversarial training and Residual Decoding empowers noise-robust style transfer, decomposing the noise without any additional label. Various experiments demonstrate that STYLER is more effective in speed and robustness than expressive TTS with autoregressive decoding and more expressive and controllable than reading style non-autoregressive TTS . Synthesis samples and experiment results are provided via our demo page , and code is available publicly .
[ { "type": "R", "before": "tackled", "after": "addressed", "start_char_pos": 56, "end_char_pos": 63, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "none of them has resolved", "after": "there has been no attempt to solve", "start_char_pos": 251, "end_char_pos": 276, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "text-to-speech model with robust speech synthesisand high speed. Excluding autoregressive decoding and introducing a", "after": "TTS framework with high-speed and robust synthesis. Our", "start_char_pos": 366, "end_char_pos": 482, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "leads speech synthesis more robust on long,", "after": "and excluding autoregressive decoding enable rapid training and inference and robust synthesis on", "start_char_pos": 538, "end_char_pos": 581, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "Disentangled", "after": "Also, disentangled", "start_char_pos": 595, "end_char_pos": 607, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "of synthesizing speech with fruitful expressivity. Moreover, our", "after": "in synthesizing process leading to expressive TTS. 
On top of it, a", "start_char_pos": 677, "end_char_pos": 741, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "enables", "after": "empowers", "start_char_pos": 828, "end_char_pos": 835, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Our extensive and various experiments demonstrate STYLER's effectiveness in the aspects of speed , robustness, expressiveness, and controllability by comparison with existing neural TTS models and ablation studies", "after": "Various experiments demonstrate that STYLER is more effective in speed and robustness than expressive TTS with autoregressive decoding and more expressive and controllable than reading style non-autoregressive TTS", "start_char_pos": 917, "end_char_pos": 1130, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "D", "before": "of our model", "after": null, "start_char_pos": 1151, "end_char_pos": 1163, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", and code is available publicly", "start_char_pos": 1218, "end_char_pos": 1218, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 196, 300, 430, 594, 727, 916 ]
arxiv
2103.14972
4
The understanding of an offense is subjective and people may have different opinions about the offensiveness of a comment. Also , offenses and hate speech may occur through sarcasm, which hides the real intention of the comment and makes the decision of the annotators more confusing. Therefore, provide a well-structured annotation process is crucial to a better understanding of hate speech and offensive language phenomena, as well as supply better performance for machine learning classifiers. In this paper, we describe a corpus annotation process , which was guided by a linguist, and a hate speech skilled to support the identification of hate speech and offensive language on social media. In addition, we provide the first robust corpus of this kind for the Brazilian Portuguese language. The corpus was collected from Instagram posts of political personalities and manually annotated, being composed by 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive, and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology to the dictatorship, antisemitism, and fat phobia ). Each comment was annotated by three different annotators , and achieved high inter-annotator agreement. The new proposed annotation approach is also language and domain-independent (although it has been applied for Brazilian Portuguese ) .
The understanding of an offense is subjective and people may have different opinions about the offensiveness of a comment. Moreover , offenses and hate speech may occur through sarcasm, which hides the real intention of the comment and makes the decision of the annotators more confusing. Therefore, providing a well-structured annotation process is crucial to a better understanding of hate speech and offensive language phenomena, as well as supplying better performance for machine learning classifiers. In this paper, we describe a corpus annotation process proposed by a linguist, a hate speech specialist, and machine learning engineers in order to support the identification of hate speech and offensive language on social media. In addition, we provide the first robust dataset of this kind for the Brazilian Portuguese language. The corpus was collected from Instagram posts of political personalities and manually annotated, being composed by 7,000 annotated documents according to three different layers: a binary classification (offensive versus non-offensive language), the level of offense (highly offensive, moderately offensive, and slightly offensive messages), and the identification regarding the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology to the dictatorship, antisemitism, and fatphobia ). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. The proposed annotation approach is also language and domain-independent nevertheless it is currently customized for Brazilian Portuguese .
[ { "type": "R", "before": "Also", "after": "Moreover", "start_char_pos": 123, "end_char_pos": 127, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "provide", "after": "providing", "start_char_pos": 296, "end_char_pos": 303, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "supply", "after": "supplying", "start_char_pos": 438, "end_char_pos": 444, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", which was guided", "after": "proposed", "start_char_pos": 553, "end_char_pos": 571, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "and", "after": null, "start_char_pos": 587, "end_char_pos": 590, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "skilled", "after": "specialist, and machine learning engineers in order", "start_char_pos": 605, "end_char_pos": 612, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "corpus", "after": "dataset", "start_char_pos": 739, "end_char_pos": 745, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "documents annotated", "after": "annotated documents", "start_char_pos": 919, "end_char_pos": 938, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "fat phobia", "after": "fatphobia", "start_char_pos": 1334, "end_char_pos": 1344, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1405, "end_char_pos": 1406, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "D", "before": "new", "after": null, "start_char_pos": 1456, 
"end_char_pos": 1459, "major_intent": "coherence", "raw_intents": [ "coherence", "style", "coherence" ] }, { "type": "R", "before": "(although it has been applied", "after": "nevertheless it is currently customized", "start_char_pos": 1529, "end_char_pos": 1558, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "D", "before": ")", "after": null, "start_char_pos": 1584, "end_char_pos": 1585, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 122, 284, 497, 697, 797, 975, 1347, 1451 ]
arxiv
2104.12265
2
This paper presents a new approach for offensive language and hate speech detection on social media. Our approach incorporate an offensive lexicon composed by implicit and explicit offensive and swearing expressions annotated with binary classes: context-dependent and context-independent offensive. Due to the severity of the hate speech and offensive comments in Brazil, and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate our method. However, the proposal may be applied to any other language or domain. Based on the obtained results, the proposed approach showed high performance results overcoming the current baselines for European and Brazilian Portuguese.
This paper provides a new approach for offensive language and hate speech detection on social media. Our approach incorporates an offensive lexicon composed of implicit and explicit offensive and swearing expressions annotated with binary classes: context-dependent and context-independent offensive. Due to the severity of the hate speech and offensive comments in Brazil, and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate the proposed method. Nevertheless, our proposal may be applied to any other language or domain. Based on the obtained results, the proposed approach showed high-performance overcoming the current baselines for European and Brazilian Portuguese.
[ { "type": "R", "before": "presents", "after": "provides", "start_char_pos": 11, "end_char_pos": 19, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "incorporate", "after": "incorporates", "start_char_pos": 114, "end_char_pos": 125, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "by", "after": "of", "start_char_pos": 156, "end_char_pos": 158, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "our method. However, the", "after": "the proposed method. Nevertheless, our", "start_char_pos": 467, "end_char_pos": 491, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "high performance results", "after": "high-performance", "start_char_pos": 609, "end_char_pos": 633, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 100, 299, 478, 548 ]
arxiv
2105.01542
1
Machine reading comprehension (MRC) is a sub-field in natural language processing or computational linguistics. MRC aims to help computers understand unstructured texts and then answer questions related to them. In this paper, we present a new Vietnamese corpus for conversational machine reading comprehension ( ViCoQA ), consisting of 10,000 questions with answers over 2,000 conversations about health news articles. We analyze ViCoQA in depth with different linguistic aspects. Then, we evaluate several baseline models about dialogue and reading comprehension on the ViCoQA corpus. The best model obtains an F1 score of 45.27\%, which is 30.91 points behind human performance (76.18\%), indicating that there is ample room for improvement.
Machine reading comprehension (MRC) is a sub-field in natural language processing or computational linguistics. MRC aims to help computers understand unstructured texts and then answer questions related to them. In this paper, we present a new Vietnamese corpus for conversational machine reading comprehension ( UIT-ViCoQA ), consisting of 10,000 questions with answers over 2,000 conversations about health news articles. We analyze UIT-ViCoQA in depth with different linguistic aspects. Then, we evaluate several baseline models about dialogue and reading comprehension on the UIT-ViCoQA corpus. The best model obtains an F1 score of 45.27\%, which is 30.91 points behind human performance (76.18\%), indicating that there is ample room for improvement.
[ { "type": "R", "before": "ViCoQA", "after": "UIT-ViCoQA", "start_char_pos": 313, "end_char_pos": 319, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "ViCoQA", "after": "UIT-ViCoQA", "start_char_pos": 431, "end_char_pos": 437, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "ViCoQA", "after": "UIT-ViCoQA", "start_char_pos": 572, "end_char_pos": 578, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 111, 211, 419, 481, 586 ]
arxiv
2105.01542
2
Machine reading comprehension (MRC) is a sub-field in natural language processing or computational linguistics. MRC aims to help computers understand unstructured texts and then answer questions related to them. In this paper, we present a new Vietnamese corpus for conversational machine reading comprehension (UIT-ViCoQA), consisting of 10,000 questions with answers over 2,000 conversations about health news articles. We analyze UIT-ViCoQA in depth with different linguistic aspects. Then, we evaluate several baseline models about dialogue and reading comprehension on the UIT-ViCoQA corpus. The best model obtains an F1 score of 45.27\%, which is 30.91 points behind human performance (76.18\%), indicating that there is ample room for improvement.
Machine reading comprehension (MRC) is a sub-field in natural language processing which aims to help computers understand unstructured texts and then answer questions related to them. In practice, conversation is an essential way to communicate and transfer information. To help machines understand conversation texts, we present UIT-ViCoQA - a new corpus for conversational machine reading comprehension in the Vietnamese language. This corpus consists of 10,000 questions with answers to over 2,000 conversations about health news articles. Then, we evaluate several baseline approaches for conversational machine comprehension on the UIT-ViCoQA corpus. The best model obtains an F1 score of 45.27\%, which is 30.91 points behind human performance (76.18\%), indicating that there is ample room for improvement.
[ { "type": "R", "before": "or computational linguistics. MRC", "after": "which", "start_char_pos": 82, "end_char_pos": 115, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "this paper, we present a new Vietnamese", "after": "practice, conversation is an essential way to communicate and transfer information. To help machines understand conversation texts, we present UIT-ViCoQA - a new", "start_char_pos": 215, "end_char_pos": 254, "major_intent": "coherence", "raw_intents": [ "meaning-changed", "coherence", "coherence" ] }, { "type": "R", "before": "(UIT-ViCoQA), consisting", "after": "in the Vietnamese language. This corpus consists", "start_char_pos": 311, "end_char_pos": 335, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "to", "start_char_pos": 369, "end_char_pos": 369, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "D", "before": "We analyze UIT-ViCoQA in depth with different linguistic aspects.", "after": null, "start_char_pos": 423, "end_char_pos": 488, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "models about dialogue and reading", "after": "approaches for conversational machine", "start_char_pos": 524, "end_char_pos": 557, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 111, 211, 422, 488, 597 ]
arxiv
10010
1
During a meeting of the Democratic Party of Serbia today , the Prime Minister Vojislav Kotunica said that he will not agree to Kosovo's independence and that the solution for Kosovo is wide autonomy. "When I say autonomy, that's more than autonomy Kosovo had prior to 1999, but it is still just autonomy and not independence," Kotunica said. After the war in 1999, Kosovo was put under UNMIK's administration, but it still officialy remains a province in Serbia. The province's Albanian majority wants full independence from Serbia. He also reaffirmed his stance on participation of Serbs in Kosovo's institutions: "For now, there is no room for Serbian representatives in institutions of Kosovo." The Serbian government had advised Serbs in Kosovo not to vote in the local parliamentary elections held last October. Kotunica said that the degree of participation depends on advancement in protection of human rights of the Serbs remaining in the province.
During a meeting of the Democratic Party of Serbia on Saturday , the Prime Minister Vojislav Kotunica said that he will not agree to Kosovo's independence , and that the solution for Kosovo is wide autonomy. "When I say autonomy, that's more than autonomy Kosovo had prior to 1999, but it is still just autonomy and not independence," Kotunica said. After the war in 1999, Kosovo was put under UNMIK's administration, but it still officially remains a province in Serbia. The province's Albanian majority wants full independence from Serbia. Kotunica also reaffirmed his stance on participation of Serbs in Kosovo's institutions: "For now, there is no room for Serbian representatives in institutions of Kosovo." The Serbian government had advised Serbs in Kosovo not to vote in the local parliamentary elections held last October. Kotunica said that the degree of participation depends on advancement in protection of human rights of the Serbs remaining in the province.
[ { "type": "R", "before": "today", "after": "on Saturday", "start_char_pos": 51, "end_char_pos": 56, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 149, "end_char_pos": 149, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "officialy", "after": "officially", "start_char_pos": 424, "end_char_pos": 433, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "He", "after": "Kotunica", "start_char_pos": 534, "end_char_pos": 536, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 200, 342, 463, 533, 698, 817 ]
news
10044
3
Five of the sites sued are registered in the U.S., and the sixth site is registered in Spain. The sites sued are Shuntv.net, Zonatracker.com, Btefnet.net, Scifi-classics.net, Cddvdheaven.co.uk, and Bragginrights.biz According to research by Envisional, recent surveys report downloads of TV programmes are up by 150\% in a year. Most of the downloads, 70\%, used BitTorrent sites. With smaller size than movies and broadband access, popular shows often appear within hours of airing on TV. CEO of the MPAA , Dan Glickman told the BBC "Since we began shutting these sites down, the time that it takes to download a file on BitTorrent has increased exponentially which means the experience of downloading copyrighted films and TV shows is not what it used to be. We intend to make it even worse. Protecting the television industry is essential."
Five of the sites sued are registered in the U.S., and the sixth site is registered in Spain. The sites sued are Shuntv.net, Zonatracker.com, Btefnet.net, Scifi-classics.net, Cddvdheaven.co.uk, and Bragginrights.biz . According to research by Envisional, recent surveys report downloads of TV programmes are up by 150\% in a year. Most of the downloads, 70\%, used BitTorrent sites. With the increasing ubiquity of broadband Internet access, popular television shows, which are smaller downloads than movies, often appear within hours of airing on TV. MPAA CEO Dan Glickman told the BBC , "Since we began shutting these sites down, the time that it takes to download a file on BitTorrent has increased exponentially , which means the experience of downloading copyrighted films and TV shows is not what it used to be. We intend to make it even worse. Protecting the television industry is essential."
[ { "type": "A", "before": null, "after": ".", "start_char_pos": 216, "end_char_pos": 216, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "smaller size than movies and broadband", "after": "the increasing ubiquity of broadband Internet", "start_char_pos": 387, "end_char_pos": 425, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "shows", "after": "television shows, which are smaller downloads than movies,", "start_char_pos": 442, "end_char_pos": 447, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "CEO of the MPAA ,", "after": "MPAA CEO", "start_char_pos": 491, "end_char_pos": 508, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 535, "end_char_pos": 535, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 663, "end_char_pos": 663, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 93, 329, 381, 490, 763, 796 ]
news
10050
1
Elections for local governments were held in Croatia today . The polls were closed at 19:00 local time (UTC+2). The results are not in yet, but exit polls and preliminary results were published by GONG and State Election Committee. According to GONG, an NGO observing the elections, Milan Bandić will be able to form a government in the city of Zagreb, since his list has won around 46\%. This would give them 27 of 51 seats in the capital city. Bandić is a head of the coalition list formed by Social Democrat Party of Croatia (SDP), Croatian Peasant Party (HSS), and Croatian Pensioner Party (HUS). The preliminary results from the State Election Committee show 40.90\% for Bandić at 23:30 local time. Branimir Glavaš's, who was recently expelled from Croatian Democratic Union (HDZ), list seems to be in lead in Osijek according to the exit polls, with 27.24\% (which would equal 9 seats in the local parliament) , Liberal Party has 18.47\% (6 seats) and Croatian Party of Justice is third with 14.5\% (4 seats). According to GONG, Social Democrat Party has a lead in Split with 29.02\% of votes (10 seats), while Croatian Democratic Union is following with 18.54\% (6 seats).
Elections for local governments were held in Croatia Sunday . The polls were closed at 19:00 local time (UTC+2). The results are not in yet, but exit polls and preliminary results were published by GONG and State Election Committee. According to GONG, an NGO observing the elections, Milan Bandić , can form a government in the city of Zagreb, since his list has won around 46\%. This would give the coalition 27 of 51 seats in the capital city. Bandić is a head of the coalition list formed by Social Democrat Party of Croatia (SDP), Croatian Peasant Party (HSS), and Croatian Pensioner Party (HUS). The preliminary results from the State Election Committee show 40.90\% for Bandić at 23:30 local time. Branimir Glavaš's, who was recently expelled from Croatian Democratic Union (HDZ), list seems to be in lead in Osijek according to the exit polls, with 27.24\% (which would equal 9 seats in the local parliament) . Liberal Party has 18.47\% (6 seats) , and Croatian Party of Justice is third with 14.5\% (4 seats). According to GONG, Social Democrat Party has a lead in split with 29.02\% of votes (10 seats), while Croatian Democratic Union is following with 18.54\% (6 seats).
[ { "type": "R", "before": "today", "after": "Sunday", "start_char_pos": 53, "end_char_pos": 58, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "will be able to", "after": ", can", "start_char_pos": 296, "end_char_pos": 311, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "them", "after": "the coalition", "start_char_pos": 405, "end_char_pos": 409, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ",", "after": ".", "start_char_pos": 916, "end_char_pos": 917, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 954, "end_char_pos": 954, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Split", "after": "split", "start_char_pos": 1072, "end_char_pos": 1077, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 60, 111, 231, 388, 445, 600, 703, 1016 ]
news
100524
1
360px|Executives at the press conference of 2008 APRICOT (from left to right): Philip Smith, Gaurab Raj Upadhaya (Chairman of Asia & Pacific Internet Association), Tony Teng, and Feipei Lai. The twelfth-annual Asia Pacific Regional Internet Conference on Operational Technologies (a.k.a APRICOT) , returned to Taiwan this year at Taipei Howard Plaza Hotel ; the first appearance since the 2003 conference held by the Taiwan Network Information Center (TWNIC) at the Taipei International Convention Center and Grand Hyatt Taipei. As Wikinews Journalist Rico Shen reported on the recent "Edison Chen photo scandal" incident, he commented: Workshops with varied topics and different technology levels took place from February 20 to 24, while several main seminars and speeches for industry, governmental, and academic executives ran from February 25 to 29. Several industry experts such as Wilfred Kwan (Chief Technology Officer of AsiaNetCom), Chung-laung Liu ( ISOC Taiwan Chapter Chair ), and Maemura Akinori (EC Chair of APNIC) will give several speeches related to the Internet industry at the conference.
360px|Executives at the press conference of 2008 APRICOT (from left to right): Philip Smith, Gaurab Raj Upadhaya (Chairman of Asia & Pacific Internet Association), Tony Teng, and Feipei Lai. The twelfth-annual Asia Pacific Regional Internet Conference on Operational Technologies (a.k.a APRICOT) returned to Taiwan this year at the Taipei Howard Plaza Hotel . This is the first appearance since the 2003 conference held by the Taiwan Network Information Center (TWNIC) at the Taipei International Convention Center and Grand Hyatt Taipei. When Wikinews journalist Rico Shen reported on the recent "Edison Chen photo scandal" incident, Tony Tien-lai Teng commented: Workshops with varied topics and different technology levels took place from February 20 to 24, while several main seminars and speeches for industry, governmental, and academic executives ran from February 25 to 29. Several industry experts such as Wilfred Kwan (Chief Technology Officer of AsiaNetCom), Chung-laung Liu ( Taiwan Chapter Chair of ISOC ), and Maemura Akinori (EC Chair of APNIC) will give several speeches related to the Internet industry at the conference.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 296, "end_char_pos": 297, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 330, "end_char_pos": 330, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": ";", "after": ". This is", "start_char_pos": 357, "end_char_pos": 358, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "As Wikinews Journalist", "after": "When Wikinews journalist", "start_char_pos": 530, "end_char_pos": 552, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "he", "after": "Tony Tien-lai Teng", "start_char_pos": 624, "end_char_pos": 626, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "ISOC", "after": null, "start_char_pos": 961, "end_char_pos": 965, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "of ISOC", "start_char_pos": 987, "end_char_pos": 987, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] } ]
[ 0, 190, 358, 529, 854 ]
news
100534
1
There is also a Tree of Life Web Project, and a Catalogue of Life, the latter being the most successful; it is a compilation of 1,008,965 species, from 47 taxonomical databases. ZipcodeZoo offers over 3 million web pages describing species of plants and animals . Pages contain 258, 753 photostaken by 1, 369 photographers, 1, 104 sound recordings, and definitions of 234 , 888 terms. 70 , 725 Large photos can be zoomed and panned . ARKive collects media of species, and the All Species Foundation was a failed early attempt at a web catalog.
There is also a Tree of Life Web Project, and a Catalogue of Life, the latter being the most successful; it is a compilation of 1,008,965 species, from 47 taxonomical databases. ZipcodeZoo offers descriptions of over 3 million species of plants and animals , with 250 , 000 photos, maps, pronunciations, and definitions in 13 languages . ARKive collects media of species, and the All Species Foundation was a failed early attempt at a web catalog.
[ { "type": "A", "before": null, "after": "descriptions of", "start_char_pos": 196, "end_char_pos": 196, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "web pages describing", "after": null, "start_char_pos": 212, "end_char_pos": 232, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "D", "before": ". Pages contain 258, 753 photostaken by 1, 369 photographers, 1, 104 sound recordings, and definitions of 234", "after": null, "start_char_pos": 263, "end_char_pos": 372, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "888 terms. 70", "after": "with 250", "start_char_pos": 375, "end_char_pos": 388, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "725 Large photos can be zoomed and panned", "after": "000 photos, maps, pronunciations, and definitions in 13 languages", "start_char_pos": 391, "end_char_pos": 432, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 104, 177, 385 ]
news
100809
1
Oil in productionCrude oil prices in New York rose to a new record of $102.59 per barrel on Thursday, although the figure increased even more during after hours trading. In less than a month, prices have risen $10, leading the figures to above the record highs set during the 1980s (taking inflation into account) . Information provided by the International Energy Agency has said that the previous record was $102.53, with the figures being adjusted according to inflation levels. The weak dollar is seen as a major cause of this rise . Congressman Ron Paul of Texas, pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the price of oil 's rapid ascent , it had remained flat when compared to the price of gold. Increasing demand for oil has also been cited as the cause for this increase. Violence in Nigeria earlier this year has led to a drop the country's production by almost a quarter. The most recent information produced by the Energy Information Administration has shown an increase in gasoline prices for all but one of the areas surveyed. A graph to show the increase in gasoline pricesThere have also been suggestions that reports of a fire at a National Gas Terminal may have contributed to the rising oil price. Time Evans from Citigroup Futures has stated he believes that this fire at the UK natural gas terminal is creating a strong push in the European market, and that is translating here [the US]."
An oil refineryCrude oil prices in New York rose to a new record of US $102.59 per barrel on Thursday, although the figure increased even more during after hours trading. In less than a month, prices have risen $10, leading the inflation adjusted prices above the record highs set during the 1980s . Information provided by the International Energy Agency has said that the previous record was $102.53, with the figures being adjusted according to inflation levels. While the weak dollar is seen as a major cause of this rise , Congressman Ron Paul of Texas, pointed out to Federal Reserve chairman Ben Bernanke in a committee meeting this week that despite the rapid ascent the of price of oil , it had remained flat when compared to the price of gold. A graph showing recent increases in gasoline prices Increasing demand for oil has also been cited as the cause for this increase. Violence in Nigeria earlier this year has led to a drop the country's production by almost a quarter. The most recent information produced by the Energy Information Administration has shown an increase in gasoline prices for all but one of the areas surveyed. There have also been suggestions that reports of a fire at a National Gas Terminal may have contributed to the rising oil price. Time Evans from Citigroup Futures said he believes " that this fire at the UK natural gas terminal is creating a strong push in the European market, and that is translating here [the US]."
[ { "type": "R", "before": "Oil in productionCrude oil", "after": "An oil refineryCrude oil", "start_char_pos": 0, "end_char_pos": 26, "major_intent": "clarity", "raw_intents": [ "clarity", "others", "clarity" ] }, { "type": "A", "before": null, "after": "US", "start_char_pos": 70, "end_char_pos": 70, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "fluency", "meaning-changed" ] }, { "type": "R", "before": "figures to", "after": "inflation adjusted prices", "start_char_pos": 228, "end_char_pos": 238, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "D", "before": "(taking inflation into account)", "after": null, "start_char_pos": 283, "end_char_pos": 314, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "The", "after": "While the", "start_char_pos": 483, "end_char_pos": 486, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ".", "after": ",", "start_char_pos": 537, "end_char_pos": 538, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "rapid ascent the of", "start_char_pos": 673, "end_char_pos": 673, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "'s rapid ascent", "after": null, "start_char_pos": 687, "end_char_pos": 702, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "A graph showing recent increases in gasoline prices", "start_char_pos": 762, "end_char_pos": 762, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "A graph to show the increase in gasoline pricesThere", "after": "There", "start_char_pos": 1101, "end_char_pos": 1153, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "has stated he believes", "after": "said he believes \"", "start_char_pos": 1311, "end_char_pos": 1333, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 170, 316, 482, 761, 840, 942, 1100, 1276 ]
news
100868
1
The H5N1 Avian Flu Virus has been found in a dead wild Canadian Goose in Abbotsbury Swannery in Dorset, England. This is the 11th case of the virus turning up in wild birds. The goose was found on February 25, 2008. 10 other cases of the virus have appeared in dead birds, all Mute Swans from the same area.
The H5N1 Avian Flu virus has been found in a dead wild Canadian Goose in Abbotsbury Swannery in Dorset, England. This is the eleventh case of the virus turning up in wild birds. The goose was discovered on February 25, 2008. Ten other cases of the virus have appeared in dead birds, all Mute Swans from the same area.
[ { "type": "R", "before": "Virus", "after": "virus", "start_char_pos": 19, "end_char_pos": 24, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "11th", "after": "eleventh", "start_char_pos": 125, "end_char_pos": 129, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "found", "after": "discovered", "start_char_pos": 188, "end_char_pos": 193, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "R", "before": "10", "after": "Ten", "start_char_pos": 216, "end_char_pos": 218, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] } ]
[ 0, 112, 173, 215 ]
news
100873
1
Interpol has released an "Orange Notice" for the Southeast Asian Jemaah Islamiyah terrorist group leader Mas Selamat bin Kastari, who escaped from a detention center Wednesday. After the Orange Notice was released, Mas Selamat's picture and fingerprints were released to Interpol's 186 member countries. He remains at large.
Interpol has issued an "Orange Notice" for the leader of southeast Asian Jemaah Islamiyah , Mas Selamat bin Kastari, who escaped from a detention center Wednesday. After the "Orange Notice" was released, Mas Selamat's picture and fingerprints were released to Interpol's 186 member countries. He remains at large.
[ { "type": "R", "before": "released", "after": "issued", "start_char_pos": 13, "end_char_pos": 21, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Southeast", "after": "leader of southeast", "start_char_pos": 49, "end_char_pos": 58, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "terrorist group leader", "after": ",", "start_char_pos": 82, "end_char_pos": 104, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "Orange Notice", "after": "\"Orange Notice\"", "start_char_pos": 187, "end_char_pos": 200, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 176, 303 ]
news
101134
2
240px|The starting of 42K and 21K classes. The 2008 Taipei County Jin Shi International Marathon which was established in 1979 and suspended from 1986 to 2001 before being transformed from "Jin Shan" to "Jin Shi" with its racing scale expanded, crossed through 3 townships including Wanli, Jinshan, and Shihmen was held last Sunday ( Mar. 2 in Taipei Time). Not only professional runners from Asian European countries participated in this race, several enterprises, governmental and academic teams all supported this race by participating in 6K Fun Run classes. Even though the Central Weather Bureau warned of an impending dust storm, good weather graced this race with rain-free conditions. Another challenge for the runners in the 21K and 42K classes was cold winds due to the race route partially including coast line. 180px|Rik Ceulemans, a Belgian marathon runner, finally won Men's 42K Champion of this race. Finally, Belgian marathon runner Rik Ceulemans, men's record holder of this race, extended his record with 2H18m13s to win the men's champion , and Yu-fang Hsu, women's champion of 2007 ING Taipei Marathon, won her title at women's class with 2H53m39s. Coincidently, Wen-chien Wu and Wan-ling Wu, former champions of this race, all won the 2nd place in their own gender class. As a recap for the race, not only runners gave their best performances in this race, but several foreign teams - especially from Okinawa, Japan - played a great role adding a cultural exchange aspect to the event. Bros Sports, the URLanizer, supervised by Taipei County Government in this race, some senior managers from Bros Sports URLanizations from the sports industry could arrange the best integration of several marathon races in Taiwan in the future even though Bros Sports promoted a special attempt this year with 3 marathon races including Jin Shi, Dajia Mazu, and UMC Hsinchu stages.
240px|The start of 42K and 21K classes. The 2008 Taipei County Jin Shi International Marathon which was established in 1979 and suspended from 1986 to 2001 before being transformed from "Jin Shan" to "Jin Shi" with its racing scale expanded, crossed through three townships including Wanli, Jinshan, and Shihmen was held last Sunday ( March 2 in Taipei Time). Not only professional runners from Asian and European countries participated in this race, but several enterprises, governmental and academic teams also supported this race by participating in 6K Fun Run classes. Even though the Central Weather Bureau warned of an impending dust storm, good weather graced this race with rain-free conditions. Another challenge for the runners in the 21K and 42K classes was cold winds due to the race route partially including coast line. 180px|Rik Ceulemans, a Belgian marathon runner, finally won Men's 42K Champion of this race. Finally, Belgian marathon runner Rik Ceulemans, men's record holder of this race, extended his record with 2H18m13s to win the men's championship , and Yu-fang Hsu, women's champion of 2007 ING Taipei Marathon, won her title at women's class with 2H53m39s. Coincidently, Wen-chien Wu and Wan-ling Wu, former champions of this race, all won the second place in their own gender class. As a recap for the race, not only runners gave their best performances in this race, but several foreign teams especially from Okinawa, Japan played a great role adding a cultural exchange aspect to the event. Bros Sports, the URLanizer, supervised by Taipei County Government in this race, some senior managers from Bros Sports URLanizations from the sports industry could arrange the best integration of several marathon races in Taiwan in the future even though Bros Sports promoted a special attempt this year with three marathon races including Jin Shi, Dajia Mazu, and UMC Hsinchu stages.
[ { "type": "R", "before": "starting", "after": "start", "start_char_pos": 10, "end_char_pos": 18, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "3", "after": "three", "start_char_pos": 261, "end_char_pos": 262, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "Mar.", "after": "March", "start_char_pos": 334, "end_char_pos": 338, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "and", "start_char_pos": 399, "end_char_pos": 399, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "A", "before": null, "after": "but", "start_char_pos": 446, "end_char_pos": 446, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "all", "after": "also", "start_char_pos": 500, "end_char_pos": 503, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "champion", "after": "championship", "start_char_pos": 1051, "end_char_pos": 1059, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "2nd", "after": "second", "start_char_pos": 1258, "end_char_pos": 1261, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "D", "before": "-", "after": null, "start_char_pos": 1406, "end_char_pos": 1407, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "D", "before": "-", "after": null, "start_char_pos": 1439, "end_char_pos": 1440, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "3", "after": "three", "start_char_pos": 1818, "end_char_pos": 1819, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] } ]
[ 0, 42, 357, 563, 694, 824, 917, 1170, 1294, 1508 ]
news
10138
1
The BBC announced yesterday (16 May 2005) a trial of their new Interactive Media Player (iMP). Five thousand participants from across the UK were selected for this trial. Participants in the trial will be able to download selected BBC television and radio programmes. Approximately 190 hours of TV shows and 310 hours of radio programmes will be available for download. Some feature films and local programming will also be available. Ashley Highfield , BBC director of new media and technology has said this program "Could just be the iTunes for the broadcast industry, enabling our audience to access our TV and radio programmes on their terms - anytime, any place, any how." The BBC is funded by a mandatory licence fee on all televisions in the UK . The fee is 126.50 a year for colour TV and 42 a year for monochrome TV.
The BBC announced on Monday a trial of their new Interactive Media Player (iMP). Five thousand participants from across the U.K. were selected for this trial. Participants in the trial will be able to download selected BBC television and radio programmes. Approximately 190 hours of TV shows and 310 hours of radio programmes will be available for download. Some feature films and local programming will also be available. Ashley Highfield is the BBC's director of new media and technology . Highfield has said that this program "Could just be the iTunes for the broadcast industry, enabling our audience to access our TV and radio programmes on their terms - anytime, any place, any how." The BBC is funded by a mandatory licence fee on all televisions in the U.K . The fee is 126.50 a year for colour TV and 42 a year for monochrome TV.
[ { "type": "R", "before": "yesterday (16 May 2005)", "after": "on Monday", "start_char_pos": 18, "end_char_pos": 41, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "UK", "after": "U.K.", "start_char_pos": 138, "end_char_pos": 140, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": ", BBC", "after": "is the BBC's", "start_char_pos": 452, "end_char_pos": 457, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "has said", "after": ". Highfield has said that", "start_char_pos": 495, "end_char_pos": 503, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "UK", "after": "U.K", "start_char_pos": 749, "end_char_pos": 751, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 94, 170, 267, 369, 434, 677 ]
news
101803
1
Eliot Spitzer (D), Governator of New York New York Governator Eliot Spitzer has announced that he has had gender reassignment surgery , effective Monday. Spitzer was recently discovered to have been a client of a prostitution ring on numerous occasions. In a press conference, Governator Spitzer said: "I am deeply sorry that I did not live up to what was expected of me. To every New Yorker, and to all those who believed in what I tried to stand for, I sincerely apologize. I will try once again outside of politics to serve the common Good ." Public opinion polls in recent days have shown that 68\% of New Yorkers wanted Spitzer to reassign and many politicians had also called for his assassination . As a result Spitzer will be succeeded by Lieutenant Governator David Paterson, who will become New York's first African American governator . It is believed he will be the first illegally blind governator in the United States. Spitzer remains a superdelegate for Senator Hillary Clinton's presidential campaign until he officially steps down as governator . New York Governator Eliot Spitzer and Senator Hillary Rodham Clinton.
Eliot Spitzer (D), Governor of New York New York Governor Eliot Spitzer has announced that he has resigned , effective Monday. Spitzer was recently discovered to have been a client of a prostitution ring on numerous occasions. In a press conference, Governor Spitzer said: "I am deeply sorry that I did not live up to what was expected of me. To every New Yorker, and to all those who believed in what I tried to stand for, I sincerely apologize. I will try once again outside of politics to serve the common good ." Public opinion polls in recent days have shown that 68\% of New Yorkers wanted Spitzer to resign and many politicians had also called for his resignation . As a result Spitzer will be succeeded by Lieutenant Governor David Paterson, who will become New York's first African American governor . It is believed he will be the first legally blind governor in the United States. Spitzer remains a superdelegate for Senator Hillary Clinton's presidential campaign until he officially steps down as governor . New York Governor Eliot Spitzer and Senator Hillary Rodham Clinton.
[ { "type": "R", "before": "Governator", "after": "Governor", "start_char_pos": 19, "end_char_pos": 29, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Governator", "after": "Governor", "start_char_pos": 51, "end_char_pos": 61, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "had gender reassignment surgery", "after": "resigned", "start_char_pos": 102, "end_char_pos": 133, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Governator", "after": "Governor", "start_char_pos": 277, "end_char_pos": 287, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Good", "after": "good", "start_char_pos": 538, "end_char_pos": 542, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "reassign", "after": "resign", "start_char_pos": 636, "end_char_pos": 644, "major_intent": "fluency", "raw_intents": [ "meaning-changed", "fluency", "fluency" ] }, { "type": "R", "before": "assassination", "after": "resignation", "start_char_pos": 690, "end_char_pos": 703, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Governator", "after": "Governor", "start_char_pos": 758, "end_char_pos": 768, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "governator", "after": "governor", "start_char_pos": 835, "end_char_pos": 845, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "illegally blind governator", "after": "legally blind governor", "start_char_pos": 884, "end_char_pos": 910, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "governator", 
"after": "governor", "start_char_pos": 1051, "end_char_pos": 1061, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Governator", "after": "Governor", "start_char_pos": 1073, "end_char_pos": 1083, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 153, 253, 371, 475, 705, 847, 932, 1063 ]
news
102520
1
Diane Abbott: "Post offices are central spaces within a community." Yesterday, the U.K. Labour Party successfully prevented a bill restricting the closure of post offices from being passed, although the leading Labour party only had a majority of 20 due to 19 of it' s MP's voting against the party line. One of these was Diane Abbott, who told Wikinews why she voted for the bill. Mrs. Abbott told Wikinewsthat "Post offices are central spaces within a community." She added that they provide invaluable services and a point of contact for vulnerable people. She also claimed they were more important in places like her constituency which has a large number of elderly people. She said that "they are particularly important [in] places in areas like Hackney where there are fewer bank branches and a large elderly population who rely on the post office to collect their pensions." Mrs. Abbot also said that she was "appalled" to hear of the Post Office closures in her constituency. The Shadow Minister for Business, Enterprise & Regulatory Reform Alan Duncan of the Conservative Party was also contacted by Wikinews , he directed Wikinews to his website to for more information on his opinion. On his website he says that it is "simply not good enough" to allow post offices to close.
Diane Abbott: "Post offices are central spaces within a community." Yesterday, the U.K. Labour Party successfully prevented a bill restricting the closure of post offices from being passed, although the governing Labour party only had a majority of 20 due to 19 of its MPs' voting against the party line. One of these was Diane Abbott, who told Wikinews why she voted for the bill. She told Wikinews: "Post offices are central spaces within a community." She added that they provided invaluable services and a point of contact for vulnerable people. She also claimed they were more important in places like her constituency which has a large number of elderly people. She added: "they are particularly important [in] places in areas like Hackney where there are fewer bank branches and a large elderly population who rely on the post office to collect their pensions." She was "appalled" to hear of the Post Office closures in her constituency. The Shadow Minister for Business, Enterprise & Regulatory Reform Alan Duncan of the Conservative Party was also contacted by Wikinews . He directed Wikinews to his website for more information on his opinion. On his website he says that it is "simply not good enough" to allow post offices to close.
[ { "type": "R", "before": "leading", "after": "governing", "start_char_pos": 203, "end_char_pos": 210, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "it' s MP's", "after": "its MPs'", "start_char_pos": 263, "end_char_pos": 273, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Mrs. Abbott told Wikinewsthat", "after": "She told Wikinews:", "start_char_pos": 382, "end_char_pos": 411, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "provide", "after": "provided", "start_char_pos": 486, "end_char_pos": 493, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "said that", "after": "added:", "start_char_pos": 682, "end_char_pos": 691, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "Mrs. Abbot also said that she", "after": "She", "start_char_pos": 882, "end_char_pos": 911, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", he", "after": ". He", "start_char_pos": 1118, "end_char_pos": 1122, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "D", "before": "to", "after": null, "start_char_pos": 1156, "end_char_pos": 1158, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 67, 304, 381, 465, 559, 677, 886, 983, 1195 ]
news
102922
1
Nicholas Sarkozy On the first day of his state visit to Great Britain, in a Presidential speech French President Nicolas Sarkozy praised the British nation and called for an "entente amicale" between Great Britain and France, in place of the existing, more formal entente cordiale agreement. Speaking, unusually, to both Houses of Parliament, Mr Sarkozy went on to praise the help Britain gave to France during both World Wars and said that "France would never URLet" and that it owed Britain a debt of gratitude. As well as praising Great Britain, Mr Sarkozy also promised to propose sending more French troops to Afghanistan at the Bucharest Summit, to be held later this year, although he did not specify the number of troops he was planning to send. During his two-day state visit, Mr Sarkozy will meet with British Prime Minister Gordon Brown and they are expected to discuss many different issues, from global finance to Afghanistan.
Nicholas Sarkozy in 2007. On the first day of his state visit to the United Kingdom, French President Nicolas Sarkozy gave a speech praising the British nation and called for an "entente amicale" between Great Britain and France, in place of the existing, more formal entente cordiale agreement. Speaking, unusually, to both Houses of Parliament, Sarkozy went on to praise the help Britain gave to France during both World Wars and said that "France would never URLet" and that it owed Britain a debt of gratitude. As well as praising Great Britain, Sarkozy also promised to propose sending more French troops to Afghanistan at the Bucharest Summit, to be held later this year, although he did not specify the number of troops he was planning to send. During his two-day state visit, Sarkozy will meet with British Prime Minister Gordon Brown and they are expected to discuss many different issues, from global finance to Afghanistan.
[ { "type": "A", "before": null, "after": "in 2007.", "start_char_pos": 17, "end_char_pos": 17, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Great Britain, in a Presidential speech", "after": "the United Kingdom,", "start_char_pos": 57, "end_char_pos": 96, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "praised", "after": "gave a speech praising", "start_char_pos": 130, "end_char_pos": 137, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "D", "before": "Mr", "after": null, "start_char_pos": 344, "end_char_pos": 346, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "Mr", "after": null, "start_char_pos": 550, "end_char_pos": 552, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "Mr", "after": null, "start_char_pos": 787, "end_char_pos": 789, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] } ]
[ 0, 292, 514, 754 ]
news
10439
1
Italian police officers and guardians are going on trial for alleged abuse of anti-globalization protesters during the G8 meeting in Genoa in 2001. Reports made by over 250 participants in the event detail verbal and physical abuse. The police allegedly took detainees to a holding center outside the city, where they say they were verbally and physically abused, threatened with rape , and spat at . In one alleged incident, the guards forced a woman's head down a toilet. Another allegation is that asphyxiating gas was sprayed at protesters in their cells. Over 60 people were injured while being taken into custody, three critically. One 23-year old protester was shot dead. Twenty-five of the 250 were brought up on minor charges of looting and ransacking. The police alleged that 93 protesters were in possession of dangerous weapons and resisted arrest, yet Genoa prosecutors dropped all charges against them. Meanwhile, the police face accusations of planting evidence , and fabricating charges against the demonstrators. Unfortunately, due to delays in bringing the authorities to justice, the 5-year statute of limitations has expired on the major charges .
Italian police officers and prison guards have been ordered to stand trial for the alleged abuse of anti-globalization protesters during the G8 summit in Genoa in 2001. Over 250 people people who attended the event have detailed verbal and physical abuse. The police took detainees to a holding center outside the city, where they say they were verbally and physically abused, spat at and threatened with rape . In one alleged incident, the guards forced a woman's head down a toilet. Another allegation is that asphyxiating gas was sprayed at protesters in their cells. Over 60 people were injured while being taken into custody, three critically. One 23-year old protester was shot dead. Twenty-five demostrators are standing trial on minor charges of looting and ransacking. The police alleged that 93 protesters were in possession of dangerous weapons and resisted arrest, yet Genoa prosecutors have dropped all charges against them. Meanwhile, the police have been charged with planting evidence and fabricating charges against the demonstrators. Unfortunately, due to delays in bringing the authorities to justice, the five-year statute of limitations is expectd to expire on most of the more serious charges before rulings are given .
[ { "type": "R", "before": "guardians are going on trial for", "after": "prison guards have been ordered to stand trial for the", "start_char_pos": 28, "end_char_pos": 60, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "meeting", "after": "summit", "start_char_pos": 122, "end_char_pos": 129, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Reports made by over", "after": "Over", "start_char_pos": 148, "end_char_pos": 168, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "participants in the event detail", "after": "people people who attended the event have detailed", "start_char_pos": 173, "end_char_pos": 205, "major_intent": "style", "raw_intents": [ "style", "style", "clarity" ] }, { "type": "D", "before": "allegedly", "after": null, "start_char_pos": 244, "end_char_pos": 253, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "spat at and", "start_char_pos": 364, "end_char_pos": 364, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": ", and spat at", "after": null, "start_char_pos": 386, "end_char_pos": 399, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "of the 250 were brought up", "after": "demostrators are standing trial", "start_char_pos": 692, "end_char_pos": 718, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "have", "start_char_pos": 884, "end_char_pos": 884, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "face accusations of planting evidence ,", "after": "have been charged with planting evidence", "start_char_pos": 941, "end_char_pos": 980, 
"major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "5-year", "after": "five-year", "start_char_pos": 1105, "end_char_pos": 1111, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "has expired on the major charges", "after": "is expectd to expire on most of the more serious charges before rulings are given", "start_char_pos": 1135, "end_char_pos": 1167, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] } ]
[ 0, 147, 232, 401, 474, 560, 638, 679, 762, 918, 1031 ]
news
104857
1
Image:US-DeptOfJustice-Seal.svg|Seal of the United States Department of Justice ]] Wikinews has learned that a United States Department of Justice (DOJ) IP Address has been blocked on Wikipedia after making edits to an article which were considered "vandalism". In two separate instances, the IP address from the DOJ removed information from the Wikipedia article about the organization Committee for Accuracy in Middle East Reporting in America (CAMERA), regarding an attempt by the organization to secretly gain influence on the site . The IP address has been confirmed by Wikinews to be registered and used by the DOJ located in Washington, D.C.
Image:US-DeptOfJustice-Seal.svg|Seal of the United States Department of Justice ]] Wikinews has learned that a United States Department of Justice (DOJ) IP Address has been blocked on Wikipedia after making edits to an article which were considered "vandalism". The article was about the organization Committee for Accuracy in Middle East Reporting in America (CAMERA), and the information removed was regarding an attempt by CAMERA to secretly gain influence on Wikipedia itself. The IP address from the DOJ removed information from the Wikipedia article in two separate instances . The IP address has been confirmed by Wikinews to be registered and used by the DOJ located in Washington, D.C.
[ { "type": "R", "before": "In two separate instances, the IP address from the DOJ removed information from the Wikipedia article", "after": "The article was", "start_char_pos": 262, "end_char_pos": 363, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "and the information removed was", "start_char_pos": 456, "end_char_pos": 456, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the organization", "after": "CAMERA", "start_char_pos": 481, "end_char_pos": 497, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "the site", "after": "Wikipedia itself. The IP address from the DOJ removed information from the Wikipedia article in two separate instances", "start_char_pos": 528, "end_char_pos": 536, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 261, 538 ]
news
104857
2
Image:US-DeptOfJustice-Seal.svg|Seal of the United States Department of Justice ]] Wikinews has learned that a United States Department of Justice (DOJ) IP Address has been blocked on Wikipedia after making edits to an article which were considered "vandalism". The article was about the organization Committee for Accuracy in Middle East Reporting in America (CAMERA), and the information removed was regarding an attempt by CAMERA to secretly gain influence on Wikipedia itself. The IP address from the DOJ removed information from the Wikipedia article in two separate instances . The IP address has been confirmed by Wikinews to be registered and used by the DOJ located in Washington, D.C.
Image:US-DeptOfJustice-Seal.svg|Seal of the United States Department of Justice ]] Wikinews has learned that a United States Department of Justice (DOJ) IP Address has been blocked on Wikipedia after making edits to an article which were considered "vandalism". In two separate instances, the IP address from the DOJ removed information from the Wikipedia article about the organization Committee for Accuracy in Middle East Reporting in America (CAMERA), regarding an attempt by the organization to secretly gain influence on the site . The IP address has been confirmed by Wikinews to be registered and used by the DOJ located in Washington, D.C.
[ { "type": "R", "before": "The article was", "after": "In two separate instances, the IP address from the DOJ removed information from the Wikipedia article", "start_char_pos": 262, "end_char_pos": 277, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "and the information removed was", "after": null, "start_char_pos": 370, "end_char_pos": 401, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "CAMERA", "after": "the organization", "start_char_pos": 426, "end_char_pos": 432, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Wikipedia itself. The IP address from the DOJ removed information from the Wikipedia article in two separate instances", "after": "the site", "start_char_pos": 463, "end_char_pos": 581, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 261, 480, 583 ]
news
105134
1
240px| Several companies just looked the vision of WiMAX but ignore its threats. In the picture is MTube, innovated by Science and Technology Advisory Group of Executive Yuan of the Republic of China (Taiwan). Computers and technology bring several conveniences to the modern lifestyle , but it creates risks like computer viruses and hacking. Many people want to enjoy the conveniences brought by new technology, but they may not be aware of its problems. WiMAX , which is a communication standard is mainly promoted by the Taipei Computer Association, in addition to the Industrial Technology Research Institute and the Ministry of Economic Affairs of the Republic of China , is considered by some to be the next generation in mobile devices. Since several Internet crimes are progressively respected by several information security companies , some of them disputed on security of WiMAX. After all, everyone wants to welcome this new technology but is afraid of its problem to bring about security issues. Although some (security) companies analyzed statistics on potential issues to dispute the future of WiMAX, but the question is how will they present their analysis? With opportunities from two press conferences in Taiwan, Wikinews Journalist Rico Shen briefly and successively interviewed Hsiao-wen Hung, President of Microsoft Research Asia and Li Chang, Deputy Secretary General of Taipei Computer Association. With their experiences in the IT industry, they agreed to comment on this question. __NOTOC__ Hsiao-wen Hung: Information security isn't built within a day . Hsiao-wen Hung: The information security industry isn't built within a simple solution. (Interview audio clip) The improvements of information technology firmly brought on opportunities, but information security is particularly respected by companies from security industry, As of this , how about the information security? 
Prior to the (Visual Studio 2008 and Windows Server 2008) launch, although WiMAX, a next-generation networking technology, will bring on the growth of mobile users, but its future was ever disputed by some companies from information security industry. Hung: Safety and flexibility , its relation is like a paddle . If something wanted to be linkable anywhere, there were still some considerations on security . It's like a house which anyone can enter in it . If you want to ensure its safety on your house, you should doseveral hard works . That's why the security is a must-have trend in reality. As of information securitywhat you mentioned about before , relatively, if a newly-proposed future technology wasn't normally populated, it's a golden opportunity for companies when some security companies properly disputed on its future. Hung: Even though it's safe to lock everything all, but it is inconvenience to lock (them all) . Therefore, some key elements like encryption, password authentication, policy, and some Web 2.0 solutions played key roles in information security. There were some involuntary and deliberate attackers on the Internet , if anyone want to prevent them, a complete planning should do priory . That's why the information security industry is built with several solutions and long-term experiences but not a simple solution in a day. Li Chang: Convergences will be brought for security and IT industries . Li Chang: Convergences will be brought for security and IT industries besides of changes on WiMAX. (Interview audio clip) What's your opinion on several disputes from IS companies during the "2008 Asia-pacific Information Security Forum"? Chang: Our member companies will do attempts on trend-changing. As of our board committee, because (our member) companies mostly manufactured notebook computers, if a common consensus from several senior officers from IT industrywill be established , I think the objects of TCA will be more and more cleared . 
What's your opinion on Hsiao-wen Hung's word, "When people want to enjoy a new technology, some protections should be necessity "? Chang: It's an undoubted truth. When a new technology was under development, companies mostly looked on its vision rather than its hidden factors like information or data security. As the Internet just began in a mature infrastructure, the public might not know its critical issue on security factor. As of WiMAX which you mentioned, even though it's under development, but you will not imagine what it will happen in the future. That's the opportunity for security industry.
240px| Some companies see the possibilities of WiMAX but ignore its threats. Pictured is MTube, developed by the Science and Technology Advisory Group of Executive Yuan of the Republic of China (Taiwan). Computers and technology make our modern lifestyle more convenient, but they also create new risks like computer viruses and hacking. Many people want to enjoy the convenience of new technology, but they may not be aware of its problems. WiMAX is being promoted in Taiwan by the Taipei Computer Association, the Industrial Technology Research Institute and the Ministry of Economic Affairs of the Republic of China . It is considered by some to be the communication standard for the next generation of mobile devices. Information security companies are concerned increasingly by Internet crime, and some of them question the security of WiMAX. After all, everyone wants to welcome this new technology but is afraid of its potential to cause security issues. Some companies have analyzed potential security issues which threaten the success of WiMAX. But what have they found? Taking the opportunity of two press conferences in Taiwan, Wikinews Journalist Rico Shen briefly interviewed both Hsiao-wen Hung, President of Microsoft Research Asia and Li Chang, Deputy Secretary General of Taipei Computer Association. With their experience in the IT industry, they agreed to comment on this question. __NOTOC__ Hsiao-wen Hung: Information security isn't built in a day Hsiao-wen Hung: The information security industry isn't built within a simple solution. (Interview audio clip) Improvements in information technology have certainly brought opportunities, but information security is of particular concern to companies in the security industry. Since this is so, what is the state of information security? Prior to the launch of (Visual Studio 2008 and Windows Server 2008) ... 
although WiMAX, a next-generation networking technology, will bring about growth of mobile users, its future has been questioned by some companies in the information security industry. Hung: Safety and flexibility are like a paddle with two blades. If you want something to be connectible anywhere, there are security considerations . It's like a house which anyone can enter . If you want to ensure the security of your building, there are some difficult things you need to do . That's why security is really a must-have thing. As for information security, as you mentioned earlier , relatively, if a newly-proposed future technology wasn't normally populated, it's a golden opportunity for companies when some security companies properly disputed on its future. Hung: Even though it's secure to lock everything down, it is inconvenient . Therefore, some key elements like encryption, password authentication, policy, and some Web 2.0 solutions have a key role to play in information security. There were some involuntary and deliberate attackers on the Internet . If anyone wants to prevent them, complete planning should be a priority . That's why the information security industry is built with several solutions and long-term experiences , not a simple solution in a day. Li Chang: Convergences will be brought for security and IT industries Li Chang: Convergences will be brought for security and IT industries besides of changes on WiMAX. (Interview audio clip) What's your opinion on issues raised by IS companies during the "2008 Asia-pacific Information Security Forum"? Chang: Our member companies will attempt to change the trend. As for our board committee, because (our member) companies mostly manufacture notebook computers, if a common consensus is established among senior executives in the IT industry , I think the objectives of Taipei Computer Association will be more and more clear . 
What's your opinion on Hsiao-wen Hung's statement: "When people want to enjoy a new technology, some protection is necessary "? Chang: It's an undoubted truth. When a new technology was under development, companies mostly looked at its possibilities, rather than hidden factors like information or data security. As the Internet just begins to be a mature infrastructure, the public might not know how critical security issues are. As for WiMAX, which you mentioned, even though it's under development, you cannot imagine what will happen in the future. That's the opportunity for security industry.
[ { "type": "R", "before": "Several companies just looked the vision", "after": "Some companies see the possibilities", "start_char_pos": 7, "end_char_pos": 47, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "In the picture", "after": "Pictured", "start_char_pos": 81, "end_char_pos": 95, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "innovated by", "after": "developed by the", "start_char_pos": 106, "end_char_pos": 118, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "bring several conveniences to the modern lifestyle , but it creates", "after": "make our modern lifestyle more convenient, but they also create new", "start_char_pos": 235, "end_char_pos": 302, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "conveniences brought by", "after": "convenience of", "start_char_pos": 374, "end_char_pos": 397, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": ", which is a communication standard is mainly promoted", "after": "is being promoted in Taiwan", "start_char_pos": 463, "end_char_pos": 517, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "in addition to", "after": null, "start_char_pos": 554, "end_char_pos": 568, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": ",", "after": ". 
It", "start_char_pos": 676, "end_char_pos": 677, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "next generation in", "after": "communication standard for the next generation of", "start_char_pos": 710, "end_char_pos": 728, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "Since several Internet crimes are progressively respected by several information security companies ,", "after": "Information security companies are concerned increasingly by Internet crime, and", "start_char_pos": 745, "end_char_pos": 846, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "disputed on", "after": "question the", "start_char_pos": 860, "end_char_pos": 871, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "problem to bring about", "after": "potential to cause", "start_char_pos": 969, "end_char_pos": 991, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "Although some (security) companies analyzed statistics on potential issues to dispute the future of WiMAX, but the question is how will they present their analysis? With opportunities from", "after": "Some companies have analyzed potential security issues which threaten the success of WiMAX. 
But what have they found?", "start_char_pos": 1009, "end_char_pos": 1197, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "Taking the opportunity of", "start_char_pos": 1198, "end_char_pos": 1198, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "and successively interviewed", "after": "interviewed both", "start_char_pos": 1270, "end_char_pos": 1298, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "experiences", "after": "experience", "start_char_pos": 1434, "end_char_pos": 1445, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "within a day .", "after": "in a day", "start_char_pos": 1566, "end_char_pos": 1580, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "D", "before": "The improvements of information technology firmly brought on", "after": null, "start_char_pos": 1692, "end_char_pos": 1752, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "Improvements in information technology have certainly brought", "start_char_pos": 1753, "end_char_pos": 1753, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "particularly respected by companies from security industry, As of this , how about the", "after": "of particular concern to companies in the security industry. 
Since this is so, what is the state of", "start_char_pos": 1797, "end_char_pos": 1883, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "launch of", "start_char_pos": 1919, "end_char_pos": 1919, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "launch,", "after": "...", "start_char_pos": 1965, "end_char_pos": 1972, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "on the", "after": "about", "start_char_pos": 2041, "end_char_pos": 2047, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "but its future was ever disputed", "after": "its future has been questioned", "start_char_pos": 2072, "end_char_pos": 2104, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "from", "after": "in the", "start_char_pos": 2123, "end_char_pos": 2127, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", its relation is", "after": "are", "start_char_pos": 2188, "end_char_pos": 2205, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ". If something wanted to be linkable", "after": "with two blades. 
If you want something to be connectible", "start_char_pos": 2220, "end_char_pos": 2256, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "were still some considerations on security", "after": "are security considerations", "start_char_pos": 2273, "end_char_pos": 2315, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "in it", "after": null, "start_char_pos": 2359, "end_char_pos": 2364, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "its safety on your house, you should doseveral hard works", "after": "the security of your building, there are some difficult things you need to do", "start_char_pos": 2389, "end_char_pos": 2446, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "the security is", "after": "security is really", "start_char_pos": 2460, "end_char_pos": 2475, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "trend in reality. As of information securitywhat you mentioned about before", "after": "thing. 
As for information security, as you mentioned earlier", "start_char_pos": 2488, "end_char_pos": 2563, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "safe", "after": "secure", "start_char_pos": 2768, "end_char_pos": 2772, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "all, but it is inconvenience to lock (them all)", "after": "down, it is inconvenient", "start_char_pos": 2792, "end_char_pos": 2839, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "played key roles", "after": "have a key role to play", "start_char_pos": 2948, "end_char_pos": 2964, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ", if anyone want", "after": ". If anyone wants", "start_char_pos": 3059, "end_char_pos": 3075, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "D", "before": "a", "after": null, "start_char_pos": 3093, "end_char_pos": 3094, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "do priory", "after": "be a priority", "start_char_pos": 3120, "end_char_pos": 3129, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "but", "after": ",", "start_char_pos": 3235, "end_char_pos": 3238, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "style" ] }, { "type": "D", "before": ".", "after": null, "start_char_pos": 3341, "end_char_pos": 3342, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "several disputes from", "after": "issues raised by", "start_char_pos": 3488, "end_char_pos": 3509, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "do attempts on trend-changing. 
As of", "after": "attempt to change the trend. As for", "start_char_pos": 3615, "end_char_pos": 3651, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "manufactured", "after": "manufacture", "start_char_pos": 3711, "end_char_pos": 3723, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "from several senior officers from IT industrywill be established", "after": "is established among senior executives in the IT industry", "start_char_pos": 3766, "end_char_pos": 3830, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "objects of TCA", "after": "objectives of Taipei Computer Association", "start_char_pos": 3845, "end_char_pos": 3859, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "cleared", "after": "clear", "start_char_pos": 3882, "end_char_pos": 3889, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "word,", "after": "statement:", "start_char_pos": 3932, "end_char_pos": 3937, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "protections should be necessity", "after": "protection is necessary", "start_char_pos": 3988, "end_char_pos": 4019, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "on its vision rather than its", "after": "at its possibilities, rather than", "start_char_pos": 4124, "end_char_pos": 4153, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "began in", "after": "begins to be", "start_char_pos": 4225, "end_char_pos": 4233, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "its critical issue on security factor. 
As of WiMAX", "after": "how critical security issues are. As for WiMAX,", "start_char_pos": 4285, "end_char_pos": 4335, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "but you will not imagine what it", "after": "you cannot imagine what", "start_char_pos": 4393, "end_char_pos": 4425, "major_intent": "style", "raw_intents": [ "fluency", "style", "style" ] } ]
[ 0, 80, 209, 343, 456, 744, 890, 1008, 1173, 1422, 1506, 1668, 1905, 2158, 2221, 2317, 2366, 2448, 2505, 2744, 2841, 2989, 3131, 3270, 3342, 3581, 3645, 3891, 4054, 4203, 4323, 4452 ]
null
105178
1
OSC was the favourite to take the gold this weekend, as they had already won league play with a comfortable lead over København Squash Klub (KSK), and therefore had home court and could pick the opponent in the first semifinal . OSC chose to play byhj Squash Klub (SK) in the first semi final, and beat them with ease 6-1, only loosing the women's second to Ditte Nielsen (SK). During the other semi, the favoured professionals from KSK, ran in to serious problems, injured star player Alex Stait was not able to play for KSK giving the young Herlev team a chance in this semi. After 5 matches and down by two matches, Mikkel Kragholm (Herlev) and Thomas Pilak (Herlev) became double match winners when they each beat their opponent by 3-0 and 3-1, winning the match 4-3 for Herlev. The final was, however, dominated by OSC, and Herlev wasn't ever in the match. Only Danish individual champion Morten Sørensen was able to win for Herlev. Rest of the matches was lost for Herlev . OSC's players proved to be to strong for last years bronze winners from Herlev. OSC won the Danish championship 6-1 in the final at Squash Center Danmark in Odense.
OSC was the favourite to take the gold this weekend, as they had already won league play with a comfortable lead over København Squash Klub (KSK), and therefore had home court and could pick their opponent in the first semi-final . OSC chose to play byhj Squash Klub (SK) in the first semi final, and beat them with ease 6-1, only loosing the women's second to Ditte Nielsen (SK). During the other semi, the favoured professionals from KSK, ran in to serious problems, as injured star player Alex Stait was not able to play for KSK giving the young Herlev team a chance in this semi. After 5 matches and down by two matches, Mikkel Kragholm (Herlev) and Thomas Pilak (Herlev) became double match winners when they each beat their opponents by 3-0 and 3-1, winning the match 4-3 for Herlev. The final was, however, dominated by OSC, and Herlev wasn't ever in the match. Only Danish individual champion Morten Sørensen was able to win for Herlev. Herlev lost the rest of the matches . OSC's players proved to be to strong for last years bronze winners from Herlev. OSC won the Danish championship 6-1 in the final at Squash Center Danmark in Odense.
[ { "type": "R", "before": "the", "after": "their", "start_char_pos": 190, "end_char_pos": 193, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "semifinal", "after": "semi-final", "start_char_pos": 216, "end_char_pos": 225, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "as", "start_char_pos": 465, "end_char_pos": 465, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "opponent", "after": "opponents", "start_char_pos": 724, "end_char_pos": 732, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "Rest", "after": "Herlev lost the rest", "start_char_pos": 937, "end_char_pos": 941, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "D", "before": "was lost for Herlev", "after": null, "start_char_pos": 957, "end_char_pos": 976, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] } ]
[ 0, 227, 376, 577, 782, 861, 936, 1058 ]
news
106517
1
The Oskarshamn Nuclear Power Plant. Two Swedish men that were arrested for being suspected of sabotaging the Oskarshamn Nuclear Power Plant in Sweden, have been released from police custody. Both men were arrested on Wednesday after traces of Acetone Peroxide, or TATP were found on a plastic bag. TATP is also URLanic peroxide which is sometimes used in making household cleaning chemicals. "Both men have been cooperative but they deny any wrongdoing and waived the right to legal counsel. There was no legal ground to hold them any longer," said Swedish police in a statement to the press who also said that an investigation is ongoing. The unnamed men in their 40's and 50's were arrested after security officials at the plant discovered traces of the explosive substance on a plastic bag inside a bag one of the men were carrying. Both men were contractors hired to do welding work in the plant which is owned and operated by Oskarshamnsverkets Kraftgrupp OKG. Authorities were called to the plant on Wednesday along with the bomb squad, who sealed off parts of the plant when they detected the explosive material. Security detected the material during what was described by CNN as a "routine" security check . Police believe it was on one of the man's hands when it rubbed off onto the bag, but no bomb was found on the premises after an extensive search.
The Oskarshamn Nuclear Power Plant. Two Swedish men that were arrested for being suspected of sabotaging the Oskarshamn Nuclear Power Plant in Sweden, have been released from police custody. Both men were arrested on Wednesday after traces of Acetone Peroxide, or TATP , were found on a plastic bag. TATP is also URLanic peroxide which is sometimes used in making household cleaning chemicals. "Both men have been cooperative but they deny any wrongdoing and waived the right to legal counsel. There was no legal ground to hold them any longer," said Swedish police in a statement to the press . He also noted that an investigation is ongoing. The unnamed men , in their 40's and 50's , were arrested after security officials at the plant discovered traces of the explosive substance on a plastic bag inside a bag one of the men was carrying. Both men were contractors hired to do welding work in the plant which is owned and operated by Oskarshamnsverkets Kraftgrupp OKG. Security detected the material on Wednesday during what was described by CNN as a "routine" security check. Authorities were called to the plant along with the bomb squad, who sealed off parts of the plant . Police believe it was on one of the man's hands when it rubbed off onto the bag, but no bomb was found on the premises after an extensive search.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 269, "end_char_pos": 269, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "who also said", "after": ". He also noted", "start_char_pos": 593, "end_char_pos": 606, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 657, "end_char_pos": 657, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 681, "end_char_pos": 681, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "were", "after": "was", "start_char_pos": 824, "end_char_pos": 828, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "Security detected the material on Wednesday during what was described by CNN as a \"routine\" security check.", "start_char_pos": 969, "end_char_pos": 969, "major_intent": "coherence", "raw_intents": [ "meaning-changed", "coherence", "coherence" ] }, { "type": "D", "before": "on Wednesday", "after": null, "start_char_pos": 1007, "end_char_pos": 1019, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "D", "before": "when they detected the explosive material. Security detected the material during what was described by CNN as a \"routine\" security check", "after": null, "start_char_pos": 1081, "end_char_pos": 1217, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] } ]
[ 0, 35, 190, 298, 392, 492, 640, 838, 968, 1123 ]
news
106849
1
A New Jersey state appeals court ruled Tuesday that fifteen Spanish citizens can sue over claims of health issues related to asbestos exposure while working aboard United States Navy and Coast Guard ships docked at United States-Spanish military installations. The defendant, Ohio-based company Owens Illinois, Inc., had sought a trial in a Spanish court, an opinion which was shared by the Superior Court which heard the case . The three-judge panel appellate court overturned the decision of the Superior Court in a 3-0 ruling. Asbestos fibers The Spanish citizens worked aboard U.S. ships between 1950 and 1998, and claim they were exposed to asbestos dust and fibers from piping insulation produced by Owens-Illinois. The piping insulation was originally manufactured in Sayreville, Middlesex County, and Berlin, Camden County New Jersey. The workers say they suffer from diseases related to asbestos such as asbestosis. Owens-Illinois has headquarters in Toledo, Ohio and is a Delaware corporation. The New Jersey appellate panel ruled that the Superior Court judge did not consider where the plaintiffs wanted their case heard, and also held that the U.S. ships are considered U.S. territory and thus the workers' claimed health issues did not begin on Spanish land. Attorneys for Owens-Illinois argued that U.S. ships when docked are subject to the law of Spain, and that the case should be heard in Spanish courts. The court's opinion was written by Judge Anthony Parrillo, who wrote that : "In sum, we conclude that defendant has failed to carry its burden to demonstrate that Spain is an available adequate forum to adjudicate the parties' dispute and therefore the motion to dismiss on forum non conveniens grounds should have been denied without consideration of public- and private-interest factors." The decision reversed the ruling of the Superior Court and remanded the suit back to that court for trial. 
This is not the only asbestos-related lawsuit in which Owens-Illinois is cited as a defendant. The company is also a defendant (among other defendants) in asbestos cases filed in Ohio and other states. In an April 30 press release the company reported that asbestos-related payments had decreased slightly, stating: "Asbestos-related cash payments during the first quarter of 2008 were $40.2 million, down slightly from $41.0 million during the first quarter of 2007." According to the press release the company had 14,000 pending asbestos-related lawsuits as of March 31, 2008. In its balance sheet for the first quarter of 2008 the company reported US$835 million in asbestos-related liabilities. In a May 2 earnings call with financial analysts, Owens-Illinois Chief Financial Officer Edward C. White addressed asbestos-related expenses. "Only a small portion of our first quarter asbestos payments related to the company's proactive legal strategy to reduce risk and accelerate asbestos resolution on favorable terms. Nevertheless, this strategy continues and additional expected spending is reflected both on the current liability portion of our balance sheet as well as in our full-year cash flow projection," said White. He went on to note that: "We exited the business 50 years ago and have been dealing with the legal issues for almost 30 years. For OI, this remains a limited declining liability, which we will continue to manage in a conscientious and responsible manner." Asbestosis is a disease resulting from asbestos exposure which causes lung scarring and can lead to lung cancer. Exposure to asbestos can also lead to a more serious condition known as mesothelioma. Mesothelioma is a cancer which develops in the sac surrounding the lungs and chest cavity, abdominal cavity, or the sac surrounding the heart. Patients with malignant mesothelioma generally do not have positive outcomes, and once diagnosed have six months to a year to live.
A New Jersey state appeals court ruled Tuesday that fifteen Spanish citizens can sue over claims of health issues related to asbestos exposure while working aboard United States Navy and Coast Guard ships docked at United States-Spanish military installations. The defendant, Ohio-based company Owens Illinois, Inc., had sought a trial in a Spanish court, an opinion which was shared by the Superior Court that had heard the case earlier . The three-judge panel appellate court overturned the decision of the Superior Court in a 3-0 ruling. Asbestos fibers The Spanish citizens worked aboard U.S. ships between 1950 and 1998, and claim that they were exposed to asbestos dust and fibers from piping insulation produced by Owens-Illinois. The piping insulation was originally manufactured in Sayreville, Middlesex County, and Berlin, Camden County , New Jersey. The workers say they suffer from diseases related to asbestos such as asbestosis. Owens-Illinois has headquarters in Toledo, Ohio and is a Delaware corporation. The New Jersey appellate panel ruled that the Superior Court judge did not consider where the plaintiffs wanted their case heard, and also held that the U.S. ships are considered U.S. territory and thus the workers' claimed health issues did not begin on Spanish land. Attorneys for Owens-Illinois argued that U.S. ships , when docked, are subject to the law of Spain, and so the case should be heard in Spanish courts. The court's opinion , written by Judge Anthony Parrillo, explained the ruling : "In sum, we conclude that defendant has failed to carry its burden to demonstrate that Spain is an available adequate forum to adjudicate the parties' dispute and therefore the motion to dismiss on forum non conveniens grounds should have been denied without consideration of public- and private-interest factors." The decision reversed the ruling of the Superior Court and remanded the suit back to that court for trial. 
This is not the only asbestos-related lawsuit in which Owens-Illinois is cited as a defendant. The company is also a defendant (among other defendants) in asbestos cases filed in Ohio and other states. In an April 30 press release the company reported that asbestos-related payments had decreased slightly, stating: "Asbestos-related cash payments during the first quarter of 2008 were $40.2 million, down slightly from $41.0 million during the first quarter of 2007." According to the press release , the company had 14,000 pending asbestos-related lawsuits as of March 31, 2008. In its balance sheet for the first quarter of 2008 , the company reported US$835 million in asbestos-related liabilities. In a May 2 earnings call with financial analysts, Owens-Illinois Chief Financial Officer Edward C. White addressed asbestos-related expenses. "Only a small portion of our first quarter asbestos payments related to the company's proactive legal strategy to reduce risk and accelerate asbestos resolution on favorable terms. Nevertheless, this strategy continues and additional expected spending is reflected both on the current liability portion of our balance sheet as well as in our full-year cash flow projection," said White. "We exited the business 50 years ago and have been dealing with the legal issues for almost 30 years. For OI, this remains a limited declining liability, which we will continue to manage in a conscientious and responsible manner." Asbestosis is a disease resulting from asbestos exposure which causes lung scarring and can lead to lung cancer. Exposure to asbestos can also lead to a more serious condition known as mesothelioma. Mesothelioma is a cancer which develops in the sac surrounding the lungs and chest cavity, abdominal cavity, or the sac surrounding the heart. Patients with malignant mesothelioma generally do not have positive outcomes, and once diagnosed typically have six months to a year to live.
[ { "type": "R", "before": "which", "after": "that had", "start_char_pos": 406, "end_char_pos": 411, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "earlier", "start_char_pos": 427, "end_char_pos": 427, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "that", "start_char_pos": 626, "end_char_pos": 626, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 833, "end_char_pos": 833, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "when docked", "after": ", when docked,", "start_char_pos": 1328, "end_char_pos": 1339, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "that", "after": "so", "start_char_pos": 1377, "end_char_pos": 1381, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "fluency" ] }, { "type": "R", "before": "was", "after": ",", "start_char_pos": 1446, "end_char_pos": 1449, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "fluency" ] }, { "type": "R", "before": "who wrote that", "after": "explained the ruling", "start_char_pos": 1485, "end_char_pos": 1499, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 2424, "end_char_pos": 2424, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 2555, "end_char_pos": 2555, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "D", "before": "He went on to note that:", "after": null, "start_char_pos": 3154, "end_char_pos": 3178, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": "typically", "start_char_pos": 3849, "end_char_pos": 3849, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 260, 429, 530, 723, 845, 927, 1006, 1275, 1425, 1816, 1923, 2018, 2125, 2503, 2624, 2723, 2766, 2947, 3153, 3280, 3409, 3522, 3608, 3751 ]
news
108578
1
The State of Florida entered an agreement on June 24, 2008 to purchase U.S. Sugar , the largest manufacturer of cane sugar in the U.S. in an effort to restore the Everglades. Terms of the agreement stated that the purchase would be completed within 75 days, but that U.S. Sugar would be able to operate for another six years. After that, the State of Florida will retain ownership of U.S. Sugar's manufacturing facilities in Clewiston, Florida, and 187,000 acres of fields that have grown sugarcane since the 1960s would be allowed to return to their natural state. U.S. Sugar and the State of Florida have been in legal disputes since the Everglades Forever Act was passed in 1994. The act was an attempt to clean up water tainted with phosphorus runoff leaving sugarcane fields and pumped into the Everglades. Phosphorus is a fertilizer that alters Everglades ecosystems, allowing invasive and exotic cattails to replace sawgrass when levels rise above 50 parts per billion . Tests in the 1980s in the Loxahatchee National Wildlife Refuge directly south of sugarcane fields indicated water had 500 ppb of phosphorus.
The U.S. state of Florida entered an agreement on June 24, 2008 to purchase the U.S. Sugar Corporation , the largest manufacturer of cane sugar in the U.S. , in an effort to restore the Everglades. Terms of the agreement stated that the purchase would be completed within 75 days, but that U.S. Sugar would be able to operate for another six years. After that, the state of Florida will retain ownership of U.S. Sugar's manufacturing facilities in Clewiston, Florida, and 187,000 acres of fields that have grown sugarcane since the 1960s will be allowed to return to their natural state. U.S. Sugar and the state of Florida have been in legal disputes since the Everglades Forever Act was passed in 1994. The act was an attempt to clean up water tainted with phosphorus , a component of fertilizer, which runs off the sugarcane fields and into the Everglades. Phosphorus alters Everglades ecosystems, allowing invasive and exotic cattails to replace sawgrass when levels rise above 50 parts per billion (ppb) . Tests in the 1980s in the Loxahatchee National Wildlife Refuge directly south of sugarcane fields indicated that the water had 500 ppb of phosphorus.
[ { "type": "R", "before": "State", "after": "U.S. state", "start_char_pos": 4, "end_char_pos": 9, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 71, "end_char_pos": 71, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "A", "before": null, "after": "Corporation", "start_char_pos": 83, "end_char_pos": 83, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 137, "end_char_pos": 137, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "State", "after": "state", "start_char_pos": 345, "end_char_pos": 350, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "would", "after": "will", "start_char_pos": 518, "end_char_pos": 523, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "State", "after": "state", "start_char_pos": 588, "end_char_pos": 593, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "runoff leaving", "after": ", a component of fertilizer, which runs off the", "start_char_pos": 751, "end_char_pos": 765, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "pumped", "after": null, "start_char_pos": 787, "end_char_pos": 793, "major_intent": "coherence", "raw_intents": [ "style", "coherence", "coherence" ] }, { "type": "R", "before": "Phosphorus is a fertilizer that", "after": "Phosphorus", "start_char_pos": 815, "end_char_pos": 846, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": "(ppb)", "start_char_pos": 979, "end_char_pos": 979, "major_intent": "meaning-changed", "raw_intents": [ "others", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "that the", "start_char_pos": 1090, "end_char_pos": 1090, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 177, 328, 568, 685, 814, 981 ]
null
108633
1
On Wednesday, a United States federal appeals court upheld convictions of fraud and obstruction of justice against media mogul Conrad Black, along with three other executives from his former press corporation Hollinger International. Black, a Canadian-born who holds the title Baron Black of Crossharbour in the United Kingdom, was found guilty in July 2007 of three counts of wire and mail fraud for giving himself and others millions of dollars in illegal bonuses taken from Hollinger holdings, and for obstruction of justice based on surveillance footage of him moving 13 boxes of documents out of his Toronto office to his home the day after he was informed that he was being investigated. He was sentenced, and from March this year began a 6 1/ 2 year prison term. Peter Atkinson and John Boultbee were convicted on the same fraud charges, and were sentenced to terms of 24 and 27 months respectively, while Mark Kipnis, who was implicated in the fraud, was given probation with 6 months of home detention. The appeals panel from the United States Court of Appeals for the Seventh Circuit, who heard the defence attorenys ' oral arguments, presented its opinion in a 16-page document in which they rejected arguments against both the fraud and obstruction charges. They noted that while the defence had presented a "no harm-no foul argument", in which they argued that they were owed the money appropriated, "such arguments usually fare badly in criminal cases". They also dismissed claims that instructions given to the jury by the judge had been incomplete, or unclear, stating that "the defendants proposed a misleading statement as an alternative", and that in such situations the judge is allowed to stay with the original instruction, and in particular is not required "that a submitted charge be technically perfect to alert the court to the need for a particular charge". 
The appeals judges also discounted arguments from the defence regarding whether the defendants were all aware of the illegality of the transactions, making reference to an "ostrich argument" ( based on the urban legend that ostriches sensing danger stick their heads in the sand ), in that choosing not to investigate the suspicious nature of the payments was equivalent to accepting their illegality. An argument from Black's defence regarding the obstruction charge, that in moving the documents he was not in fact escaping scrutiny, was similarly rejected, as " All that needed to be proved is that the document was concealed in order to make it unavailable in an official proceeding," according to the ruling as written by Judge Richard Posner.
On Wednesday, a United States federal appeals court upheld convictions of fraud and obstruction of justice against media mogul Conrad Black, along with three other executives from his former press corporation , Hollinger International. Black, born in Canada and bearer of the title Baron Black of Crossharbour in the United Kingdom, was found guilty in July 2007 of three counts of wire and mail fraud for giving himself and others millions of dollars in illegal bonuses taken from Hollinger holdings, and for obstruction of justice based on surveillance footage of his moving 13 boxes of documents out of his Toronto office to his home the day after he was informed that he was being investigated. He was sentenced, and from March this year he began a 6 1/ 2-year prison term. Peter Atkinson and John Boultbee were convicted on the same fraud charges, and were sentenced to terms of 24 and 27 months respectively, while Mark Kipnis, who was implicated in the fraud, was given probation with 6 months of home detention. The appeals panel from the United States Court of Appeals for the Seventh Circuit, who heard the defence attorneys ' oral arguments, presented its opinion in a 16-page document in which they rejected arguments against the fraud and obstruction charges. They noted that while the defence had presented a "no harm-no foul argument", in which it was argued that the accused were owed the money appropriated, "such arguments usually fare badly in criminal cases". They also dismissed claims that instructions given to the jury by the judge had been incomplete, or unclear, stating that "the defendants proposed a misleading statement as an alternative", and that in such situations the judge is allowed to stay with the original instruction, and in particular is not required "that a submitted charge be technically perfect to alert the court to the need for a particular charge". 
The appeals judges also discounted arguments from the defence regarding the defendants being unaware of the illegality of the transactions, making reference to an "ostrich argument" , based on the urban legend that ostriches sensing danger stick their heads in the sand . It was noted that choosing not to investigate the suspicious nature of the payments was equivalent to accepting their illegality. An argument from Black's defence regarding the obstruction charge, that in moving the documents he was not in fact escaping scrutiny, was similarly rejected, as " all that needed to be proved is that the document was concealed in order to make it unavailable in an official proceeding," according to the ruling as written by Judge Richard Posner.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 209, "end_char_pos": 209, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "a Canadian-born who holds", "after": "born in Canada and bearer of", "start_char_pos": 242, "end_char_pos": 267, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "him", "after": "his", "start_char_pos": 562, "end_char_pos": 565, "major_intent": "fluency", "raw_intents": [ "others", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "he", "start_char_pos": 738, "end_char_pos": 738, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": "2 year", "after": "2-year", "start_char_pos": 752, "end_char_pos": 758, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "attorenys", "after": "attorneys", "start_char_pos": 1119, "end_char_pos": 1128, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos": 1232, "end_char_pos": 1236, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "they argued that they", "after": "it was argued that the accused", "start_char_pos": 1359, "end_char_pos": 1380, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "whether the defendants were all aware", "after": "the defendants being unaware", "start_char_pos": 1959, "end_char_pos": 1996, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "(", "after": ",", "start_char_pos": 2078, "end_char_pos": 2079, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "), in", "after": ". 
It was noted", "start_char_pos": 2166, "end_char_pos": 2171, "major_intent": "coherence", "raw_intents": [ "coherence", "meaning-changed", "coherence" ] }, { "type": "R", "before": "All", "after": "all", "start_char_pos": 2452, "end_char_pos": 2455, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] } ]
[ 0, 234, 694, 771, 1013, 1271, 1469, 1886, 2288 ]
news
108680
1
The man who took a gut feel of seeing personal computers in every household three decades and each will require a stable operating system will now pursue philanthropic activities under the Bill and Melinda Gates Foundation. As Gates leaves day-to-day operations, two people will assume the his two vital duties. Craig Mundie will handle the company's long-term planning while Ray Ozzie will handle the operations. Bill Gates founded Microsoft in 1975 and from then has dominated the market for operating systems in personal computers. Gates managed close to 78,000 employees in 103 countries all over the world.
The man who took a gut feeling of seeing personal computers in every household , each requiring a stable operating system will now pursue philanthropic activities under the Bill and Melinda Gates Foundation. As Gates leaves day-to-day operations, two people will assume his two vital duties. Craig Mundie will handle the company's long-term planning while Ray Ozzie will handle the operations. Bill Gates founded Microsoft in 1975 which has since dominated the market for operating systems in personal computers. Gates managed close to 78,000 employees in 103 countries all over the world.
[ { "type": "R", "before": "feel", "after": "feeling", "start_char_pos": 23, "end_char_pos": 27, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "three decades and each will require", "after": ", each requiring", "start_char_pos": 76, "end_char_pos": 111, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 286, "end_char_pos": 289, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "and from then has", "after": "which has since", "start_char_pos": 451, "end_char_pos": 468, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] } ]
[ 0, 223, 311, 413, 534 ]
news
108948
1
A file photo of a typical 767 fitted out for passenger flights A Boeing 767 cargo jetliner (tail number: N799AX) has been seriously damaged by a fire that broke out Saturday evening shortly after 22:00 PDT (UTC-7). It took two hours to extuiguish the fire on board the Airborne Express aircraft, which was parked at a mail proscessing area of San Francisco International Airport in California, United States. Roads around the airport were closed for ten minutes while crews responded, and 100 people were evacuate from a nearby building. The fire was described as intense and producing thick amounts of black smoke. Airport Duty Manager Lilly Wang said of the damage "You can actually see through the top of the aircraft. It spread all the way through."
A file photo of a typical 767 fitted out for passenger flights A Boeing 767 cargo jetliner (tail number: N799AX) has been seriously damaged by a fire that broke out Saturday evening shortly after 22:00 PDT (UTC-7). It took two hours to extinguish the fire on board the Airborne Express aircraft, which was parked at a mail processing area of San Francisco International Airport in California, United States. Roads around the airport were closed for ten minutes while crews responded, and 100 people were evacuated from a nearby building. The fire was described as intense and producing thick amounts of black smoke. Airport Duty Manager Lilly Wang said of the damage , "You can actually see through the top of the aircraft. It spread all the way through."
[ { "type": "R", "before": "extuiguish", "after": "extinguish", "start_char_pos": 236, "end_char_pos": 246, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "proscessing", "after": "processing", "start_char_pos": 323, "end_char_pos": 334, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "evacuate", "after": "evacuated", "start_char_pos": 505, "end_char_pos": 513, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 667, "end_char_pos": 667, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 214, 408, 537, 615, 722 ]
news
109525
1
The driver of the Ford, who was killed in the incident, was a 23 year old male. In addition to the 23 year old male, an 18 year old man, a 19 year old woman and an 18 year old woman were also killed in the incident. The age of the other two people is not yet known.
The driver of the Ford, who was killed in the incident, was a 23-year-old male. In addition to the 23-year-old male, an 18-year-old man, a 19-year old woman and an 18-year-old woman were also killed in the incident. The age of the other two people is not yet known.
[ { "type": "R", "before": "23 year old", "after": "23-year-old", "start_char_pos": 62, "end_char_pos": 73, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "23 year old", "after": "23-year-old", "start_char_pos": 99, "end_char_pos": 110, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "18 year old", "after": "18-year-old", "start_char_pos": 120, "end_char_pos": 131, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "19 year", "after": "19-year", "start_char_pos": 139, "end_char_pos": 146, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "18 year old", "after": "18-year-old", "start_char_pos": 164, "end_char_pos": 175, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 79, 215 ]
news
110289
2
150px|A puffin on Farne Islands in 2006. Puffin numbers are declining on the Farne Islands, which has United Kingdom's largest colony . During the past five years, numbers on the islands have decreased by a third. Experts had expected numbers to rise this year, but believe the declineis due the birds being unable to find food . Numbers of puffins have also been declining on the Isle of May, 100 miles north. Professor Mike Harris, Emeritus Research Fellow at the Centre for Ecology Hydrology, said that the birds spend eight months at sea, and some don't return, which may be the cause of the lowering population. Puffins, like many auks feed on fish and zooplankton , and it is not known if man-made causes, such as over-fishing or climate change have lowered the amount of food available to puffins, causing them to starve whilst out at sea.
150px|A puffin on Farne Islands in 2006. Puffin numbers are declining on the Farne Islands, where the United Kingdom's largest colony resides . During the past five years, the puffin population on the islands has decreased by a third. Experts had expected numbers to rise this year, but now with news of the decline, are citing an inability to find food as the probable cause . Numbers of puffins have also been declining on the Isle of May, 100 miles north. Professor Mike Harris, Emeritus Research Fellow at the Centre for Ecology and Hydrology, said that the birds spend eight months at sea, and some do not return, which may be the cause of the lowering population. Puffins, like many auks , feed on fish and zooplankton . It is not known if man-made causes, such as over-fishing or climate change , have lowered the amount of food available to puffins, causing them to starve while at sea.
[ { "type": "R", "before": "which has", "after": "where the", "start_char_pos": 92, "end_char_pos": 101, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "resides", "start_char_pos": 134, "end_char_pos": 134, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "numbers", "after": "the puffin population", "start_char_pos": 165, "end_char_pos": 172, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "have", "after": "has", "start_char_pos": 188, "end_char_pos": 192, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "believe the declineis due the birds being unable", "after": "now with news of the decline, are citing an inability", "start_char_pos": 267, "end_char_pos": 315, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "as the probable cause", "start_char_pos": 329, "end_char_pos": 329, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "and", "start_char_pos": 487, "end_char_pos": 487, "major_intent": "coherence", "raw_intents": [ "fluency", "coherence", "coherence" ] }, { "type": "R", "before": "don't", "after": "do not", "start_char_pos": 555, "end_char_pos": 560, "major_intent": "fluency", "raw_intents": [ "style", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 644, "end_char_pos": 644, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": ", and it", "after": ". 
It", "start_char_pos": 674, "end_char_pos": 682, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 755, "end_char_pos": 755, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "R", "before": "whilst out", "after": "while", "start_char_pos": 833, "end_char_pos": 843, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] } ]
[ 0, 40, 136, 214, 331, 412, 619 ]
news
110378
1
The report created by the AASHTO titled "Bridging the Gap: Restoring and Rebuilding the Nation's Bridges" stated that nearly every one in five bridges is 50 years old or older, and that of the 600,000 U.S. bridges that nearly 152,000, or one in four, need significant repair. "Almost one in four bridges, while safe to travel, is either structurally deficient, in need of repair," according to the report . Also that the average age of American bridges is 43 years old. The report was released just prior to the August 1, 2007 anniversary of the I-35W bridge collapse, which resulted in the death of thirteen people.
The report created by the AASHTO titled "Bridging the Gap: Restoring and Rebuilding the Nation's Bridges" stated that nearly every one in five bridges is 50 years old or older, and that of the 600,000 U.S. bridges that nearly 152,000, or one in four, need significant repair. "Almost one in four bridges, while safe to travel, is either structurally deficient, in need of repair," according to the report , which also says that the average age of American bridges is 43 years . The report was released just days prior to the one-year anniversary of the August 1, 2007 I-35W bridge collapse, which resulted in the death of thirteen people.
[ { "type": "R", "before": ". Also", "after": ", which also says", "start_char_pos": 405, "end_char_pos": 411, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "old.", "after": ".", "start_char_pos": 465, "end_char_pos": 469, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "A", "before": null, "after": "days", "start_char_pos": 499, "end_char_pos": 499, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "one-year anniversary of the", "start_char_pos": 513, "end_char_pos": 513, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "anniversary of the", "after": null, "start_char_pos": 529, "end_char_pos": 547, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] } ]
[ 0, 275, 406, 469 ]
news
110422
1
Kelsey Grammer in 2006 American actor Kelsey Grammer has checked into a hospital in New York after feeling faint. It is the second time the actor has checked into a hospital after he suffered a heart attack in Hawaii two months ago. His current condition is not life threating . Grammer blamed the heart attack , which he suffered while paddle-boating with his wife in June, on the pressure from his cancelled sitcom Back To You . Grammer is famous for appearing on television sitcoms such as Frasier and Cheers. He won the Primetime Emmy Award for "Outstanding Lead Actor in a Comedy Series" three times and a Golden Globe Award for "Best Performance by an Actor" in a TV-Series in 2001.
Kelsey Grammer in 2006 American actor Kelsey Grammer has checked into a hospital in New York after feeling faint. It is the second time the actor has checked into a hospital after he suffered a heart attack in Hawaii two months ago. His current condition is not life-threating . Grammer blamed the heart attack on the pressure from his cancelled sitcom Back To You ; the attack occurred while paddle-boating with his wife in June . Grammer is best known for his roles on television sitcoms Frasier and Cheers. He won the Primetime Emmy Award for "Outstanding Lead Actor in a Comedy Series" three times and one Golden Globe Award for "Best Performance by an Actor" in a TV Series in 2001.
[ { "type": "R", "before": "life threating", "after": "life-threating", "start_char_pos": 262, "end_char_pos": 276, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": ", which he suffered while paddle-boating with his wife in June,", "after": null, "start_char_pos": 311, "end_char_pos": 374, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "A", "before": null, "after": "; the attack occurred while paddle-boating with his wife in June", "start_char_pos": 429, "end_char_pos": 429, "major_intent": "coherence", "raw_intents": [ "coherence", "meaning-changed", "coherence" ] }, { "type": "R", "before": "famous for appearing", "after": "best known for his roles", "start_char_pos": 443, "end_char_pos": 463, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "such as", "after": null, "start_char_pos": 486, "end_char_pos": 493, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "a", "after": "one", "start_char_pos": 610, "end_char_pos": 611, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "TV-Series", "after": "TV Series", "start_char_pos": 671, "end_char_pos": 680, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 113, 232, 513 ]
news
111003
1
200px|A photomicrograph of Bacillus anthracis bacteria The United States Federal Bureau of Investigation (FBI) has alleged that Bruce Ivins, a government bioweapons scientist who committed suicide July 29, 2008 , was the sole person responsible for the biological terrorism attacks in the USA in 2001 which came shortly after the September 11, 2001 attacks. The FBI's conclusions, released in a press briefing by US attorney for the District of Columbia Jeffrey Taylor, has few facts but relies primarily on circumstantial evidence, reports the BBC's Kevin Connolly in Washington. "Based upon the totality of the evidence we had gathered against him, we are confident that Dr. Ivins was the only person responsible for these attacks , " said Taylor. Primary amongst them is the fact that all attacks used spores from flask RMR1029, which the government says was solely created and controlled by Dr. Ivins. The researcher was working on a vaccine for anthrax in 2001, and had been immunized against the disease. Ivin died on July 29, 2008 shortly after being informed of the charges being brought against him.
200px|A photomicrograph of Bacillus anthracis bacteria The United States Federal Bureau of Investigation (FBI) has alleged that Bruce Ivins, a government bioweapons scientist , was the sole person responsible for the biological terrorism attacks in the USA in 2001 which came shortly after the September 11, 2001 attacks. Ivins committed suicide July 29, 2008. The FBI's conclusions, released in a press briefing by US attorney for the District of Columbia Jeffrey Taylor, has few facts but relies primarily on circumstantial evidence, reports the BBC's Kevin Connolly in Washington. But Taylor's statement asserts confidence in the FBI's findings: "Based upon the totality of the evidence we had gathered against him, we are confident that Dr. Ivins was the only person responsible for these attacks . " Primary amongst the evidence is the fact that all attacks used spores from flask RMR1029, which the government says was solely created and controlled by Dr. Ivins. The researcher was working on a vaccine for anthrax in 2001, and had been immunized against the disease. Ivins died on July 29, 2008 shortly after being informed of the charges being brought against him.
[ { "type": "D", "before": "who committed suicide July 29, 2008", "after": null, "start_char_pos": 175, "end_char_pos": 210, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "A", "before": null, "after": "Ivins committed suicide July 29, 2008.", "start_char_pos": 358, "end_char_pos": 358, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "But Taylor's statement asserts confidence in the FBI's findings:", "start_char_pos": 582, "end_char_pos": 582, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ",", "after": ".", "start_char_pos": 735, "end_char_pos": 736, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "said Taylor.", "after": null, "start_char_pos": 739, "end_char_pos": 751, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "them", "after": "the evidence", "start_char_pos": 768, "end_char_pos": 772, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "Ivin", "after": "Ivins", "start_char_pos": 1013, "end_char_pos": 1017, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 357, 581, 678, 751, 900, 1012 ]
news
111017
1
200px|Map of the Menominee River. Scott J. Johnson, 38, of Kingsford, Michigan was charged on Wednesday with killing three teenagers in a shooting on the Menominee River near Niagara, Wisconsin. Johnson was charged with first-degree intentional homicide. Authorities say he went to the popular bathing spot near the East Kingsford Railroad Bridge on July 31 and opened fire with a military-style rifle, killing three and injuring one. The Associated Press reports that he faces life in prison without parole if convicted . Wisconsin does not have the death penalty. However, as one of the bodies was found on the Michigan side of the border, he could potentially face charges there, or perhaps even in federal court. He is being represented by a public defender, Len Kachinsky. Local teens said Johnson often was seen at the site, but mostly kept to himself. On July 31, the Associated Press reports he emerged from the woods wearing camouflage clothing and opened fire without saying a word. Johnson fled and surrendered the next day after a multi-agency manhunt. Johnson had been accused of a sexual assault, and his mother speculated he may have panicked after hearing police wanted to speak with him. The Detroit Free Press reports that in a confession to police, Johnson said the shooting was set in motion when Johnson lured the woman to the bridge the day before and sexually assaulted her. Worried that he would be arrested, Johnson returned to the woods, planning to ambush law enforcement officers coming to find him. After he spent a night in the woods and no officers came, he returned home at 10 a.m. on July 31, asked his mother if police had been looking for him, then left at 3:00 p.m. He says he then got his weapons and returned to the woods. Johnson said he planned to wait for the teens to reach the Michigan side of the river so he could kill them all and use them as "bait" to lure police into his ambush. 
However, Spigarelli and Pohlson unexpectedly approached his position, and he said he felt trapped and he opened fire. Spigarelli and Pohlson were both shot in the head and died instantly. Johnson then reloaded and fired across the river, killing Bryan Mort.
200px|Map of the Menominee River. Scott J. Johnson, 38, of Kingsford, Michigan was charged on Wednesday with killing three teenagers in a shooting on the Menominee River near Niagara, Wisconsin. The charges consist of three counts of first-degree intentional homicide. Authorities say that Johnson went to the popular bathing spot near the East Kingsford Railroad Bridge on July 31 and opened fire with a military-style rifle, killing three and injuring one. The Associated Press reports that he faces life in prison without parole if convicted ; Wisconsin does not have the death penalty. However, as one of the bodies was found on the Michigan side of the border, he could potentially face charges there, or perhaps even in federal court. He is being represented by a public defender, Len Kachinsky. Local teens said Johnson was often seen at the site, but mostly kept to himself. On July 31, the Associated Press reports , he emerged from the woods wearing camouflage clothing and opened fire without saying a word. Johnson fled and surrendered the next day after a multi-agency manhunt. Johnson had been accused of a sexual assault, and his mother speculated he may have panicked after hearing police wanted to speak with him. The Detroit Free Press reports that , in a confession to police, Johnson said the shooting was set in motion when he lured a woman to the bridge the day before and sexually assaulted her. Worried that he would be arrested, Johnson returned to the woods, planning to ambush law enforcement officers coming to find him. After he spent a night in the woods and no officers came, he returned home at 10 a.m. local time on July 31, asked his mother if police had been looking for him, then left at 3:00 p.m. He reportedly said that he then got his weapons and returned to the woods. He planned to wait for the teens to reach the Michigan side of the river so he could kill them all and use them as "bait" to lure police into his ambush. 
However, Spigarelli and Pohlson unexpectedly approached his position, and he said he felt trapped and so opened fire. Spigarelli and Pohlson were both shot in the head and died instantly. Johnson then reloaded and fired across the river, killing Bryan Mort.
[ { "type": "R", "before": "Johnson was charged with", "after": "The charges consist of three counts of", "start_char_pos": 195, "end_char_pos": 219, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "he", "after": "that Johnson", "start_char_pos": 271, "end_char_pos": 273, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": ".", "after": ";", "start_char_pos": 521, "end_char_pos": 522, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "often was", "after": "was often", "start_char_pos": 803, "end_char_pos": 812, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 900, "end_char_pos": 900, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1242, "end_char_pos": 1242, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Johnson lured the", "after": "he lured a", "start_char_pos": 1319, "end_char_pos": 1336, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "local time", "start_char_pos": 1616, "end_char_pos": 1616, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "says", "after": "reportedly said that", "start_char_pos": 1708, "end_char_pos": 1712, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "Johnson said he", "after": "He", "start_char_pos": 1764, "end_char_pos": 1779, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "he", "after": "so", 
"start_char_pos": 2033, "end_char_pos": 2035, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] } ]
[ 0, 33, 194, 254, 434, 522, 565, 716, 777, 858, 993, 1065, 1205, 1399, 1529, 1704, 1763, 1930, 2048, 2118 ]
news
111956
2
|120px|Barack Obama120px|John McCain United States presidential candidates Barack Obama and John McCain are statistically tied according to the latest Day to Day Politics Poll Average as the 2008 Democratic National Convention starts today. The 7-day average shows that the difference between the two candidates is within the margin of error . Barack Obama is polling at 46.4\% and John McCain is at 44.8\% with a margin of error of 0.82\%. The Gallup Poll for the last two days has the candidates tied at 45\%. The lead for Obama has dropped since mid August when it reached its peak for the month at a 3.1\% margin. Obama has also recently added Senator Joe Biden to his nomination ticket. Polls show that only 54\% of registered voters believe that Joe Biden is an excellent or good choice for Obama's running mate. Hillary Clinton is expected to release her delegates to Barack Obama on Wednesday. And Clinton's supporters are split over Obamachoosing Biden instead of Clinton as the vice presidential nominee. Polls show that less than half of Hillary Clinton supporters are sold on Obama as the Democratic presidential nominee.
|120px|Barack Obama120px|John McCain United States presidential candidates Barack Obama and John McCain are statistically tied according to the latest Day to Day Politics Poll Average as the 2008 Democratic National Convention starts today. The 7-day average shows that the difference between the two candidates is within the margin of error of 0.82\%: Barack Obama is polling at 46.4\% and John McCain is at 44.8\% . The Gallup Poll for the last two days has the candidates tied at 45\%. The lead for Obama has dropped since mid-August when it reached its peak for the month at a 3.1\% margin. Obama has recently added Senator Joe Biden to his nomination ticket. Polls show that only 54\% of registered voters believe that Joe Biden is an excellent or good choice for Obama's running mate. Hillary Clinton is expected to release her delegates to Barack Obama on Wednesday. Clinton's supporters are split on Obama's choice of Biden over Clinton as vice presidential nominee. Polls show that fewer than half of Hillary Clinton supporters are sold on Obama as the Democratic presidential nominee.
[ { "type": "R", "before": ".", "after": "of 0.82\\%:", "start_char_pos": 342, "end_char_pos": 343, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "with a margin of error of 0.82\\%.", "after": ".", "start_char_pos": 407, "end_char_pos": 440, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "mid August", "after": "mid-August", "start_char_pos": 549, "end_char_pos": 559, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "also", "after": null, "start_char_pos": 628, "end_char_pos": 632, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "D", "before": "And", "after": null, "start_char_pos": 902, "end_char_pos": 905, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "over Obamachoosing Biden instead of Clinton as the", "after": "on Obama's choice of Biden over Clinton as", "start_char_pos": 937, "end_char_pos": 987, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": "less", "after": "fewer", "start_char_pos": 1031, "end_char_pos": 1035, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 240, 440, 511, 617, 691, 818, 901, 1014 ]
null
112955
1
The track was wet during most of the Friday practice sessions and all the qualificaion triplet, a rare event for the Formula One in Monza. Heavy rain weather conditions on the racing circuit caused a couple of surprises for the spectators. First was the major defeat of Kimi Rikknen (Ferrari) and Lewis Hamilton (McLaren-Mercedes), two major competitors in the Drivers' Championship. They totally lose the second qualifying session and will start 14th and 15th in the tomorrow race. This was accompanied by Vettel setting the best time in the session of 1:35.837. The German driver repeatedly set the best time in the final session. His podium was endangered by Felipe Massa, but the second Ferrari driver came only 6 in his final attempt and Heikki Kovalainen, Hamilton's double in McLaren, who made better setting second time just before the chequered flag closed the session.
The track was wet during most of the Friday practice sessions and all the qualification triplet, a rare event for the Formula One at Monza. Heavy rain conditions on the racing circuit caused a couple of surprises for the spectators. First was the major defeat of Kimi Rikknen (Ferrari) and Lewis Hamilton (McLaren-Mercedes), two major competitors in the Drivers' Championship. They totally lost the second qualifying session and will start 14th and 15th in the tomorrow race. This was accompanied by Vettel setting the best time in the session of 1:35.837. The German driver repeatedly set the best time in the final session. His pole position was endangered by Felipe Massa, but the second Ferrari driver came only 6th in his final attempt and Heikki Kovalainen, Hamilton's teammate in McLaren, made better progress netting second position just before the chequered flag closed the session.
[ { "type": "R", "before": "qualificaion", "after": "qualification", "start_char_pos": 74, "end_char_pos": 86, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "in", "after": "at", "start_char_pos": 129, "end_char_pos": 131, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "D", "before": "weather", "after": null, "start_char_pos": 150, "end_char_pos": 157, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "lose", "after": "lost", "start_char_pos": 397, "end_char_pos": 401, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "podium", "after": "pole position", "start_char_pos": 637, "end_char_pos": 643, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "6", "after": "6th", "start_char_pos": 716, "end_char_pos": 717, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "double", "after": "teammate", "start_char_pos": 773, "end_char_pos": 779, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "who made better setting second time", "after": "made better progress netting second position", "start_char_pos": 792, "end_char_pos": 827, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] } ]
[ 0, 138, 239, 383, 482, 563, 632 ]
news
113448
1
Large Hadron Collider tunnel and dipole magnets. Large Hadron Collider (LHC) in CERN, Geneva has suffered light damage on September 19, 2008 when one of the giant superconducting magnets that guide the protons failed during a test. A large amount of helium, which is used to cool the magnets to 1.9 Kelvin (-271C; -456F) leaked into the collider tunnel. LHC will now be shut down for at least two months for repairs. Physicists say such setbacks are an inevitable part of starting up such a large and complicated machine. Several mishaps, including the failure of a 30 ton electrical transformer, have slowed LHC's progress since the initial start-up on September 10, 2008. The laboratory in a statement said that an electrical connection between the magnets had melted because of the high current. The machine has more than 1,200 dipole magnets arranged end-to-end in the underground ring. These magnets carry and steer the proton beams which will accelerate around the machine at close to the speed of light. One of the LHC's eight sectors will now have to be warmed up to well above its operating temperature so that repairs can take place. But hopes that the first trial collisions would be carried out before the machine's official inauguration on October 21, 2008 now looks doubtful. It even looks uncertain whether this can be achieved before 2009.
Large Hadron Collider tunnel and dipole magnets. The Large Hadron Collider (LHC) in CERN, Geneva suffered light damage on September 19, 2008 when one of the giant superconducting magnets that guide the protons failed during a test. A large amount of helium, which is used to cool the magnets to 1.9 Kelvin (-271C; -456F) leaked into the collider tunnel. LHC will now be shut down for at least two months for repairs. Physicists say such setbacks are an inevitable part of starting up such a large and complicated machine. Several mishaps, including the failure of a 30 ton electrical transformer, have slowed LHC's progress since the initial start-up on September 10, 2008. The laboratory said in a statement that an electrical connection between the magnets had melted because of the high current. The machine has more than 1,200 dipole magnets arranged end-to-end in the underground ring. These magnets carry and steer the proton beams which will accelerate around the machine at close to the speed of light. One of the LHC's eight sectors will now have to be warmed up to well above its operating temperature so that repairs can take place. The recent setbacks, however, mean that hopes the first trial collisions would be carried out before the machine's official inauguration on October 21, 2008 now look doubtful. It even looks uncertain whether this can be achieved before 2009.
[ { "type": "A", "before": null, "after": "The", "start_char_pos": 49, "end_char_pos": 49, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "has", "after": null, "start_char_pos": 94, "end_char_pos": 97, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "A", "before": null, "after": "said", "start_char_pos": 690, "end_char_pos": 690, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "D", "before": "said", "after": null, "start_char_pos": 706, "end_char_pos": 710, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "But hopes that", "after": "The recent setbacks, however, mean that hopes", "start_char_pos": 1146, "end_char_pos": 1160, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "looks", "after": "look", "start_char_pos": 1276, "end_char_pos": 1281, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] } ]
[ 0, 48, 232, 314, 354, 417, 522, 674, 800, 892, 1012, 1145, 1291 ]
news
113572
1
On Monday night, Australian Football player Adam Cooney won the Australian Football League's most prestigious award, the Brownlow Medal, awarded to the AFL's best and fairest. He received 24 votes, behind Simon Black with 23 votes and Gary Ablett and Matthew Richardson with 22 votes. Cooney took the lead after 20 rounds of votes has been cast ( out of twenty-two), and is the first Western Bulldogs player to win the medal since 1992. It should also be noted that AFL CEO, Andrew Demitriew mistakenly started calling out Round 2 's votes at the start of the count, when Round 1's votes should have been called out.
On Monday night, Australian Football player Adam Cooney won the Australian Football League's most prestigious award, the Brownlow Medal, awarded to the AFL's best and fairest. He received 24 votes, ahead of Simon Black with 23 votes and Gary Ablett and Matthew Richardson with 22 votes. Cooney took the lead after 20 rounds of votes had been cast ( from twenty-two), and is the first Western Bulldogs player to win the medal since 1992. It should also be noted that AFL CEO, Andrew Demetriou mistakenly started reading out the votes for Round 2 first, instead of the votes for Round 1.
[ { "type": "R", "before": "behind", "after": "ahead of", "start_char_pos": 198, "end_char_pos": 204, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "has", "after": "had", "start_char_pos": 331, "end_char_pos": 334, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "out of", "after": "from", "start_char_pos": 347, "end_char_pos": 353, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "Demitriew mistakenly started calling out", "after": "Demetriou mistakenly started reading out the votes for", "start_char_pos": 482, "end_char_pos": 522, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "'s votes at the start of the count, when Round 1's votes should have been called out.", "after": "first, instead of the votes for Round 1.", "start_char_pos": 531, "end_char_pos": 616, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 175, 284, 436 ]
news
115079
1
Santa Ana wind conditions as seen from space. Two large wildfires burn uncontrolled north of Los Angeles, California. The blazes, known as the Marek fires, have burned over 3700 acres . More than 1200 people have been evacuated and one confirmed fatality has been reported . The Associated Press reports a second related death from a traffic accident. 30 mobile homes have been destroyed. Authorities expect to order more evacuations before the fires can be brought under control. Fires started Sunday due to Santa Ana wind conditions in the San Fernando Valley and Angeles National Forest on the northern outskirts of Los Angeles. Affected communities include Porter Ranch and the Lopez Canyon area. The confirmed fatality was an unidentified transient who had been using a cardboard shelter beneath a freeway overpass. Santa Ana winds as strong as 65 miles per hour fanned the flames, which jumped the eight lane 210 Freeway. Both the 210 Freeway and 118 Freeway were closed during Monday morning rush hour. Firefighters have contained smaller blazes that occurred elsewhere in Southern California near the Los Angeles suburb of Santa Clarita and in neighboring Ventura County. Los Angeles County fire inspector Frank Garrido described the problem as "a blowtorch we cant get in front of," according to The New York Times. He continued "Wind is king here, its dictating everything we are doing . " Scott Stephens of the Center for Fire Research & Outreach at the University of California, Berkeley calls Southern California's Santa Ana winds "some of the strongest, most severe fire winds in the world." Among the problems caused by Santa Ana winds, which blow from the nearby Mojave Desert toward the Pacific Ocean, is a tendency for hot embers to leapfrog and start new fires. Santa Ana conditions tend to occur from autumn through spring and can reach peak speeds of 70 miles per hour (113 kilometers per hour).
Santa Ana wind conditions as seen from space. Two large wildfires burn uncontrolled north of Los Angeles, California. The blazes, known as the Marek fires, have burned over 3700 acres and caused the evacuation of more than 1200 people . Furthermore, at least 30 mobile homes were destroyed. One confirmed fatality has been reported and described as an unidentified transient who had been using a cardboard shelter beneath a freeway overpass. However, the Associated Press reports a second related death from a traffic accident. Authorities expect to order more evacuations before the fires can be brought under control. Fires started Sunday due to Santa Ana wind conditions in the San Fernando Valley and Angeles National Forest on the northern outskirts of Los Angeles. Affected communities include Porter Ranch and the Lopez Canyon area. Santa Ana winds as strong as 65 miles per hour fanned the flames, which jumped the eight-lane 210 Freeway. Both the 210 Freeway and 118 Freeway were closed during Monday morning rush hour. Firefighters have contained smaller blazes that occurred elsewhere in Southern California near the Los Angeles suburb of Santa Clarita and in neighbouring Ventura County. Los Angeles County fire inspector Frank Garrido described the problem as "a blowtorch we cant get in front of," according to The New York Times. "Wind is king here, its dictating everything we are doing , " he continued. Scott Stephens of the Center for Fire Research & Outreach at the University of California, Berkeley calls Southern California's Santa Ana winds "some of the strongest, most severe fire winds in the world." Among the problems caused by Santa Ana winds, which blow from the nearby Mojave Desert toward the Pacific Ocean, is a tendency for hot embers to leapfrog and start new fires. Santa Ana conditions tend to occur from autumn through spring and can reach peak speeds of 70 miles per hour (113 kilometers per hour).
[ { "type": "R", "before": ". More", "after": "and caused the evacuation of more", "start_char_pos": 184, "end_char_pos": 190, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "meaning-changed" ] }, { "type": "R", "before": "have been evacuated and one", "after": ". Furthermore, at least 30 mobile homes were destroyed.", "start_char_pos": 208, "end_char_pos": 235, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "One", "start_char_pos": 236, "end_char_pos": 236, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": ". The", "after": "and described as an unidentified transient who had been using a cardboard shelter beneath a freeway overpass. However, the", "start_char_pos": 274, "end_char_pos": 279, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "30 mobile homes have been destroyed.", "after": null, "start_char_pos": 353, "end_char_pos": 389, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "D", "before": "The confirmed fatality was an unidentified transient who had been using a cardboard shelter beneath a freeway overpass.", "after": null, "start_char_pos": 702, "end_char_pos": 821, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "eight lane", "after": "eight-lane", "start_char_pos": 905, "end_char_pos": 915, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "neighboring", "after": "neighbouring", "start_char_pos": 1153, "end_char_pos": 1164, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "He continued", "after": null, "start_char_pos": 1326, 
"end_char_pos": 1338, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ".", "after": ",", "start_char_pos": 1397, "end_char_pos": 1398, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "he continued.", "start_char_pos": 1401, "end_char_pos": 1401, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] } ]
[ 0, 45, 117, 185, 275, 352, 389, 481, 632, 701, 821, 928, 1010, 1180, 1325, 1607, 1782 ]
news
115154
1
The change takes effect immediately, meaning that people who were due to take the test this year no longer have to do so. Education secretary Ed Balls yesterday announced his plans to radically change the UK testing system yesterday, in parliament. Both major UK opposition parties welcomed this move. Michael Gove, an MP for the Conservative Party that serves as shadow children's secretary, stated that his party have "argued for fewer national tests and more rigor and we want to work constructively to improve the assessment and qualifications regime." David Laws, children's spokesperson for the Liberal Democrats, said that "the SAT (Standardized Assessment Task) tests taken by 14-year-olds are not only a waste of time but have been highly unreliable over the last few years."
The change takes effect immediately, meaning that children who were due to take the test this year no longer have to do so. Secretary of State for Children, Schools and Families Ed Balls yesterday announced his plans to radically change the UK testing system yesterday, in parliament. Both major UK opposition parties welcomed this move. Conservative Shadow Secretary of State for Children Michael Gove stated that his party have "argued for fewer national tests and more rigor and we want to work constructively to improve the assessment and qualifications regime." David Laws, Shadow Secretary of State for Children for the Liberal Democrats, said that "the Sats tests taken by 14-year-olds are not only a waste of time but have been highly unreliable over the last few years."
[ { "type": "R", "before": "people", "after": "children", "start_char_pos": 50, "end_char_pos": 56, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Education secretary", "after": "Secretary of State for Children, Schools and Families", "start_char_pos": 122, "end_char_pos": 141, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Michael Gove, an MP for the Conservative Party that serves as shadow children's secretary,", "after": "Conservative Shadow Secretary of State for Children Michael Gove", "start_char_pos": 302, "end_char_pos": 392, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "children's spokesperson for", "after": "Shadow Secretary of State for Children for", "start_char_pos": 569, "end_char_pos": 596, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "SAT (Standardized Assessment Task)", "after": "Sats", "start_char_pos": 635, "end_char_pos": 669, "major_intent": "others", "raw_intents": [ "clarity", "others", "others" ] } ]
[ 0, 121, 248, 301, 556 ]
news
120486
1
The statement said "I'm 23 years old dudes, and like, the successes I have had in the pool, I acted in a bummer way, not how that people want from me. I am sorry. I promise my fans and the public - it will not happen again. I totally have the munchies, are there any Doritos in the house? "
The statement said "I'm 23 years old , and despite the successes I have had in the pool, I acted in a youthful and inappropriate way, not in a manner that people have come to expect from me. For this, I am sorry. I promise my fans and the public - it will not happen again. "
[ { "type": "R", "before": "dudes, and like,", "after": ", and despite", "start_char_pos": 37, "end_char_pos": 53, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "bummer", "after": "youthful and inappropriate", "start_char_pos": 105, "end_char_pos": 111, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "how that people want", "after": "in a manner that people have come to expect", "start_char_pos": 121, "end_char_pos": 141, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "For this,", "start_char_pos": 151, "end_char_pos": 151, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "D", "before": "I totally have the munchies, are there any Doritos in the house?", "after": null, "start_char_pos": 225, "end_char_pos": 289, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] } ]
[ 0, 150, 163, 224 ]
news
122030
1
The professional and business service sector lost 180,000 jobs last month, while manufacturing shed 168,000. The construction industry , which peaked at 1.1 million jobs in January 2007 , lost 104,000 jobs last month. At the same time, the financial sector lost 44,000 jobs, bringing the total to 448,000 lost jobs since a peak in December 2007.
The professional and business service sector lost 180,000 jobs last month, while manufacturing shed 168,000. The construction industry has lost 1.1 million jobs since January 2007 and 104,000 jobs last month. At the same time, the financial sector lost 44,000 jobs, bringing the total to 448,000 lost jobs since a peak in December 2007.
[ { "type": "R", "before": ", which peaked at", "after": "has lost", "start_char_pos": 135, "end_char_pos": 152, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "in", "after": "since", "start_char_pos": 170, "end_char_pos": 172, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ", lost", "after": "and", "start_char_pos": 186, "end_char_pos": 192, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] } ]
[ 0, 108, 217 ]
news
130713
1
Karl Von Hess, the American professional wrestler who wrestling persona was that of a Nazi sympathiser has died at the age of 90. The cause of death was linked to a battle with Alzheimers disease which he had battled in recent years. Born as Francis Faketty he first trained as a lifeguard and began to teach swimming. He joined the U.S Navy and served in World War II. After completing his service he began to train as a wrestler , he tried several gimmicks before coming up with the Von Hess gimmick. He took his idea off of Kurt Von Poppenheim who has a less sinister German gimmick. Fakerry legally changed his name to Karl Von Hess and was signed by the WWWF later the WWF. He was so convincing as a heel that people tried to stab and attack him. His gimmick became so controversial that promoter Vince McMahon has to calm matters by doing an interview with the Washington Post. He said that Von Hess is no Nazi. He uses that silly salute to point up the act that he is the villain". Von Hess defended his choice of gimmick saying "It was right after the war and I had tried everything. I played different characters, and then I came up with this gimmick of Von Hess and I played it right to the hilt." Von Hess began to draw big crowds until the early 1960s when his gimmick began to wear thin; he was phased out of the WWWF in the late 60s.
Karl Von Hess, the American professional wrestler whose wrestling persona was that of a Nazi sympathiser has died at the age of 90. The cause of death was linked to a battle with Alzheimers disease which he had battled in recent years. Born as Francis Faketty he first trained as a lifeguard and began to teach swimming. He joined the U.S Navy and served in World War II. After completing his service he began to train as a wrestler . He tried several gimmicks before coming up with the Von Hess gimmick. He took his idea off of Kurt Von Poppenheim who has a less sinister German gimmick. Fakerry legally changed his name to Karl Von Hess and was signed by the WWWF , later the WWF. He was so convincing as a heel that people tried to stab and attack him. His gimmick became so controversial that promoter Vince McMahon had to calm matters by doing an interview with the Washington Post. He said that Von Hess is no Nazi. He uses that silly salute to point up the act that he is the villain". Von Hess defended his choice of gimmick saying , "It was right after the war and I had tried everything. I played different characters, and then I came up with this gimmick of Von Hess and I played it right to the hilt." Von Hess began to draw big crowds until the early 1960s when his gimmick began to wear thin; he was phased out of the WWWF in the late 60s.
[ { "type": "R", "before": "who", "after": "whose", "start_char_pos": 50, "end_char_pos": 53, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", he", "after": ". He", "start_char_pos": 431, "end_char_pos": 435, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 664, "end_char_pos": 664, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "has", "after": "had", "start_char_pos": 817, "end_char_pos": 820, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1037, "end_char_pos": 1037, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 129, 233, 318, 369, 502, 586, 679, 752, 884, 918, 989, 1093, 1209, 1302 ]
news
13557
1
A grandfather and his grandson have died off the coast of North Wales, UK after their sailing dinghy was overwhelmed and flooded in rough seas. They were with a party of six who had been at sea angling from a 4.6 m boat. A RAF search and rescue helicopter was scrambled and the Beaumaris lifeboat launched to affect a rescue effort. Two people were found clinging to the boat but two more had been washed away. A rapid search recovered both of them.
A grandfather and his grandson have died off the coast of North Wales, UK , after their sailing dinghy was overwhelmed and capsized in rough seas. They were with a party of four who had been angling from a 4.6 m boat. A RAF search and rescue helicopter was scrambled and the Beaumaris lifeboat launched to effect a rescue effort. Two people were found clinging to the boat but two more had been washed away. A rapid search recovered both of them.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 74, "end_char_pos": 74, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "flooded", "after": "capsized", "start_char_pos": 122, "end_char_pos": 129, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "six", "after": "four", "start_char_pos": 171, "end_char_pos": 174, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "at sea", "after": null, "start_char_pos": 188, "end_char_pos": 194, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "affect", "after": "effect", "start_char_pos": 310, "end_char_pos": 316, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] } ]
[ 0, 144, 221, 333, 411 ]
news
146168
1
A second day of winter weather and snow has caused further disruption in the UK. At 15:13 GMT , one Wikinews contributor in West Lothian described the conditions there as 'white-out', although later conditions were reported to have improved. Forecasters have indicated further snow in Scotland later this evening , with some higher ground experiencing as much as 15 centimetres into Sunday . Heavy snow for parts of Northern England is also forecast.
A second day of winter weather and snow has caused further disruption in the UK. At 15:13 GMT (December 19th) , one Wikinews contributor in West Lothian described the conditions there as 'white-out', although later conditions were reported to have improved. Forecasters had indicated further snow in Scotland later in the evening(19th) , with some higher ground experiencing as much as 15 centimetres into Sunday (20th) . Heavy snow for parts of Northern England had also been forecast.
[ { "type": "A", "before": null, "after": "(December 19th)", "start_char_pos": 94, "end_char_pos": 94, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "have", "after": "had", "start_char_pos": 255, "end_char_pos": 259, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "this evening", "after": "in the evening(19th)", "start_char_pos": 301, "end_char_pos": 313, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "(20th)", "start_char_pos": 391, "end_char_pos": 391, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "is also", "after": "had also been", "start_char_pos": 435, "end_char_pos": 442, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 80, 242, 393 ]
news
146168
2
A second day of winter weather and snow has caused further disruption in the UK. At 15:13 GMT (December 19th) , one Wikinews contributor in West Lothian described the conditions there as 'white-out', although later conditions were reported to have improved. Forecasters had indicated further snow in Scotland later in the evening(19th) , with some higher ground experiencing as much as 15 centimetres into Sunday (20th) . Heavy snow for parts of Northern England had also been forecast. In addition to transport disruption, a number of sports have been affected. At least 20 football fixtures across the UK have been postponed, with some rugby matches also affected. Race meetings at Haydock, Newcastle were cancelled due to the conditions, a planned meet at Ascot having been abandoned earlier. A National Hunt meeting at Carlisle yesterday was also abandoned.
A second day of winter weather and snow has caused further disruption in the UK. At 15:13 GMT last Saturday , one Wikinews contributor in West Lothian described the conditions there as 'white-out', although later conditions were reported to have improved. Forecasters had indicated further snow in Scotland later into Saturday evening , with some higher ground experiencing as much as 15 centimetres into Sunday . Heavy snow for parts of Northern England had also been forecast. In addition to transport disruption, a number of sports have been affected. At least twenty football fixtures across the UK have been postponed, with some rugby matches also affected. Race meetings at Haydock, Newcastle were cancelled due to the conditions, a planned meet at Ascot having been abandoned earlier. A National Hunt meeting at Carlisle yesterday was also abandoned.
[ { "type": "R", "before": "(December 19th)", "after": "last Saturday", "start_char_pos": 94, "end_char_pos": 109, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in the evening(19th)", "after": "into Saturday evening", "start_char_pos": 315, "end_char_pos": 335, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "(20th)", "after": null, "start_char_pos": 413, "end_char_pos": 419, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "20", "after": "twenty", "start_char_pos": 572, "end_char_pos": 574, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] } ]
[ 0, 80, 257, 421, 486, 562, 666, 795 ]
news
147467
1
A United Airlines Airbus A319, similar to the one involved in the incident. United Airlines Flight 634 made a very rough emergency landing yesterday morning when only two of the three landing wheels on the Airbus A319 deployed as the plane was making its final approach into Newark, New Jersey's Newark Liberty International Airport. After multiple attempts to fix the problem failed, the pilot decided that an emergency landing was the only remaining choice. "Brace! Brace! Brace!", the pilot announced over the plane's public address system, calling for passengers to enter the brace position as the plane made a rough touchdown on the runway at Newark. The plane touched down on the nose wheel and left rear wheel before lurching to the right causing sparks to fly as the right engine skidded along the runway. Eventually, the plane came to stop at 9:27 A.M. EDT. Afterward, passengers and crew evacuated the plane by sliding down the evacuation slides and moving quickly away from the plane due to fears it might explode . They were then taken by bus to the airline's lounge were they told their stories to investigators and waited for their luggage. Three passengers reported minor injuries but refused treatment. According to passengers, the crew remained calm during the ordeal. Jim Falk said, "They did a great job. There was no yelling, screaming, panicking or anything." Falk said he wanted to buy a bottle of Champagne to the currently unnamed pilot. Falk added, "The pilot did a beautiful job. He didnt put it in the water like the other pilot did, but he should be commended."
A United Airlines Airbus A319, similar to the one involved in the incident. United Airlines Flight 634 made an emergency landing yesterday morning when only two of the three landing wheels on the Airbus A319 deployed as the plane was making its final approach into Newark, New Jersey's Newark Liberty International Airport. After multiple attempts to fix the problem failed, the pilot decided that an emergency landing was the only remaining choice. "Brace! Brace! Brace!", the pilot announced over the plane's public address system, calling for passengers to enter the brace position as the plane made a touchdown on the runway at Newark. The plane touched down on the nose wheel and left rear wheel before lurching to the right causing sparks to fly as the right engine skidded along the runway. Eventually, the plane came to stop at 9:27 A.M. EDT. Afterward, passengers and crew evacuated the plane by sliding down the evacuation slides and moving quickly away from the plane . They were then taken by bus to the airline's lounge where they told their stories to investigators and waited for their luggage. Three passengers reported minor injuries but refused treatment. According to passengers, the crew remained calm during the ordeal. Jim Falk said, "They did a great job. There was no yelling, screaming, panicking or anything." Falk said he wanted to buy a bottle of champagne for the currently unnamed pilot. Falk added, "The pilot did a beautiful job. He didnt put it in the water like the other pilot did, but he should be commended."
[ { "type": "R", "before": "a very rough", "after": "an", "start_char_pos": 108, "end_char_pos": 120, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "rough", "after": null, "start_char_pos": 615, "end_char_pos": 620, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "due to fears it might explode", "after": null, "start_char_pos": 995, "end_char_pos": 1024, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "were", "after": "where", "start_char_pos": 1079, "end_char_pos": 1083, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Champagne to", "after": "champagne for", "start_char_pos": 1420, "end_char_pos": 1432, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 75, 333, 459, 474, 655, 813, 1026, 1154, 1218, 1285, 1323, 1380, 1461, 1505 ]
news
14932
1
TD Banknorth, which is 51\% owned by the TD Financial Group , has come to a deal with the Hudson United Bancorp . TD Financial group will be acquiring the community bank for $1.9 billion U.S. The new addition to the financial group will give them 204 new branches across New Jersey, New York, Connecticut and Pennsylvania. In total this will give TD 590 branches, 751 banking machines and more than $ 26-billion U.S. in deposits across eight northeastern states. Hudson specialises in commercial real estate, consumer and credit card loans to individuals and businesses. The bank also had $8.85 billion US in assets at the end of it's first quarter, on March 31. The company's shares had been dropping in the course of the past year because of allegations of money-laundering violations and after an earnings warning, making it a good steal for TD. The acquisition will greatly increased TD's influence in America .
Canadian TD Financial Group has come to a deal with the regional U.S. bank, Hudson United Bancorp , to buy Hudson for US$1.9 billion. The new addition will be folding into itsMaine-based TD Banknorth, which is 51\% owned by TD Financial. The acquisition will bring in 204 new branches and increase TD's footprint to New Jersey, New York, Connecticut and Pennsylvania. In total this will give TD 590 branches, 751 banking machines and more than US$ 26-billion in deposits across eight northeastern states. Hudson specialises in commercial real estate, consumer and credit card loans to individuals and businesses. The bank also had $8.85 billion US in assets at the end of it's first quarter, on March 31. The company's shares had been dropping in the course of the past year because of allegations of money-laundering violations and after an earnings warning, making it a good steal for TD. The acquisition will greatly increased TD's influence in America . This continues the recent trend for Canadian banks expanding into the U.S. where regulation on bank mergers is less strict than in their home country .
[ { "type": "R", "before": "TD Banknorth, which is 51\\% owned by the TD Financial Group ,", "after": "Canadian TD Financial Group", "start_char_pos": 0, "end_char_pos": 61, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "regional U.S. bank,", "start_char_pos": 90, "end_char_pos": 90, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": ". TD Financial group will be acquiring the community bank for $1.9 billion U.S. The new addition to the financial group will give them 204 new branches across New Jersey, New York, Connecticut and Pennsylvania. In total this will give TD 590 branches, 751 banking machines and more than $", "after": ", to buy Hudson for US$1.9 billion. The new addition will be folding into itsMaine-based TD Banknorth, which is 51\\% owned by TD Financial. The acquisition will bring in 204 new branches and increase TD's footprint to New Jersey, New York, Connecticut and Pennsylvania. In total this will give TD 590 branches, 751 banking machines and more than US$", "start_char_pos": 113, "end_char_pos": 401, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "D", "before": "U.S.", "after": null, "start_char_pos": 413, "end_char_pos": 417, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": ".", "start_char_pos": 915, "end_char_pos": 915, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "This continues the recent trend for Canadian banks expanding into the U.S. where regulation on bank mergers is less strict than in their home country", "start_char_pos": 916, "end_char_pos": 916, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 323, 463, 571, 663, 849 ]
null
15757
1
Today the Hewlett-Packard Co. has announced that it will be cutting 10\% or 14,500 of it's full-time staff wil be cut over the course of the next six quarters. HP CEO Mark Hurd, who 's had office for just about three months now , has decided that he wishes to simplify HP's operations by combining sales and marketing directly into business units. The job cuts are expected to save HP $1.9 billion US annually. These cuts are all part of HP's new restructuring plan, the savings from this restructuring will primarily be to offset market forces or in other words, to strengthen HP's competitiveness in the market. The job cuts come after prolonged analyst speculation that HP would be announcing between 10,000 and 25,000 losses. Few losses are expected in the areas of Sales nor in Research & Development. The media in Ireland, where HP has over 4,000 employees across seven business units, has reacted cautiously to the announcement urging that more details should be waited for before making any comments. The news however has made national headlines and workers are understandably anxious to find out more details.
Today the Hewlett-Packard Co. has announced that 10\% or 14,500 of its full-time staff will be cut over the course of the next six quarters. HP CEO Mark Hurd, who took office about three months ago , has decided to simplify HP's operations by combining sales and marketing directly into business units. The job cuts are expected to save HP US $1.9 billion annually. These cuts are all part of HP's new restructuring plan, the savings from which will primarily be used to offset market forces , in other words, to strengthen HP's competitiveness in the market. The job cuts come after prolonged analyst speculation that HP would be announcing between 10,000 and 25,000 losses. Few losses are expected in the areas of Sales or Research & Development. The media in Ireland, where HP has over 4,000 employees across seven business units, has reacted cautiously to the announcement urging that more details should be awaited for before making any comments. The news however has made national headlines and workers are understandably anxious to find out more details.
[ { "type": "D", "before": "it will be cutting", "after": null, "start_char_pos": 49, "end_char_pos": 67, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "it's", "after": "its", "start_char_pos": 86, "end_char_pos": 90, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "wil", "after": "will", "start_char_pos": 107, "end_char_pos": 110, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "'s had office for just", "after": "took office", "start_char_pos": 182, "end_char_pos": 204, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "now", "after": "ago", "start_char_pos": 224, "end_char_pos": 227, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "D", "before": "that he wishes", "after": null, "start_char_pos": 242, "end_char_pos": 256, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "A", "before": null, "after": "US", "start_char_pos": 385, "end_char_pos": 385, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "US", "after": null, "start_char_pos": 399, "end_char_pos": 401, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "R", "before": "this restructuring", "after": "which", "start_char_pos": 485, "end_char_pos": 503, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "used", "start_char_pos": 522, "end_char_pos": 522, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "or", "after": ",", "start_char_pos": 547, "end_char_pos": 549, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { 
"type": "R", "before": "nor in", "after": "or", "start_char_pos": 778, "end_char_pos": 784, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "waited", "after": "awaited", "start_char_pos": 972, "end_char_pos": 978, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "meaning-changed" ] } ]
[ 0, 159, 347, 411, 615, 731, 808, 1010 ]
news
16289
1
Beginning 2006, all horses in the EU must have passports, according to new rules from the European Union. Two thirds of all horses in Sweden are estimated to not have passports, says Britta Lundstrm at the Swedish Ministry of Agriculture (Jordbruksverket) to the news agency TT. If the owner of a horse doesn't get a passport for their animal before the end of the year, they will be breaking the law. The Ministry of Agriculture has had problems reaching horse owners who don't reading horse magazines. Because of this, the ministry is planning a campaign this autumn. The passport contains information on the owner, the race and number. A photograph isn't mandatory however, but a description of the horse's appearance is required.
Beginning in 2006, all horses in the EU will be required to have passports, according to new rules from the European Union. Two thirds of all horses in Sweden are estimated to not have passports, Britta Lundstrm at the Swedish Ministry of Agriculture (Jordbruksverket) told news agency TT. If the owner of a horse doesn't get a passport for their animal before the end of the year, they will be breaking the law. The Ministry of Agriculture has had problems reaching horse owners who don't read horse magazines. Because of this, the ministry is planning a campaign this autumn. The passport contains information on the owner, the race and number. A photograph isn't mandatory but a description of the horse's appearance is required.
[ { "type": "A", "before": null, "after": "in", "start_char_pos": 10, "end_char_pos": 10, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "must", "after": "will be required to", "start_char_pos": 38, "end_char_pos": 42, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "D", "before": "says", "after": null, "start_char_pos": 179, "end_char_pos": 183, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "to the", "after": "told", "start_char_pos": 257, "end_char_pos": 263, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "reading", "after": "read", "start_char_pos": 480, "end_char_pos": 487, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "however,", "after": null, "start_char_pos": 669, "end_char_pos": 677, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] } ]
[ 0, 106, 279, 402, 504, 570, 639 ]
news
16985
1
Germany's unemployment rate fell by one-tenth per cent in July, Federal Labour Office said. The number of jobless Germans dropped by 42,000 to 4.8 million, but without seasonal adjustments, including summer breaks in job training and holidays of the jobseekers , it grew by 68,000 and is close to a post-war record. Unemployment can even rise 370,000 by end of the year, according to German research institute IAB. Labour-market problems remain the main issue of the forthcoming Bundestag elections campaign. Chancellor Schrder's government is blamed for fail of his social policy reform known as Hartz IV and growing joblessness . In recent polls ruling Social Democrats (SPD) gained 27\% as the opposition conservatives have 44\% of votes .
Germany's unemployment rate fell by one-tenth per cent in July, the Federal Labour Office said. The number of jobless Germans dropped by 42,000 to 4.8 million, but without seasonal adjustments, including summer breaks in job training and jobseeker holidays , it grew by 68,000 and is close to a post-war record. Unemployment is projected to rise by 370,000 by the end of the year, according to German research institute IAB. Labour-market problems remain the main issue of the Bundestag election campaign. Chancellor Schrder's government is blamed for the failure of his social policy reform known as Hartz IV and growing unemployment . In recent polls the ruling Social Democrats (SPD) got 27\% of the vote whilst the opposition Christian Democrats got 44\% .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 64, "end_char_pos": 64, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "holidays of the jobseekers", "after": "jobseeker holidays", "start_char_pos": 235, "end_char_pos": 261, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "can even rise", "after": "is projected to rise by", "start_char_pos": 330, "end_char_pos": 343, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 355, "end_char_pos": 355, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "forthcoming Bundestag elections", "after": "Bundestag election", "start_char_pos": 469, "end_char_pos": 500, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "fail", "after": "the failure", "start_char_pos": 557, "end_char_pos": 561, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": "joblessness", "after": "unemployment", "start_char_pos": 620, "end_char_pos": 631, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 650, "end_char_pos": 650, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "gained", "after": "got", "start_char_pos": 681, "end_char_pos": 687, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "as the opposition conservatives have", "after": "of the vote whilst the opposition Christian Democrats got", "start_char_pos": 693, "end_char_pos": 729, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": 
"D", "before": "of votes", "after": null, "start_char_pos": 735, "end_char_pos": 743, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 92, 316, 416, 510, 633 ]
news
1736629
2
Seal The Federal Aviation Administration (FAA) announced yesterday that the North Metroplex project has been successfully put into place, promising more efficiency in the U.S. . This project is a "new air-traffic system that will deliver more on-time flights for passengers while reducing pollution by thousands of metric tons each year," said Aviation Media reporter Mary-Anne Baldwin. NextGen is a system the FAA claims would allow aircraft to fly shorter routes, save up to 4.1 million gallons of fuel each year and cut carbon emissions by about 41, 000 metric tons. It uses -based technology as opposed to older ground-radar-based technology to allow to pinpoint aircraft with greater precision and give pilots more accurate information . To date, the North Texas Metroplex NextGen Project took three years, cost about $6 million and is amongst the largest in the country. A similar project has been underway in since May and more projects are proposed in major cities such as ; , ; and , . North Texas is one of the 13 large multi-airport urban areas where air operations can be inefficient because of air traffic congestion and environmental concerns. These initiatives are very expensive to create and the deadline to have the entire NextGen system finished originally to be by the year 2020 is approaching, but FAA Administrator Michael Huerta expressed hope this Metroplex system would go into place at airports around the country. The result is a solution that not only benefits the National Airspace System, it benefits the aviation industry, the environment and the traveling public," he said .
Seal The Federal Aviation Administration (FAA) announced yesterday that the North Metroplex project has been successfully put into place, promising more efficiency in the U.S. . Accordring to Secretary , this system will save fuel and reduce the emissions from , thereby benefiting the . The FAA said this system could reduce distances flown by one million annually and could save tens of thousands of of annually . To date, the North Texas Metroplex NextGen Project is amongst the largest in the country. A similar project has been underway in since May and more projects are proposed in major cities such as ; , ; and , . A is a large multi-airport urban area where air operations can be inefficient because of air traffic congestion and environmental concerns. These initiatives are very expensive to put in place and the deadline to have the entire NextGen system put into place originally to be by the year 2020 is approaching, but FAA Administrator Michael Huerta expressed hope this Metroplex system would go into place at airports around the country. NextGen is a system the FAA claims would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. It uses -based technology as opposed to older ground-radar-based technology to allow to pinpoint aircraft with greater precision and give pilots more accurate information .
[ { "type": "D", "before": "This project is a \"new air-traffic system that will deliver more on-time flights for passengers while reducing pollution by thousands of metric tons each year,\" said Aviation Media reporter Mary-Anne Baldwin.", "after": null, "start_char_pos": 178, "end_char_pos": 386, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "NextGen is a system the FAA claims would allow aircraft to fly shorter routes, save up to 4.1 million gallons of fuel each year and cut carbon emissions by about 41, 000 metric tons. It uses -based technology as opposed to older ground-radar-based technology to allow to pinpoint aircraft with greater precision and give pilots more accurate information", "after": "Accordring to Secretary , this system will save fuel and reduce the emissions from , thereby benefiting the . The FAA said this system could reduce distances flown by one million annually and could save tens of thousands of of annually", "start_char_pos": 387, "end_char_pos": 740, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "D", "before": "took three years, cost about $6 million and", "after": null, "start_char_pos": 794, "end_char_pos": 837, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "North Texas is one of the 13", "after": "A is a", "start_char_pos": 995, "end_char_pos": 1023, "major_intent": "style", "raw_intents": [ "style", "style", "clarity" ] }, { "type": "R", "before": "areas", "after": "area", "start_char_pos": 1050, "end_char_pos": 1055, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "create", "after": "put in place", "start_char_pos": 1198, "end_char_pos": 1204, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "finished", "after": "put 
into place", "start_char_pos": 1256, "end_char_pos": 1264, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "The result is a solution that not only benefits the National Airspace System, it benefits the aviation industry, the environment and the traveling public,\" he said", "after": "NextGen is a system the FAA claims would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. It uses -based technology as opposed to older ground-radar-based technology to allow to pinpoint aircraft with greater precision and give pilots more accurate information", "start_char_pos": 1441, "end_char_pos": 1604, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] } ]
[ 0, 386, 569, 742, 876, 982, 1157, 1440 ]
null
17590
1
Imposing daily gatherings of sometimes nearly 200 people near the nation's capital in Herndon, Virginia , consists of people who do much of the hard and dangerous work not wanted by most U.S. citizens. It has drawn the ire and compassion of the Herndon community. A public hearing Monday night to discuss a new plan drew an overflowing crowd to the Town Council chambers. The Commission said so many people signed up to speak at the hearing that the panel will need to convene again Tuesday night.
Each day, nearly 200 people gather near the nation's capital in Herndon, Virginia ; these laborers do much of the hard and dangerous work not wanted by most U.S. citizens. This daily gathering has drawn both the ire and compassion of the Herndon community. A public hearing Monday night to discuss a new plan drew an overflowing crowd to the Town Council chambers. The Commission said so many people signed up to speak at the hearing that the panel will need to convene again Tuesday night.
[ { "type": "R", "before": "Imposing daily gatherings of sometimes", "after": "Each day,", "start_char_pos": 0, "end_char_pos": 38, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "gather", "start_char_pos": 57, "end_char_pos": 57, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", consists of people who", "after": "; these laborers", "start_char_pos": 105, "end_char_pos": 129, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "It has drawn", "after": "This daily gathering has drawn both", "start_char_pos": 203, "end_char_pos": 215, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 202, 264, 372 ]
news
1928792
1
Flag of ISIL. A video purporting to show the execution of 21 by supporters of (ISIL) has been released yesterday . The video shows them being beheaded in a location apparently near in . The captives, all shown being executed in orange in the video, were picked up in , a coastal town in Libya, during December and January. The video asserts the Christians were targeted by ISIL because of their religion. The Coptic Orthodox Church stated they were "confident" justice would be done on those who executed their followers . 's President stated: "Egypt and the whole world are in a fierce battle with extremist groups carrying extremist ideology and sharing the same goals".
Flag of ISIL. A video was released yesterday purporting to show the execution of 21 by supporters of (ISIL) . The video shows the prisoners being beheaded in a location apparently near in . The captives, all wearing orange in the video, were picked up in , a coastal town in Libya, during December and January. The video indicates that the Christians were targeted by ISIL because of their religion. The Coptic Orthodox Church stated they were "confident" justice would be carried out . 's President stated: "Egypt and the whole world are in a fierce battle with extremist groups carrying extremist ideology and sharing the same goals".
[ { "type": "A", "before": null, "after": "was released yesterday", "start_char_pos": 22, "end_char_pos": 22, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "has been released yesterday", "after": null, "start_char_pos": 86, "end_char_pos": 113, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "them", "after": "the prisoners", "start_char_pos": 132, "end_char_pos": 136, "major_intent": "style", "raw_intents": [ "style", "clarity", "style" ] }, { "type": "R", "before": "shown being executed in", "after": "wearing", "start_char_pos": 205, "end_char_pos": 228, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "asserts", "after": "indicates that", "start_char_pos": 334, "end_char_pos": 341, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "done on those who executed their followers", "after": "carried out", "start_char_pos": 479, "end_char_pos": 521, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 13, 115, 186, 323, 405 ]
news
20897
1
New Orleans police shot eight armed gunmen from the Danziger Bridge, after contractors crossing the bridge came under fire. The police claim they have shot at eight people carrying guns on the bridge. This comes as Mayor Robin Nagin begins to abandon the city and turn it over to state and federal control. The fourteen contractors were en route to launch barges into Lake Pontchartrain to help fix the break in the 17th Street Canal when, according to police, the gunmen opened fire on the group. The Danziger Bridge spans a canal which connects Lake Pontchartrain and Mississippi River.
New Orleans police shot eight armed gunmen on the Danziger Bridge, according to a police statement, after contractors crossing the bridge came under fire. The police claim they shot at eight people carrying guns on the bridge. This comes as Mayor Robin Nagin began to turn the city over to state and federal control. The fourteen contractors were en route to launch barges into Lake Pontchartrain in order to fix the break in the 17th Street Canal when, according to police, the gunmen opened fire on the group. The Danziger Bridge spans a canal which connects Lake Pontchartrain and the Mississippi River.
[ { "type": "R", "before": "from", "after": "on", "start_char_pos": 43, "end_char_pos": 47, "major_intent": "fluency", "raw_intents": [ "fluency", "meaning-changed", "fluency" ] }, { "type": "A", "before": null, "after": "according to a police statement,", "start_char_pos": 69, "end_char_pos": 69, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "have", "after": null, "start_char_pos": 147, "end_char_pos": 151, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "begins to abandon the city and turn it", "after": "began to turn the city", "start_char_pos": 234, "end_char_pos": 272, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "to help", "after": "in order to", "start_char_pos": 388, "end_char_pos": 395, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 571, "end_char_pos": 571, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] } ]
[ 0, 124, 201, 307, 498 ]
news
2180975
1
Bayley first raped a woman and committed several attempted rapes when he was just 19. He admitted to the rape of a 18 year old woman in 2000, and then went on a crime spree raping six prostitutes in the suburb of that same year . He was sentenced for these crimes to at least eight years in jail and was on parole in 2012 when he raped a backpacker and in 2013 raped and murdered Jill Meagher.
Bayley committed rape and attempted rape at age 19. Later, in 2000, he raped six prostitutes in the suburb of . He was sentenced for these crimes to at least eight years in jail and was on parole in 2012 when he raped a backpacker and in 2013 raped and murdered Jill Meagher.
[ { "type": "R", "before": "first raped a woman and committed several attempted rapes when he was just", "after": "committed rape and attempted rape at age", "start_char_pos": 7, "end_char_pos": 81, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "He admitted to the rape of a 18 year old woman", "after": "Later,", "start_char_pos": 86, "end_char_pos": 132, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "and then went on a crime spree raping", "after": "he raped", "start_char_pos": 142, "end_char_pos": 179, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "that same year", "after": null, "start_char_pos": 213, "end_char_pos": 227, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] } ]
[ 0, 85, 229 ]
news
22636
1
Rain and the storm surge from Hurricane Rita has overwhelmed one of the fragile levee in New Orleans. The Industrial Canal levee gave way, reflooding parts of the Ninth Ward. There are three significant breaches. The U.S. Corps of Engineers estimated that 6 inches of rainfall could breach the previously damaged levees. The Ninth Ward, which saw flooding as high as 20 feet during Hurricane Katrina, is currently in waist high water as the nearby levee was overtopped. Water is spilling over the levee in a section 100 feet wide. The Gentilly neighborhood has water accumulations of to 6 to 8 inches deep as the patched London Avenue Canal has sprung leaks near its base.
Rain and the storm surge from Hurricane Rita have overwhelmed one of the fragile levees in New Orleans. The Industrial Canal levee gave way, reflooding parts of the Ninth Ward. There are three significant breaches. The U.S. Corps of Engineers estimated that 6 inches of rainfall could breach the previously damaged levees. The Ninth Ward, which saw flooding as high as 20 feet during Hurricane Katrina, is currently in waist-high water as the nearby levee was overtopped. Water is spilling over the levee in a section 100 feet wide. The Gentilly neighborhood has water accumulations of 6 to 8 inches deep as the patched London Avenue Canal has sprung leaks near its base.
[ { "type": "R", "before": "has", "after": "have", "start_char_pos": 45, "end_char_pos": 48, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "levee", "after": "levees", "start_char_pos": 80, "end_char_pos": 85, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "waist high", "after": "waist-high", "start_char_pos": 417, "end_char_pos": 427, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "to", "after": null, "start_char_pos": 584, "end_char_pos": 586, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 101, 174, 212, 320, 469, 530 ]
news
23643
1
An independent investigation ( headed by former Victorian Police Commissioner , Neil Comrie) has revealed that Australian immigration department officials wrongfully deported the Australian citizen Vivian Solon to the Philipines. Ms Solon was deported in March 2001 despite being severly injured after an accident (just before she was found), while her child waited to be picked up from school . The mistake was not officially recognised until April 2005. This incident is on top of the illegal detention of Cornelia Rau, an Australian citizen who was accidentally imprisoned in a detention centre when the immigration department suspected her of being an illegal immigrant. The report indicates that 20 other Australian citizens have also been falsely imprisoned by the immigration department.
An independent investigation , headed by former Victorian Police Commissioner Neil Comrie, has revealed that Australian immigration department officials wrongfully deported the Australian citizen Vivian Solon to the Philipines. Ms Solon was deported in March 2001 , after falling under suspicion as an illegal immigrant while she was in hospital after an accident. She was still in poor health when she was deported, and her five year old son was left behind in foster care . The mistake was not officially recognised until April 2005. This incident is one of a number of immigration controversies that have occurred in Australia. Cornelia Rau, an Australian citizen , was accidentally imprisoned in a detention centre when the immigration department suspected her of being an illegal immigrant. The report indicates that 20 other Australian citizens have also been unjustly imprisoned by the immigration department.
[ { "type": "R", "before": "(", "after": ",", "start_char_pos": 29, "end_char_pos": 30, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", Neil Comrie)", "after": "Neil Comrie,", "start_char_pos": 78, "end_char_pos": 92, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "despite being severly injured after an accident (just before she was found), while her child waited to be picked up from school", "after": ", after falling under suspicion as an illegal immigrant while she was in hospital after an accident. She was still in poor health when she was deported, and her five year old son was left behind in foster care", "start_char_pos": 266, "end_char_pos": 393, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "on top of the illegal detention of", "after": "one of a number of immigration controversies that have occurred in Australia.", "start_char_pos": 473, "end_char_pos": 507, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "who", "after": ",", "start_char_pos": 544, "end_char_pos": 547, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "falsely", "after": "unjustly", "start_char_pos": 745, "end_char_pos": 752, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] } ]
[ 0, 229, 395, 455, 674 ]
news
24433
1
On Thursday at about 23:30 PST, Level 3 Communications, one of the largest Internet service providers (ISPs), dropped off the net. The Colorado-based Level 3 is an internet backbone provider that connects smaller ISPs together in order to pass data. Other possible reasons are more nefarious. Apparently, Level 3 recently demanded that its peers (other ISPs that connect to them on a mutually beneficial level) pay Level 3 $ 30.000 USD. Some speculate that their quarterly report released yesterday and subsequent stock price dropping caused them to make a display of power in order to convince other ISPs to pay up. If this was the case, it failed horribly as many other ISPs are now disconnecting from their network, and, due to their demands of payment, are considering not reconnecting at all. So far there is no reason to believe that this is anything more than an urban legend, as the outage impacted all other peers and customers as well. Those who have a hand in larger projects noticed the outage immediately. The Freenode IRC network set up a channel for ongoing news and to figure out how to work around the problem. As reports came in from around the world, it became more obvious that this would not be a simple fix. Level 3 had lost connections to AT&T, Cogent, Internap, Qwest, Savvis, SBC, Sprint, UUNet, Verio, WilTel, XO, and more. Finally, after about two and a half hours, Level 3 started showing signs of life , but it could hardly be considered up and running . Pieces of the network seemed to be going up and down at random and Level 3 tech support still says they will need more time to fix the problem. At about 3:30 PST, Level 3's services returned to the Internet.
On Thursday at about 23:30 PST, Level 3 Communications, one of the largest Internet service providers (ISPs), disconnected from most of the internet. The Broomfield Colorado, USA-based Level 3 is a tier 1 provider that connects smaller ISPs together in order to pass data. Other possible reasons are more nefarious. Apparently, Level 3 recently demanded that its peers (other ISPs that connect to them on a mutually beneficial level) pay Level 3 $ 30,000 USD. Some speculate that their quarterly report released yesterday and subsequent stock price dropping caused them to make a display of power in order to convince other ISPs to pay up. If this was the case, it failed horribly as many other ISPs are now disconnecting from their network, and, due to their demands of payment, are considering not reconnecting at all. So far there is no reason to believe that this is anything more than an urban legend, as the outage impacted all other peers and customers as well. Larger network-based projects noticed the outage immediately. The Freenode IRC network set up a channel for ongoing news and to figure out how to work around the problem. As reports came in from around the world, it became more obvious that this would not be a simple fix. Level 3 had lost connections to AT&T, Cogent, Internap, Qwest, Savvis, SBC, Sprint, UUNet, Verio, WilTel, XO, and more. Finally, after about two and a half hours, Level 3 started routing packets correctly , but it could hardly be considered fully functional . Pieces of the network seemed to be going up and down at random and Level 3 tech support said they would need more time to fix the problem. At about 3:30 PST, Level 3's services returned to normal, and they reconnected to the Internet.
[ { "type": "R", "before": "dropped off the net. The Colorado-based", "after": "disconnected from most of the internet. The Broomfield Colorado, USA-based", "start_char_pos": 110, "end_char_pos": 149, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "an internet backbone", "after": "a tier 1", "start_char_pos": 161, "end_char_pos": 181, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "30.000", "after": "30,000", "start_char_pos": 425, "end_char_pos": 431, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Those who have a hand in larger", "after": "Larger network-based", "start_char_pos": 946, "end_char_pos": 977, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "showing signs of life", "after": "routing packets correctly", "start_char_pos": 1409, "end_char_pos": 1430, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "up and running", "after": "fully functional", "start_char_pos": 1467, "end_char_pos": 1481, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "still says they will", "after": "said they would", "start_char_pos": 1572, "end_char_pos": 1592, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "to normal, and they reconnected", "start_char_pos": 1675, "end_char_pos": 1675, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 130, 249, 292, 616, 797, 945, 1018, 1127, 1229, 1349, 1483, 1627 ]
news
25354
1
Public pressure had been growing on Mr. Blunkett to resign after revelations emerged that he had broken the British Ministerial Code of Conduct in relation to his large shareholdings and short-lived directorship in a company called DNA Bioscience. This is Blunkett's second time to be forced into standing down from office, having previously stepped down as home secretary last year over claims his office had fast-tracked a visa application. A member of the opposition Liberal Democrat party, Greg Mulholland said "I think he's done the right thing [in quitting], having done several quite blatantly wrong things". Yesterday Blunkett declared that he would not resign in an interview with the Sheffield Star , insisting that he had "done nothing wrong."
Public pressure had been growing on Mr. Blunkett to resign after revelations emerged that he had broken the British Ministerial Code of Conduct in relation to his large shareholdings and short-lived directorship in a company called DNA Bioscience. It is the second time Blunkett's has been forced to stand down from office, having previously stepped down as home secretary last year over claims that his office fast-tracked a visa application. A member of the opposition Liberal Democrat party, Greg Mulholland said "I think he's done the right thing [in quitting], having done several quite blatantly wrong things". In an interview with the Sheffield Star yesterday, Blunkett declared that he would not resign , insisting that he had "done nothing wrong."
[ { "type": "R", "before": "This is", "after": "It is the second time", "start_char_pos": 248, "end_char_pos": 255, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "second time to be forced into standing", "after": "has been forced to stand", "start_char_pos": 267, "end_char_pos": 305, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "his office had", "after": "that his office", "start_char_pos": 395, "end_char_pos": 409, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Yesterday", "after": "In an interview with the Sheffield Star yesterday,", "start_char_pos": 616, "end_char_pos": 625, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "coherence" ] }, { "type": "D", "before": "in an interview with the Sheffield Star", "after": null, "start_char_pos": 669, "end_char_pos": 708, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] } ]
[ 0, 247, 442, 615 ]
news
27243
1
The US commander of the Multinational Security Transition Command in Iraq said that he had no knowledge of the National Strategy for Victory in Iraq document released by the US President. This, along with speculation that the document was chiefly authored by a public opinion analyst recruited by the White House have led to some critics claiming that the drafted 'strategy' is targetting US public opinion, not the Iraqi insurgency. A political scientist at Duke University, Dr. Feaver analyzed public opinion polls about the Iraq war and attitudes towards war casualties. Dr. Feaver found that US public opinion will support military engagement abroad, despite growing casualities , provided that the public believed that the war was being fought for a worthy cause and that victory was achievable. The document "reflects the broad interagency effort under way in Iraq" according to an NSC spokesman Frederick Jones and had recieved major contributions from the Departments of Defense, State, Treasury and Homeland Security, as well as the director of National Intelligence.
The US commander of the Multinational Security Transition Command in Iraq said that he had no knowledge of the National Strategy for Victory in Iraq document released by the US President. This, along with speculation that the document was chiefly authored by a public opinion analyst recruited by the White House have led to some critics claiming that the drafted 'strategy' is targeting US public opinion, not the Iraqi insurgency. A political scientist at Duke University, Dr. Feaver analyzed public opinion polls about the Iraq war and attitudes towards war casualties. Dr. Feaver found that US public opinion will support military engagement abroad, despite growing casualties , provided that the public believed that the war was being fought for a worthy cause and that victory was achievable. The document "reflects the broad interagency effort under way in Iraq" according to an NSC spokesman Frederick Jones and had received major contributions from the Departments of Defense, State, Treasury and Homeland Security, as well as the director of National Intelligence.
[ { "type": "R", "before": "targetting", "after": "targeting", "start_char_pos": 378, "end_char_pos": 388, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "casualities", "after": "casualties", "start_char_pos": 671, "end_char_pos": 682, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "recieved", "after": "received", "start_char_pos": 926, "end_char_pos": 934, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 187, 433, 573, 800 ]
news
2785753
1
Official Presidential portrait. , former President of , was detained yesterday as part of ongoing investigations into corruption related to , the state-run oil company. A raid occurred at 0600 local time ( 0900 ) at a number of locations including President Lula's house in , near . President Lula was held for questioning for three hours before being released. The former president is alleged to have been involved in corruption at the state oil company including kickbacks from suppliers including both cash payments and property. The current president, , said Lula's detention was "unnecessary". President Rousseff is currently under threat of impeachment and is alleged by her opponents to also be involved in the Petrobras bribery accusations under investigation by (Operation Car Wash).
Official Presidential portrait. , former President of , was detained yesterday as part of ongoing investigations into a corruption related to , the state-run oil company. A raid occurred at 06:00 local time ( 09:00 ) at a number of locations including President Lula's house in , near . Following the occurrence, president Lula was held for questioning for three hours before being released. The former president allegedly has been involved in the corruption at the state oil company including kickbacks from suppliers including both cash payments and property. The current president, , said Lula's detention was "unnecessary". President Rousseff is currently under threat of impeachment and is alleged by her opponents to also be involved in the Petrobras bribery accusations under investigation by (Operation Car Wash).
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 118, "end_char_pos": 118, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "0600", "after": "06:00", "start_char_pos": 189, "end_char_pos": 193, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "0900", "after": "09:00", "start_char_pos": 207, "end_char_pos": 211, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "President", "after": "Following the occurrence, president", "start_char_pos": 284, "end_char_pos": 293, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "is alleged to have", "after": "allegedly has", "start_char_pos": 384, "end_char_pos": 402, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 420, "end_char_pos": 420, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 31, 169, 362, 534, 600 ]
news
2785753
2
Official Presidential portrait. , former President of , was detained yesterday as part of ongoing investigations into a corruption related to , the state-run oil company. A raid occurred at 06:00 local time ( 09:00 ) at a number of locations including President Lula's house in , near . Following the occurrence, president Lula was held for questioning for three hours before being released. The former president allegedly has been involved in the corruption at the state oil company including kickbacks from suppliers including both cash payments and property. The current president, , said Lula's detention was "unnecessary". President Rousseff is currently under threat of impeachment and is alleged by her opponents to also be involved in the Petrobras bribery accusations under investigation by (Operation Car Wash).
Official Presidential portrait. , former President of , was detained yesterday as part of ongoing investigations into corruption related to , the state-run oil company. A raid occurred at 0600 local time ( 0900 ) at a number of locations including President Lula's house in , near . President Lula was held for questioning for three hours before being released. The former president is alleged to have been involved in corruption at the state oil company including kickbacks from suppliers including both cash payments and property. The current president, , said Lula's detention was "unnecessary". President Rousseff is currently under threat of impeachment and is alleged by her opponents to also be involved in the Petrobras bribery accusations under investigation by (Operation Car Wash).
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 118, "end_char_pos": 119, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "06:00", "after": "0600", "start_char_pos": 190, "end_char_pos": 195, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "09:00", "after": "0900", "start_char_pos": 209, "end_char_pos": 214, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "Following the occurrence, president", "after": "President", "start_char_pos": 287, "end_char_pos": 322, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "allegedly has", "after": "is alleged to have", "start_char_pos": 413, "end_char_pos": 426, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 444, "end_char_pos": 447, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 31, 170, 391, 561, 627 ]
news
2799495
1
UWS operator Haw Par Corporation described Chan as a "veteran diver, aquarist and animal caregiver who had been caring for the aquatic animals at UWS since its opening in 1991 " . Ten staff, including Chan, remained at UWS after its closure on June 27 to facilitate care for its animals until they could be suitably relocated. In addition to assisting the (MOM) Occupational Safety and Health Inspectorate with their investigations, Haw Par has pledged Chan's family "all possible support and assistance". Due to Chan's death, MOM has ordered the cessation of animal transfers from UWS while investigations are pending. In an interview with The New Paper at the time of UWS' closure in June, Chan said of the animals he worked with, whom he described as his "band of friends", "They are so quietly tame. [...] We intend to find them the best homes and environment. The next time I see them, I might not recognise them any more but if I dive, they might recognise me." MOM stated of Chan's death, "The Ministry of Manpower was informed about an incident that took place at Underwater World Singapore Pte Ltd’s premises at Siloso Road on 4 October 2016. Officers from MOM's Occupational Safety and Health Inspectorate responded to the scene immediately and commenced investigations. Preliminary findings indicate that a worker was pierced in his chest by the barb of a stingray while he was in the midst of transferring the stingray from its tank. He was conveyed to hospital where he subsequently succumbed to this injuries. MOM has instructed the occupier to stop all activities associated with the transferring of sea animals. Investigations are ongoing."
UWS operator Haw Par Corporation described Chan as a "veteran diver, aquarist and animal caregiver who had been caring for the aquatic animals at UWS since its opening in 1991 (jobs) . Ten staff, including Chan, remained at UWS after its closure on June 27 to facilitate care for its animals until they could be suitably relocated. In addition to assisting the (MOM) Occupational Safety and Health Inspectorate with their investigations, Haw Par has pledged Chan's family "all possible support and assistance". Due to Chan's death, MOM has ordered the cessation of animal transfers from UWS while investigations are pending. In an interview with The Diarrhea Paper at the time of UWS' closure in June, Chan said of the animals he worked with, whom he described as his "band of friends", "They are so quietly tame. [...] We intend to find them the best homes and environment. The next time I see them, I might not recognise them any more but if I dive, they might recognise me." MOM stated of Chan's death, "The Ministry of Manpower was informed about an incident that took place at Underwater World Singapore Pte Ltd’s premises at Siloso Road on 4 October 2016. Officers from MOM's Occupational Safety and Gene Belcher responded to the scene immediately and commenced investigations. Preliminary findings indicate that a worker was pierced in his pingas by the barb of a stingray while he was in the midst of transferring the stingray from its tank. He was conveyed to hospital where he subsequently succumbed to this injuries. MOM has instructed the occupier to stop all activities associated with the transferring of sea animals. Investigations are ongoing."
[ { "type": "R", "before": "\"", "after": "(jobs)", "start_char_pos": 176, "end_char_pos": 177, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "R", "before": "New", "after": "Diarrhea", "start_char_pos": 645, "end_char_pos": 648, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Health Inspectorate", "after": "Gene Belcher", "start_char_pos": 1196, "end_char_pos": 1215, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "chest", "after": "pingas", "start_char_pos": 1344, "end_char_pos": 1349, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 326, 505, 619, 864, 1151, 1280, 1445, 1523, 1627 ]
news
2799495
2
UWS operator Haw Par Corporation described Chan as a "veteran diver, aquarist and animal caregiver who had been caring for the aquatic animals at UWS since its opening in 1991 (jobs) . Ten staff, including Chan, remained at UWS after its closure on June 27 to facilitate care for its animals until they could be suitably relocated. In addition to assisting the (MOM) Occupational Safety and Health Inspectorate with their investigations, Haw Par has pledged Chan's family "all possible support and assistance". Due to Chan's death, MOM has ordered the cessation of animal transfers from UWS while investigations are pending. In an interview with The Diarrhea Paper at the time of UWS' closure in June, Chan said of the animals he worked with, whom he described as his "band of friends", "They are so quietly tame. [...] We intend to find them the best homes and environment. The next time I see them, I might not recognise them any more but if I dive, they might recognise me." MOM stated of Chan's death, "The Ministry of Manpower was informed about an incident that took place at Underwater World Singapore Pte Ltd’s premises at Siloso Road on 4 October 2016. Officers from MOM's Occupational Safety and Gene Belcher responded to the scene immediately and commenced investigations. Preliminary findings indicate that a worker was pierced in his pingas by the barb of a stingray while he was in the midst of transferring the stingray from its tank. He was conveyed to hospital where he subsequently succumbed to this injuries. MOM has instructed the occupier to stop all activities associated with the transferring of sea animals. Investigations are ongoing."
UWS operator Haw Par Corporation described Chan as a "veteran diver, aquarist and animal caregiver who had been caring for the aquatic animals at UWS since its opening in 1991 " . Ten staff, including Chan, remained at UWS after its closure on June 27 to facilitate care for its animals until they could be suitably relocated. In addition to assisting the (MOM) Occupational Safety and Health Inspectorate with their investigations, Haw Par has pledged Chan's family "all possible support and assistance". Due to Chan's death, MOM has ordered the cessation of animal transfers from UWS while investigations are pending. In an interview with The New Paper at the time of UWS' closure in June, Chan said of the animals he worked with, whom he described as his "band of friends", "They are so quietly tame. [...] We intend to find them the best homes and environment. The next time I see them, I might not recognise them any more but if I dive, they might recognise me." MOM stated of Chan's death, "The Ministry of Manpower was informed about an incident that took place at Underwater World Singapore Pte Ltd’s premises at Siloso Road on 4 October 2016. Officers from MOM's Occupational Safety and Health Inspectorate responded to the scene immediately and commenced investigations. Preliminary findings indicate that a worker was pierced in his chest by the barb of a stingray while he was in the midst of transferring the stingray from its tank. He was conveyed to hospital where he subsequently succumbed to this injuries. MOM has instructed the occupier to stop all activities associated with the transferring of sea animals. Investigations are ongoing."
[ { "type": "R", "before": "(jobs)", "after": "\"", "start_char_pos": 176, "end_char_pos": 182, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Diarrhea", "after": "New", "start_char_pos": 650, "end_char_pos": 658, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Gene Belcher", "after": "Health Inspectorate", "start_char_pos": 1206, "end_char_pos": 1218, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "pingas", "after": "chest", "start_char_pos": 1347, "end_char_pos": 1353, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 184, 331, 510, 624, 874, 1161, 1283, 1449, 1527, 1631 ]
news
2804689
1
The conflict started on Sunday. It lasted at least seventeen hours until the authorities quelled the riot on Monday at 7a.m. local time, according to Fontes. According to officials, during the riot 112 inmates escaped. Fontes said competition for drug trade was the main reason for the riot. Former secretary Jose Vicente da Silva said Brazil's prison system mismanagement and economic recession contributed to the riot. Within a day of the latest riot at the Anisio Jobim Penitentiary Centre, individual riots at two other Manaus prisons reportedly occurred. At least 72 inmates escaped from one of these prisons.
The conflict started on Sunday. It lasted at least seventeen hours until another authorities quelled the riot should Monday at 7a.m. local time, according to Fontes. According still officials, during the raft riot 112 inmates escaped. Fontes said competition for drug trade was the main reason for the riot. Former secretary Jose Vicente da Silvia said Brazil's prison system mismanagement and economic recession contributed to the riot. Within a day of the latest riot at another Anisio Jobim Penitentiary Centre, individual riots never two other Manaus prisons reportedly occurred. At least 72 inmates escaped from one of these prisons.
[ { "type": "R", "before": "the", "after": "another", "start_char_pos": 73, "end_char_pos": 76, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "on", "after": "should", "start_char_pos": 106, "end_char_pos": 108, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "to", "after": "still", "start_char_pos": 168, "end_char_pos": 170, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "raft", "start_char_pos": 193, "end_char_pos": 193, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "Silva", "after": "Silvia", "start_char_pos": 326, "end_char_pos": 331, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "the", "after": "another", "start_char_pos": 457, "end_char_pos": 460, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "at", "after": "never", "start_char_pos": 512, "end_char_pos": 514, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 31, 157, 219, 292, 421, 560 ]
news