{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:36.527733Z"
},
"title": "CYUT Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2020 CGED Shared Task",
"authors": [
{
"first": "Shih-Hung",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chaoyang University of Technology",
"location": {
"settlement": "Taichung",
"country": "Taiwan, R.O.C"
}
},
"email": "shwu@cyut.edu.tw"
},
{
"first": "Jun-Wei",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chaoyang University of Technology",
"location": {
"settlement": "Taichung",
"country": "Taiwan, R.O.C"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports our Chinese Grammatical Error Diagnosis system in the NLPTEA-2020 CGED shared task. In 2020, we submitted two Runs with two approaches. The first one is a combination of conditional random fields (CRF) and a BERT deep-learning model. The second one is a CRF-only approach. The official test results show that our Run1 achieved the highest precision rate, 0.9875, with the lowest false positive rate, 0.0163, on detection, while Run2 gives a more balanced performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports our Chinese Grammatical Error Diagnosis system in the NLPTEA-2020 CGED shared task. In 2020, we submitted two Runs with two approaches. The first one is a combination of conditional random fields (CRF) and a BERT deep-learning model. The second one is a CRF-only approach. The official test results show that our Run1 achieved the highest precision rate, 0.9875, with the lowest false positive rate, 0.0163, on detection, while Run2 gives a more balanced performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Learning Chinese is popular among foreigners, but it is difficult for them to write correct sentences. Grammatical error detection is a major challenge for learners of Chinese as a second language. Learners rely heavily on teachers to correct their wrong sentences, so it is not easy for them to get timely feedback. Therefore, how to use existing technology to detect and correct the grammatical errors that learners make has become a hot topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since 2014 (Yu et al.,2014) (Lee et al. 2015 ) (Lee et al. 2016) (Rao et al., 2017) (Rao et al., 2018) , the NLP-TEA workshop has provided a series of Chinese Grammatical Error Diagnosis (CGED) shared tasks to promote research on grammar error diagnosis. The organizers ask professional teachers to label the errors in learners' sentences. There are four types of label in the sentences: Redundant (R), Selection (S), Disorder (W), and Missing (M). The goal of the task is to build a system that can predict whether a sentence is wrong and correct it. In previous years, we participated in the NLPTEA CGED shared task (Wu et al., 2018) and showed that such a system can be precision-oriented or recall-oriented for different users.",
"cite_spans": [
{
"start": 11,
"end": 27,
"text": "(Yu et al.,2014)",
"ref_id": "BIBREF25"
},
{
"start": 28,
"end": 44,
"text": "(Lee et al. 2015",
"ref_id": "BIBREF10"
},
{
"start": 47,
"end": 64,
"text": "(Lee et al. 2016)",
"ref_id": "BIBREF9"
},
{
"start": 65,
"end": 83,
"text": "(Rao et al., 2017)",
"ref_id": null
},
{
"start": 84,
"end": 102,
"text": "(Rao et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the emergence of deep learning, we find that sequence-to-sequence models work well for grammar correction, and the BERT model (Devlin et al., 2019; Xu et al., 2019) is among the best pre-trained language models trained on large amounts of data. The pre-trained model is trained with a masked language model (MLM) objective to enhance the strength of the model. In Run1 of 2020, we use BERT as the first level of our identification. We fine-tune the BERT model with the Lang-8 corpus (https://lang-8.com/) and all the data from NLPTEA from 2016 to 2020, so that the model can be used to predict correct and incorrect sentences and reproduce the wrong sentences. The error types are determined by CRF. In Run2, BERT is not used to determine which sentences are wrong or correct; only CRF is used. In the following sections, we introduce related work and our approaches, then discuss the formal test results, and give conclusions and future work.",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(Devlin et al., 2019;",
"ref_id": null
},
{
"start": 159,
"end": 175,
"text": "Xu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Grammar error detection and correction is now a popular research topic in natural language processing (Li et al., 2018; Fu et al., 2018) .",
"cite_spans": [
{
"start": 102,
"end": 119,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 120,
"end": 136,
"text": "Fu et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work shows that a CRF model can be used to integrate various features to build a good system. Better results can be achieved by using a pre-collected collocation word database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, researchers have used deep learning models to address this issue. The most common models are sequence-to-sequence (Ge et al., 2018) and convolutional neural network (Li et al., 2019) models. The idea of sequence-to-sequence is to translate wrong sentences into correct ones, just like translation between two languages. A corrected sentence is generated from the wrong sentence, and it is believed that multiple revisions will give better results. Convolutional neural networks were originally used to process images and have since been applied to text. With their two-dimensional processing power, it is easier for the model to read the context of the text.",
"cite_spans": [
{
"start": 116,
"end": 133,
"text": "(Ge et al., 2018)",
"ref_id": null
},
{
"start": 167,
"end": 184,
"text": "(Li et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since 2019, BERT has produced many state-of-the-art results on several NLP applications, which shows its great influence on natural language processing. Spelling check is a task similar to grammar correction (Cheng et al., 2020; Zhang et al., 2020) . The authors use the BERT internal model to find typos. Although its effect is not the best, it achieves the purpose the authors want.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Cheng et al., 2020;",
"ref_id": null
},
{
"start": 236,
"end": 255,
"text": "Zhang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This year, we mainly focused on minimizing false alarms in error detection. Since the system is meant to help foreign learners, we hope that fewer wrongly flagged errors will keep learners from feeling frustrated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "BERT is a pre-trained language model. Since the original pre-trained model was not trained for Chinese grammar correction, we have to train it with our corpus. For different tasks, better results can be achieved by fine-tuning the pre-trained language model with an additional training corpus. Moreover, BERT has achieved excellent results on various tasks, such as single-sentence classification, sentence labelling, and question answering. In addition to the BERT model, we also use conditional random fields (CRF) to double-check the wrong sentences detected by BERT, and to select the type and location of the errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
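The two-stage design described above (BERT flags wrong sentences; CRF then types and locates the errors) can be sketched as follows. This is a minimal illustration with stand-in detector and labeler functions, not the authors' released code:

```python
# Sketch of the two-stage pipeline: a detector first flags a sentence
# as wrong, and only flagged sentences are passed on to the sequence
# labeler for error typing and localization. Both stage functions are
# stand-ins for the fine-tuned BERT classifier and the trained CRF.

def run_pipeline(sentences, detect, label_errors):
    """detect(sentence) -> bool (is the sentence wrong?);
    label_errors(sentence) -> list of (type, start, end) spans."""
    results = {}
    for sent in sentences:
        if detect(sent):
            results[sent] = label_errors(sent)   # second stage: CRF
        else:
            results[sent] = []                   # judged correct
    return results

if __name__ == "__main__":
    # Toy stand-ins: flag sentences containing a duplicated character.
    detect = lambda s: any(a == b for a, b in zip(s, s[1:]))
    label = lambda s: [("R", 1, 1)]
    print(run_pipeline(["我去了了学校", "我去了学校"], detect, label))
```

Keeping the detector and labeler as separate stages is what lets Run2 drop BERT and run CRF alone.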
{
"text": "We use the BERT pre-trained language model provided by huggingface. The pre-trained model is \"bert-base-chinese\". The fine-tuning training data set is the Lang-8 data set provided by NLPCC and all the training and test data from 2016 to 2020 provided by NLPTEA, except the 2020 test data. Figure 1 shows the BERT fine-tune system architecture. The data set {Sentence_1, Sentence_2, ..., Sentence_n} has been preprocessed. Our system compares the original sentence and the modified sentence from the data set. If the sentence is wrong, it is marked as \"Error_Sentence\", otherwise as \"Correct_Sentence\". Given source tokens = {T1, T2, ..., Tn} with segments = {S1, S2, ..., Sn} and positions = {P1, P2, ..., Pn}, we can fine-tune BERT and obtain the classification results. After classifying the correct and error sentences, the error sentences are then input to the conditional random fields (CRF) model. Table 1 shows the number of wrong and correct sentences and their average length in the fine-tuning data set.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": null
},
{
"start": 919,
"end": 926,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fine-tune Language model",
"sec_num": "3.1"
},
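The sentence-labeling step described in this paragraph can be sketched as follows; the Error_Sentence/Correct_Sentence labels mirror the text, but the function names and code are our own illustration, not the authors' released implementation:

```python
# Minimal sketch of the fine-tuning data preparation: each
# (original, corrected) pair from Lang-8/NLPTEA is labeled
# "Error_Sentence" if the learner sentence differs from the teacher
# correction, and "Correct_Sentence" otherwise.

def label_pair(original, corrected):
    """Label a sentence for BERT fine-tuning by comparing it
    with its corrected form."""
    if original == corrected:
        return (original, "Correct_Sentence")
    return (original, "Error_Sentence")

def build_dataset(pairs):
    """Turn (original, corrected) pairs into (sentence, label) examples."""
    return [label_pair(orig, corr) for orig, corr in pairs]

if __name__ == "__main__":
    pairs = [
        ("我昨天去了学校。", "我昨天去了学校。"),    # already correct
        ("我昨天去学校了了。", "我昨天去学校了。"),  # redundant 了
    ]
    for sent, label in build_dataset(pairs):
        print(label, sent)
```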
{
"text": "We use the CRF model in both Runs. Run1 uses a pre-trained language model + CRF, and Run2 uses only CRF. We want to see what changes occur when the pre-trained language model is added. CRF is used to mark the error type and location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2"
},
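The CRF input layout described in this section (one Jieba-segmented word per row, with its POS tag and an error-type label) can be sketched as follows; the to_crf_rows helper, the example POS tags, and the O tag for unmarked tokens are our own assumptions:

```python
# Sketch of the column-oriented CRF training data: word, part of
# speech, and error-type label, one token per row. "O" marks tokens
# that carry no error (a common sequence-labeling convention; the
# paper itself does not name its no-error tag).

def to_crf_rows(tokens, pos_tags, labels):
    """Zip parallel word / POS / label sequences into tab-separated rows."""
    assert len(tokens) == len(pos_tags) == len(labels)
    return ["\t".join(cols) for cols in zip(tokens, pos_tags, labels)]

if __name__ == "__main__":
    rows = to_crf_rows(["我", "去", "了", "了", "学校"],
                       ["r", "v", "ul", "ul", "n"],
                       ["O", "O", "O", "R", "O"])
    print("\n".join(rows))
```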
{
"text": "CRF is regarded as a sequence labeling model. As shown in Figure 2 , the model is trained according to the sequence label S we provide, and the trained model is used to predict the corresponding sequence label Y ( Table 1 shows the fine-tuning data set statistics; Figure 1 shows the fine-tune system architecture). The sequence tags we provide to the model contain the words and parts of speech segmented by Jieba. The part of speech (POS) has a very good effect during training. In the first column of the sequence we place the segmented words; in the second column, the part of speech of each word; then the label of the error type {T1, T2, ..., Tn} and the position of the word {P1, P2, ..., Pn}. Finally, the error type and location are transformed into the format specified by the shared task. Figure 3 shows the pre-processing flowchart. The Lang-8 and NLPTEA data are used for fine-tuning the pre-trained language model. The sentence before correction is regarded as an error and the sentence after correction as correct. When preparing the dataset for CRF, our system compares the Lang-8 sentences before and after correction using Jieba segmentation and edit distance. The differences between the two sentences are then used to determine the three different error types and positions within the edit distance. With the help of Jieba, our system can extract the words in the original sentence and obtain the part-of-speech (POS) tags. The error types include redundant words (R), word selection errors (S), and missing words (M). Next, we use the three operations in edit distance: insert means missing words, delete means redundant words, and replace means word selection errors. The position of the wrong word is calculated through these three edit operations. We bypass word ordering errors (W) here because they are very difficult and the training data is too small. The different training materials of NLPTEA and Lang-8 have thus been marked with error types and positions.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 2",
"ref_id": null
},
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": null
},
{
"start": 183,
"end": 191,
"text": "Figure 1",
"ref_id": null
},
{
"start": 795,
"end": 803,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "3.2"
},
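The edit-distance mapping described above (insert -> missing M, delete -> redundant R, replace -> selection S) can be sketched with difflib's opcode alignment; the diagnose function and the 1-based span convention are our own illustration, not the authors' exact implementation:

```python
from difflib import SequenceMatcher

# Align the tokenized sentence before and after correction and map the
# three edit operations onto the three error types used by the task:
# insert -> missing (M), delete -> redundant (R), replace -> selection (S).
# Token lists stand in for Jieba output.

def diagnose(before_tokens, after_tokens):
    """Return (error_type, start, end) spans over the original tokens,
    1-based and inclusive."""
    errors = []
    ops = SequenceMatcher(a=before_tokens, b=after_tokens).get_opcodes()
    for tag, i1, i2, j1, j2 in ops:
        if tag == "replace":                       # word selection error
            errors.append(("S", i1 + 1, i2))
        elif tag == "delete":                      # redundant word(s)
            errors.append(("R", i1 + 1, i2))
        elif tag == "insert":                      # missing word(s)
            errors.append(("M", i1 + 1, i1 + 1))   # position of the gap
    return errors

if __name__ == "__main__":
    before = ["我", "昨天", "去", "了", "了", "学校"]
    after = ["我", "昨天", "去", "了", "学校"]
    print(diagnose(before, after))   # the duplicated 了 is flagged as R
```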
{
"text": "During CRF training, because using too much training data leads to poor training results, only 57,386 error sentences are used. Each sentence is processed through Jieba (https://github.com/fxsjy/jieba) for word segmentation, and finally the corresponding error type is placed at the corresponding position. Table 2 and Table 3 show the official results of our system in the CGED 2020 shared task. The results show that our Run1 achieved the highest precision rate, 0.9875, with the best false positive rate, 0.0163, on detection. In Run2 we improved the recall greatly, from 0.3443 to 0.6296, with a drop in the precision rate from 0.9875 to 0.8117. The trade-off between precision and recall is still obvious.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 319,
"text": "Table 2 and Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.3"
},
{
"text": "When encountering sentences that are too long, our model cannot predict the correct result very well. We think this is because the average sentence length in the fine-tuning training set is only 20-21. However, as shown in Table 5 , the average length of the sentences misjudged by BERT is mostly above 38, with only a few below 38. So in the future, we will try to use GPT2 or GPT3 to detect errors in long sentences. Table 4 shows some examples that include errors but that our BERT system fails to detect. As shown in Table 6 , we can see the most frequent error words for the three types of errors; the most frequent errors all involve a single word. Error types R and S share similar frequent words, including \"\u7684\", \"\u662f\", and \"\u4e86\". The error type M mostly involves punctuation: most systems filter out punctuation for the convenience of training, yet punctuation can make a badly written article easier to read. In future work, we will modify the model to address the above problems.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 417,
"end": 424,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 515,
"end": 522,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.2"
},
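The per-type frequency count behind Table 6 can be sketched as follows; the record format (a list of (error_type, word) pairs) and the helper name are our own assumptions:

```python
from collections import Counter

# Tally which words occur most often under each error type (R, S, M),
# as in the paper's error analysis.

def most_frequent_errors(records, top_n=3):
    """Group error records by type and return the top-N words per type."""
    by_type = {}
    for err_type, word in records:
        by_type.setdefault(err_type, Counter())[word] += 1
    return {t: c.most_common(top_n) for t, c in by_type.items()}

if __name__ == "__main__":
    records = [("R", "的"), ("R", "的"), ("R", "了"),
               ("S", "的"), ("M", "。"), ("M", "。")]
    print(most_frequent_errors(records))
```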
{
"text": "In the 2020 NLP-TEA CGED shared task, we submitted two Runs. The results show that our Run1 achieved the highest precision rate (0.9875) with the lowest false positive rate (0.0163) on error detection. This shows that our system can point out errors with very high confidence. With very few false alarms, the system can help learners notice that they have really made a grammar error. However, the recall rate of our system in Run1 is not very high. In Run2 we improved the recall greatly, from 0.3443 to 0.6296, with a drop in the precision rate from 0.9875 to 0.8117. The trade-off between precision and recall needs more attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5."
},
{
"text": "In the future, we will combine the methods of BERT and GPT2 to improve on sentences that our current system cannot detect effectively. At the correction level, we also hope to filter out the best alternative words through GPT2's sentence-rewriting method. Table 6 . NLPTEA CGED 2016-2020 test set: the most frequent errors in each error type",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5."
},
{
"text": "https://huggingface.co/transformers/model_doc/bart.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This study was supported by the Ministry of Science and Technology under the grant number MOST 109-2221-E-324-024.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "871--881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Qi, SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check, in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, p.p 871-881, July 5 -10, 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Jiefu Gong; Wei Song; Dechuan Teng; Wanxiang Che; Shijin Wang",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu; Zhengqi",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu; Zhengqi Pei; Jiefu Gong; Wei Song; Dechuan Teng; Wanxiang Che; Shijin Wang; Guoping Hu;",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chinese Grammatical Error Diagnosis using Statistical and Prior Knowledge driven Features with Probabilistic Ensemble Enhancement",
"authors": [
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting Liu, Chinese Grammatical Error Diagnosis using Statistical and Prior Knowledge driven Features with Probabilistic Ensemble Enhancement, in Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications, Melbourne, Australia, July 19, 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fluency Boost Learning and Inference for Neural Grammatical Error Correction",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Zhou, Fluency Boost Learning and Inference for Neural Grammatical Error Correction, in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, p.p 1-11, Melbourne, Australia, July 15 -20, 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the NLP-TEA 2016 Shared Task for Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Lung-Hao Lee",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Endong",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung-Hao Lee, Gaoqi Rao,Liang-Chih Yu, Li-Ping Chang, Xun Endong, Baolin Zhang, Li-Ping Chang. 2016. Overview of the NLP-TEA 2016 Shared Task for Chinese Grammatical Error Diagnosis. 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLP- TEA'16), Osaka, Japan.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis",
"authors": [
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'15)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung-Hao Lee, Liang-Chih Yu, and Li-Ping Chang. 2015. Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. In Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'15), pages 1-6, Beijing, China.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Hybrid System for Chinese Grammatical Error Diagnosis and Correction",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Li, A Hybrid System for Chinese Grammatical Error Diagnosis and Correction, in Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications, Melbourne, Australia, July 19, 2018.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Chinese Grammatical Error Correction Based on Convolutional Sequence to Sequence Model",
"authors": [
{
"first": "Zhiqing",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "72905--72913",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2019.2917631"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiqing Lin, Chinese Grammatical Error Correction Based on Convolutional Sequence to Sequence Model, IEEE Access., vol. 7, pp. 72905 -72913, May 17, 2019. doi: 10.1109/ACCESS.2019.2917631",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaoqi Rao, Qi Gong, Baolin Zhang, Endong Xun, 2018. Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis, in Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications, Melbourne, Australia, July 19, 2018.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task",
"authors": [
{
"first": "; Ping-Che",
"middle": [],
"last": "Liang-Pu Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang-Pu Chen; Ping- Che Yang, CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task, in Proceedings of The 5th Workshop on Natural Language Processing Techniques for Educational Applications, Melbourne, Australia, July 19, 2018.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Training for Review Reading Comprehension and Aspect-based Sentiment Analysis",
"authors": [
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02232v2"
]
},
"num": null,
"urls": [],
"raw_text": "Philip S. Yu, BERT Post- Training for Review Reading Comprehension and Aspect-based Sentiment Analysis, arXiv:1904.02232v2,May 4, 2019.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language",
"authors": [
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'14)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang-Chih Yu, Lung-Hao Lee, and Li-Ping Chang (2014). Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language. Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'14), Nara, Japan, 30",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Spelling Error Correction with Soft-Masked BERT",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "882--890",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Li, Spelling Error Correction with Soft-Masked BERT, in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 882- 890, July 5 -10, 2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Figure 3 CRF system architecture",
"type_str": "figure"
},
"TABREF2": {
"text": "Detection level in CGED 2020",
"html": null,
"content": "<table><tr><td>Submission</td><td>False Positive Rate (the lower the better)</td></tr><tr><td>Run1</td><td>0.0163</td></tr><tr><td>Run2</td><td>0.5472</td></tr><tr><td>Average of 43 runs</td><td>0.3920</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "False positive rate in CGED 2020",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Examples of long sentences with grammar errors in CGED 2020",
"html": null,
"content": "<table><tr><td># of Sentences</td><td>Average length</td></tr><tr><td>460</td><td>38</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "The sentence statistics of test set Gaoqi Rao, Baolin Zhang, Endong Xun. 2017. IJCNLP-2017 Task1: Chinese Grammatical Error Diagnosis. 8th International Joint Conference of Nature Language Processing (IJCNLP2017), Taipei, Taiwan.",
"html": null,
"content": "<table><tr><td>R</td><td>R_num</td><td>S</td><td>S_num</td><td>M</td><td>M_num</td></tr><tr><td>\u7684</td><td>433</td><td>\u7684</td><td>265</td><td>\u3002</td><td>362</td></tr><tr><td>\u4e86</td><td>308</td><td>\u800c</td><td>77</td><td>\uff0c</td><td>224</td></tr><tr><td>\u662f</td><td>198</td><td>\u4e2a</td><td>68</td><td>\u4e0d</td><td>144</td></tr><tr><td>\u5728</td><td>98</td><td>\u6709</td><td>56</td><td>\u7684</td><td>133</td></tr><tr><td>\u6709</td><td>82</td><td>\u505a</td><td>55</td><td>\u4e00</td><td>113</td></tr><tr><td>\u4e0a</td><td>63</td><td>\u5728</td><td>52</td><td>\u6211</td><td>96</td></tr><tr><td>\u4e5f</td><td>51</td><td>\u5f97</td><td>48</td><td>\u662f</td><td>75</td></tr><tr><td>\u6211</td><td>43</td><td>\u662f</td><td>44</td><td>\u6709</td><td>74</td></tr><tr><td>\u5bf9</td><td>42</td><td>\u5c0d</td><td>38</td><td>\u5f88</td><td>71</td></tr><tr><td>\u800c</td><td>42</td><td>\u4e5f</td><td>37</td><td>\u4eba</td><td>54</td></tr><tr><td>\u8981</td><td>39</td><td>\u5bf9</td><td>37</td><td>\u8fd9</td><td>46</td></tr><tr><td>\u4f1a</td><td>37</td><td>\u4e86</td><td>36</td><td>\u5728</td><td>45</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}