{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:38.811282Z"
},
"title": "Chinese Grammatical Error Diagnosis Based on RoBERTa-BiLSTM-CRF Model",
"authors": [
{
"first": "Yingjie",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou Henan",
"country": "China"
}
},
"email": ""
},
{
"first": "Yingjie",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou Henan",
"country": "China"
}
},
"email": "yjyan@gs.zzu.edu.cn"
},
{
"first": "Yangchao",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou Henan",
"country": "China"
}
},
"email": "hanyangchao@foxmail.com"
},
{
"first": "Rui",
"middle": [],
"last": "Chao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou Henan",
"country": "China"
}
},
"email": "zzuruichao@163.com"
},
{
"first": "Hongying",
"middle": [],
"last": "Zan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Zhengzhou University",
"location": {
"settlement": "Zhengzhou Henan",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Chinese Grammatical Error Diagnosis (CGED) is a natural language processing task for the NLPTEA6 workshop. The goal of this task is to automatically diagnose grammatical errors in Chinese sentences written by L2 learners. This paper proposes a RoBERTa-BiLSTM-CRF model to detect grammatical errors in sentences. Firstly, RoBERTa model is used to obtain word vectors. Secondly, word vectors are input into BiLSTM layer to learn context features. Last, CRF layer without hand-craft features work for processing the output by BiLSTM. The optimal global sequences are obtained according to state transition matrix of CRF and adjacent labels of training data. In experiments, the result of RoBERTa-CRF model and ERNIE-BiLSTM-CRF model are compared, and the impacts of parameters of the models and the testing datasets are analyzed. In terms of evaluation results, our recall score of RoBERTa-BiLSTM-CRF ranks fourth at the detection level.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Chinese Grammatical Error Diagnosis (CGED) is a natural language processing task for the NLPTEA6 workshop. The goal of this task is to automatically diagnose grammatical errors in Chinese sentences written by L2 learners. This paper proposes a RoBERTa-BiLSTM-CRF model to detect grammatical errors in sentences. Firstly, RoBERTa model is used to obtain word vectors. Secondly, word vectors are input into BiLSTM layer to learn context features. Last, CRF layer without hand-craft features work for processing the output by BiLSTM. The optimal global sequences are obtained according to state transition matrix of CRF and adjacent labels of training data. In experiments, the result of RoBERTa-CRF model and ERNIE-BiLSTM-CRF model are compared, and the impacts of parameters of the models and the testing datasets are analyzed. In terms of evaluation results, our recall score of RoBERTa-BiLSTM-CRF ranks fourth at the detection level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The number of foreigners learning Chinese is constantly increasing. Some foreign countries even regard Chinese as their second language. Learners of Chinese as a foreign language (CFL) may make grammatical errors in writing Chinese. And the goal of Chinese grammatical error diagnosis (CGED) shared task is to develop NLP techniques to automatically diagnose grammatical errors in Chinese sentences written by L2 learners. Such errors fall into four categories: redundant words (denoted as a capital \"R\"), missing words (\"M\"), word selection errors (\"S\"), and word ordering errors (\"W\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The criteria for judging correctness are determined at three levels as follows. (1) Detection-level: to distinguish whether a sentence contains grammatical errors; (2) Identificationlevel: to identify the types of those errors type; (3) Position-level: to detect positions where errors occur. The quality of diagnosis is measured by FPR (False Positive Rate), Pre (Precision), Rec (Recall), and F1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
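{
"text": "As a small, hedged illustration of how these detection-level measures relate, the following Python sketch computes FPR, Pre, Rec, and F1 from binary sentence-level judgements (1 = the sentence contains at least one error); the function name and input format are illustrative assumptions, not part of the official evaluation script.\ndef detection_metrics(gold, pred):\n    # gold/pred: lists of 0/1 flags per sentence, 1 meaning the sentence contains an error\n    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)\n    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)\n    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)\n    tn = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 0)\n    fpr = fp / (fp + tn) if fp + tn else 0.0  # correct sentences wrongly flagged as erroneous\n    pre = tp / (tp + fp) if tp + fp else 0.0\n    rec = tp / (tp + fn) if tp + fn else 0.0\n    f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0\n    return fpr, pre, rec, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},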
{
"text": "CGED shared task has been held since 2014 (YUa et al.,2014) . In CGED of NLP-TEA 2018 (Rao et al.,2018) , deep learning models are widely used, LSTM-CRF has been a standard implementation (Fu et al.,2018; Zhou et al., 2018) . While, in recent years, pre-training models, such as BERT, XLNET, ERNIE(Sun et al.,2019) and RoBERTa (Liu et al., 2019) achieve good performance in various NLP tasks (Qiu et al.,2020) because of their fast convergence speed and less cost.",
"cite_spans": [
{
"start": 42,
"end": 59,
"text": "(YUa et al.,2014)",
"ref_id": null
},
{
"start": 86,
"end": 103,
"text": "(Rao et al.,2018)",
"ref_id": null
},
{
"start": 188,
"end": 204,
"text": "(Fu et al.,2018;",
"ref_id": "BIBREF0"
},
{
"start": 205,
"end": 223,
"text": "Zhou et al., 2018)",
"ref_id": null
},
{
"start": 285,
"end": 314,
"text": "XLNET, ERNIE(Sun et al.,2019)",
"ref_id": null
},
{
"start": 327,
"end": 345,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 392,
"end": 409,
"text": "(Qiu et al.,2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes a RoBERTa-BiLSTM-CRF model to detect grammatical errors. The model is described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) The RoBERTa model contains general domain data features and fine-tunes the CGED training data to obtain the corresponding word vectors. (2) The BiLSTM layer captures sentence-level features based on the powerful long-term memory ability, and CRF works for adjusting labels. The CRF layer only learns from word information without any handcraft features. (3) In this CGED shared task, our model is only used to detect grammatical errors but not correct them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We regard the CGED task as a sequence labeling task. The illustrative graph of RoBERTa-BiLSTM-CRF is shown in Figure 1 . Chinese characters are input into RoBERTa\uff0cand RoBERTa converts each character into a one-dimensional vector. Vector T1, T2, \u2026Tn fused with semantic features are output. The BiLSTM layer makes full use of the context information of the input sequence in the sequence labeling task so that it can predict label more accurately.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
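{
"text": "A minimal sketch of this architecture in PyTorch is given below, assuming the HuggingFace transformers library and the third-party pytorch-crf package; the checkpoint name, hidden size, and tag count are illustrative assumptions rather than our exact configuration.\nimport torch.nn as nn\nfrom transformers import AutoModel\nfrom torchcrf import CRF\n\nclass RobertaBiLSTMCRF(nn.Module):\n    def __init__(self, pretrained='hfl/chinese-roberta-wwm-ext-large', num_tags=8, lstm_hidden=256):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(pretrained)  # RoBERTa: characters -> contextual vectors T1..Tn\n        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden, batch_first=True, bidirectional=True)\n        self.emit = nn.Linear(2 * lstm_hidden, num_tags)      # per-character tag scores\n        self.crf = CRF(num_tags, batch_first=True)            # transition matrix + Viterbi decoding\n\n    def forward(self, input_ids, attention_mask, tags=None):\n        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state\n        h, _ = self.bilstm(h)\n        emissions = self.emit(h)\n        mask = attention_mask.bool()\n        if tags is not None:\n            return -self.crf(emissions, tags, mask=mask, reduction='mean')  # training: negative log-likelihood\n        return self.crf.decode(emissions, mask=mask)  # inference: best global label sequence per sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},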
{
"text": "The CRF layer fully considers the context correlation when predicting the label. More importantly, the Viterbi algorithm of CRF uses the dynamic programming method to find the path with the highest probability. Therefore, it fits better with the task of CGED and avoids illegal sequences, such as 'B-R' tag followed by 'I-R' tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
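{
"text": "The Viterbi search itself can be sketched with numpy as below; the emission matrix 'emissions' (one row of tag scores per character) and the transition matrix 'trans' are assumed inputs, and an illegal transition can be suppressed simply by giving it a very large negative score in 'trans'.\nimport numpy as np\n\ndef viterbi(emissions, trans):\n    # emissions: (seq_len, num_tags) scores; trans[i, j]: score of moving from tag i to tag j\n    seq_len, num_tags = emissions.shape\n    score = emissions[0].copy()\n    back = np.zeros((seq_len, num_tags), dtype=int)\n    for t in range(1, seq_len):\n        total = score[:, None] + trans + emissions[t][None, :]\n        back[t] = total.argmax(axis=0)   # best previous tag for each current tag\n        score = total.max(axis=0)\n    path = [int(score.argmax())]\n    for t in range(seq_len - 1, 0, -1): # follow back-pointers to recover the best global path\n        path.append(int(back[t, path[-1]]))\n    return path[::-1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},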
{
"text": "RoBERTa model can represent relationships between various words and extract important features in the text. The transformer structure of RoBERTa can get vector representations of sentences from inputting tokens. The RoBERTa model uses a dynamic mask strategy, the model will gradually adapt to different mask strategies processing continuous input data. Compared with training ERNIE model\uff0ctraining RoBERTa model needs larger data sizes and batches. Besides, RoBERTa-large has more network layers and a more complex structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa model",
"sec_num": "2.1"
},
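{
"text": "A minimal sketch of the dynamic masking idea is given below, under the assumption that a fresh mask is re-sampled every time a sentence is served rather than being fixed once during preprocessing; the 15% rate and the -100 ignore index are conventional choices, not details reported in this paper.\nimport random\n\ndef dynamic_mask(token_ids, mask_id, prob=0.15):\n    masked, labels = [], []\n    for tid in token_ids:\n        if random.random() < prob:   # re-drawn on every pass over the data\n            masked.append(mask_id)\n            labels.append(tid)       # the model must recover the original token\n        else:\n            masked.append(tid)\n            labels.append(-100)      # conventionally ignored by the masked-language-model loss\n    return masked, labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa model",
"sec_num": "2.1"
},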
{
"text": "BiLSTM (Bidirectional Long-Short Term Memory) model is composed of a forward LSTM (Long-Short Term Memory) model and a backward LSTM model (Hochreiter et al.,1997) . Each word contains information from forward and backward at any time. LSTM model remembers or forgets previous information through the internal gate structure: forgetting gate, memory cell, input gate, and output gate. Figure 2 shows a basic unit of LSTM. 1) Forgetting gate as shown in formula (1) selects information to forget, in which h \u22121 indicates the previous moment; X indicates input words, and indicates the output of forgetting data.",
"cite_spans": [
{
"start": 139,
"end": 163,
"text": "(Hochreiter et al.,1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 385,
"end": 393,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= (W \u2022 [h \u22121 , X ] + )",
"eq_num": "(1)"
}
],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "2) Memory gate selects information to remember, as shown in formula (2), in which it indicates the output of the memory gate, and indicates the temporary cell's state shown in formula (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= ( \u2022 [\u210e \u22121 , ] + ) (2) = \u210e( \u2022 [\u210e \u22121 , ] + )",
"eq_num": "(3)"
}
],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "3) Memory cell records cell state Ct in the current moment, as shown in formula (4). The last cell state is \u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "= \u2022 \u22121 + \u2022 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "4) Formula (5) and (6) show output gate result and state of this moment \u210e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u2022 [\u210e \u22121 , ] + (5) \u210e = \u2022 tanh ( )",
"eq_num": "(6)"
}
],
"section": "BiLSTM layer",
"sec_num": "2.2"
},
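{
"text": "For concreteness, formulas (1)-(6) can be written as one numpy step as below; the weight matrices W_f, W_i, W_C, W_o and the biases are assumed to be given, and sigmoid is the logistic function.\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef lstm_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):\n    z = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]\n    f_t = sigmoid(W_f @ z + b_f)        # (1) forgetting gate\n    i_t = sigmoid(W_i @ z + b_i)        # (2) memory gate\n    c_tilde = np.tanh(W_C @ z + b_C)    # (3) candidate cell state\n    c_t = f_t * c_prev + i_t * c_tilde  # (4) new cell state\n    o_t = sigmoid(W_o @ z + b_o)        # (5) output gate\n    h_t = o_t * np.tanh(c_t)            # (6) new hidden state\n    return h_t, c_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM layer",
"sec_num": "2.2"
},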
{
"text": "The last layer CRF (conditional random field) (Lafferty et al., 2001 ) are used to learn an optimal path (Liu et al., 2018) . The output dimension of the Bi-LSTM layer is tag size, and the score of input sequence corresponds to the output tag sequence is defined as formula (7).",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF2"
},
{
"start": 105,
"end": 123,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
{
"text": "( , ) = \u2211 , +1 + \u2211 , =1 =0 (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
{
"text": "represents an output matrix of Bi-LSTM, where represents the non-normalized probability of word mapped to , and represents the transition probability of to . Softmax function work for defining a probability value ( | ) as shown in formular (8) for each correct tag sequence .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
{
"text": "( | ) = ( , ) \u2211 ( ,\u0303) \u2208 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
{
"text": "In training, maximizing the likelihood probability ( | ) . Therefore, we define the loss function as \u2212 ( ( | )) , and then use the gradient descent method to learn the network. It is shown in formula (9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
{
"text": "log( ( | )) = log ( ( , ) \u2211 ( ,\u0303) \u2208 ) = S(X, y) \u2212 log(\u2211 ( ,\u0303) \u2208 ) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},
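{
"text": "A brute-force illustration of formulas (7)-(9) is sketched below for tiny examples, ignoring the special start and end transitions; P is the BiLSTM emission matrix, A is the transition matrix, and a real CRF would replace the enumeration of all tag sequences with the forward algorithm.\nimport itertools\nimport numpy as np\n\ndef score(P, A, y):\n    # formula (7): emission scores plus transition scores along the tag sequence y\n    emit = sum(P[i, y[i]] for i in range(len(y)))\n    trans = sum(A[y[i], y[i + 1]] for i in range(len(y) - 1))\n    return emit + trans\n\ndef neg_log_likelihood(P, A, y):\n    # formulas (8)-(9): normalise over every possible tag sequence of the same length\n    n, num_tags = P.shape\n    all_scores = [score(P, A, seq) for seq in itertools.product(range(num_tags), repeat=n)]\n    log_Z = np.logaddexp.reduce(all_scores)\n    return -(score(P, A, y) - log_Z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF layer",
"sec_num": "2.3"
},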
{
"text": "We collect training datasets of CGED2016 (HSK) (Lee et.al, 2016) , CGED2017 (Rao et.al, 2017) , CGED2018, and CGED2020 as training dataset and validation dataset, with a total of 21938 data units. The ratio of training dataset size to validation dataset size is about 8:2. We adopt the CGED2018 testing dataset as our experimental testing dataset, with a total of 3549 data units. CGED2020 testing dataset has a total of 1457 data units. Table1 shows the number of data units, the number of errors grouped by error types in the training dataset, validation dataset, test2018, and test2020.",
"cite_spans": [
{
"start": 47,
"end": 64,
"text": "(Lee et.al, 2016)",
"ref_id": null
},
{
"start": 76,
"end": 93,
"text": "(Rao et.al, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "We segment sentences into separate characters, and tag label for every character. Label 'C' indicates correct character; 'B-X' indicates the beginning position for an error of type 'X' and 'I-X' shows the middle or ending position for an error of type 'X'. Eight kinds of labels in our data: 'B-R', 'I-R', 'B-M', 'B-S', 'I-S', 'B-W', 'I-W', and 'C'. The sample of processed data is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
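{
"text": "A small sketch of this character-level labelling is given below, assuming 1-based, inclusive start_off/end_off offsets as in the CGED annotation; when error spans overlap, later spans simply overwrite earlier ones in this simplified version.\ndef char_labels(sentence, errors):\n    # errors: list of (start_off, end_off, error_type) with 1-based inclusive offsets\n    labels = ['C'] * len(sentence)\n    for start, end, etype in errors:\n        labels[start - 1] = 'B-' + etype\n        for i in range(start, end):   # remaining positions of the span get the 'I-X' tag\n            labels[i] = 'I-' + etype\n    return list(zip(list(sentence), labels))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},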
{
"text": "In the shared task, RoBERTa-BiLSTM-CRF model (Model1) and RoBERTa-CRF model (Model2) are used. Different epochs are set on Model1 and the general parameters of two models are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "\u2022 Original data format: <DOC> <TEXT id=\"200505109525100098_2_9x1\"> \u5373\u4f7f\u7236\u6bcd\u597d\u597d\u6307\u5bfc\u5b69\u5b50\uff0c\u5982\u679c\u7236\u6bcd\u6bcf\u5929 \u73a9\u7684\u8bdd\uff0c\u5bf9\u5b69\u5b50\u7684\u6548\u679c\u4e5f\u6ca1\u6709\u3002 </TEXT> <CORRECTION> \u5373\u4f7f\u7236\u6bcd\u597d\u597d\u6307\u5bfc\u5b69\u5b50\uff0c\u5982\u679c\u7236\u6bcd\u6bcf\u5929 \u73a9\u7684\u8bdd\uff0c\u5bf9\u5b69\u5b50\u7684\u6559\u80b2\u6548\u679c\u4e5f\u6ca1\u6709\u3002 </CORRECTION> <ERROR start_off=\"26\" end_off=\"26\" type=\"M\"></ERROR> <ERROR start_off=\"25\" end_off=\"25\" type=\"R\"></ERROR> <ERROR start_off=\"26\" end_off=\"30\" type=\"W\"></ERROR> </DOC> Processed data format: The following metrics at detection-level, identification-level, and position-level are Pre, Rec, F1, besides an integrated FPR. The results of Model1 (epoch=50; epoch=60) and Model2 on test2018 are shown in Table 3. F1 scores of Model1 are higher than Model2 at detection-level but lower than Model2 at identification-level and position-level. Since the BiLSTM model learns the dependency relationship between sentences, Model1 may capture error information accurately from the global sequences.",
"cite_spans": [],
"ref_spans": [
{
"start": 566,
"end": 574,
"text": "Table 3.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "\u5373 C\\n \u4f7f C\\n \u7236 C\\n \u6bcd C\\n \u597d C\\n \u597d C\\n \u6307 C\\n \u5bfc C\\n \u5b69 C\\n \u5b50 C\\n\uff0c C\\n \u5982 C\\n \u679c C\\n \u7236 C\\n \u6bcd C\\n \u6bcf C\\n \u5929 C\\n \u73a9 C\\n \u7684 C\\n \u8bdd C\\n\uff0cC\\n \u5bf9 C\\n \u5b69 C\\n \u5b50 C\\n \u7684 B-R\\n \u6548 I-W\\n \u679c I-W\\n \u4e5f I-W\\n \u6ca1 I-W\\n \u6709 I-W \\n\u3002C\\n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "F1 scores of models with larger epoch at identification-level and position-level are higher. This is because larger epoch may lead to overfitting of Model1 at detection-level but not at identification-level and position-level. Table 4 shows the three runs submitted to the CGED2020 shared task. Run1 is based on the Model1 with 50 epochs; Run2 is based on Model2 with 50 epochs, and Run3 is based on Model1 with 60 epochs.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "In this shared task, we get a good recall score of Model1 at the detection-level with bad FPR score. The reason may be as follows. The training corpus of the pre-training model, which comes from news, community discussions, and encyclopedias, is different from the training dataset of CGED, and may easily recognize correct sentences as sentences with grammatical errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "The performances of three runs on test2020 are consistent with that on test2018 in sum. But F1 scores of three runs on test2020 at detection-level are all higher than that of test2018. According to statistics of errors in Table 1 , a data unit contains an average of 1.4233 errors on test2018, while a data unit contains an average of 2.467 errors on test2020. This may lead to diagnosis models more easily to predict whether a sentence contains grammatical errors or not. Table 4\uff1a Results of three runs submitted in shared CGED task. Model2(epoch=50), Model1 and Model2(epoch=60) on test2020.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 473,
"end": 481,
"text": "Table 4\uff1a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental results and discussions",
"sec_num": "4.1"
},
{
"text": "After the CGED2020-TEA, we use ERNIE-BiLSTM-CRF model (Model3) to do this task. F1 score of Model1 and Model3 on test2018 and test2020 can be seen in Table 5 and Table 6 . Model1 gets a worse performance than Model3 at three levels on test2018 but better performance on test2020. The reason is that RoBERTa includes 24 transformers, 16 attention head, and 1024 hidden layer units, which make the generalization ability of RoBERTa-BiLSTM-CRF strong.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 169,
"text": "Table 5 and Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Follow-up experiments and discussions",
"sec_num": "4.2"
},
{
"text": "This paper proposes a RoBERTa-BiLSTM-CRF model to detect grammatical errors in the CGED shared task. The results of experiments show RoBERTa-BiLSTM-CRF is a good model for detecting grammatical errors in general since RoBERTa model obtains word vector according to data feature, and BiLSTM-CRF captures sentencelevel features to predict and adjust labels. In the three runs submitted, our recall ranks fourth at detection-level in the CGED shared task. In addition, we find that the performance of ERNIE-BiLSTM-CRF is unreasonable on test2020 in our experiments, we will try to pursue reasons from model structure and characters of datasets in the future work. Rao, Gaoqi, et ",
"cite_spans": [
{
"start": 661,
"end": 675,
"text": "Rao, Gaoqi, et",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "Special thanks to the organizers of CGED 2020 for their great job. We also thank the anonymous reviewers for insightful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu, Ruiji, et al. Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement. Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, Sepp, and J\u00fcrgen Schmidhuber. Long short-term memory. Neural computation 9.8 (1997): 1735-1780.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, John, Andrew McCallum, and Fernando CN Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. (2001).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis",
"authors": [
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Lung-Hao",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, Lung-Hao, Liang-Chih Yu, and Li-Ping Chang. Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications. 2015.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Liu, Yinhan, et al. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Detecting simultaneously Chinese grammar errors based on a BiLSTM-CRF model",
"authors": [
{
"first": "Yajun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Yajun, et al. Detecting simultaneously Chinese grammar errors based on a BiLSTM-CRF model. Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pre-trained models for natural language processing: A survey",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.08271"
]
},
"num": null,
"urls": [],
"raw_text": "Qiu, Xipeng, et al. Pre-trained models for natural language processing: A survey. arXiv preprint arXiv:2003.08271 (2020).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "IJCNLP-2017 task 1: Chinese grammatical error diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rao, Gaoqi, et al. IJCNLP-2017 task 1: Chinese grammatical error diagnosis. Proceedings of the IJCNLP 2017, Shared Tasks. 2017.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rao, Gaoqi, et al. Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis. Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Sun, Yu, et al. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223 (2019).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of grammatical error diagnosis for learning Chinese as a foreign language",
"authors": [
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, Liang-Chih, Lung-Hao Lee, and Li-Ping Chang. Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language. 2014.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Chinese grammatical error diagnosis based on CRF and LSTM-CRF model",
"authors": [
{
"first": "Yujie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Yujie, Yinan Shao, and Yong Zhou. Chinese Grammatical Error Diagnosis Based on CRF and LSTM-CRF model. Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Basic unit in LSTM, it contains forgetting gate, memory cell, input gate and output gate.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "A RoBERTa-BiLSTM-CRF model",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td>Training</td><td>Validation</td><td>Test</td><td>Test</td></tr><tr><td/><td>dataset</td><td>dataset</td><td>2018</td><td>2020</td></tr><tr><td>Units</td><td>17461</td><td>4476</td><td colspan=\"2\">3541 1457</td></tr><tr><td colspan=\"2\">Errors 42335</td><td>10583</td><td colspan=\"2\">5040 3595</td></tr><tr><td>R</td><td>9507</td><td>2377</td><td colspan=\"2\">1119 768</td></tr><tr><td>M</td><td>10963</td><td>2741</td><td colspan=\"2\">1381 816</td></tr><tr><td>S</td><td>19072</td><td>4768</td><td colspan=\"2\">2167 1688</td></tr><tr><td>W</td><td>3157</td><td>789</td><td>373</td><td>323</td></tr></table>",
"num": null,
"text": "A data unit sample of original data and processed data, every character has a label.",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"5\">of three experiments (two models) at three levels on test2018. Model1 represents for RoBERTa-</td></tr><tr><td colspan=\"5\">BiLSTM-CRF model, and Model2 for RoBERTa-CRF model</td></tr><tr><td>Methods</td><td/><td>Run1</td><td>Run2</td><td>Run3</td></tr><tr><td>False Positive Rate</td><td/><td>0.8708</td><td>0.7557</td><td>0.6938</td></tr><tr><td>Detection-level</td><td>Pre.</td><td>0.8118</td><td>0.8182</td><td>0.8254</td></tr><tr><td/><td>Rec.</td><td>0.9304</td><td>0.9078</td><td>0.8757</td></tr><tr><td/><td>F1</td><td>0.8671</td><td>0.8607</td><td>0.8498</td></tr><tr><td>Identification-level</td><td>Pre.</td><td>0.5899</td><td>0.6150</td><td>0.64</td></tr><tr><td/><td>Rec.</td><td>0.5126</td><td>0.5076</td><td>0.5214</td></tr><tr><td/><td>F1</td><td>0.5485</td><td>0.5562</td><td>0.5746</td></tr><tr><td>Position-level</td><td>Pre.</td><td>0.29</td><td>0.2874</td><td>0.2783</td></tr><tr><td/><td>Rec.</td><td>0.1941</td><td>0.1892</td><td>0.2042</td></tr><tr><td/><td>F1</td><td>0.2326</td><td>0.2282</td><td>0.2356</td></tr></table>",
"num": null,
"text": "",
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018. Sun, Yu, et al. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223 (2019). YUa, Liang-Chih, Lung-Hao LEE, and Li-Ping CHANG. Overview of Grammatical Error Diagnosis for Learning Chinese as a Foreign Language. Zhou, Yujie, Model1 Model3 0.7719 0.7755 Identification-level 0.5675 Detection-level 0.6138 Position-level 0.3025 0.4451 Table 5\uff1a F1 scores of Model1 and Model3 on test2018 Model1 Model3 Detection-level 0.8671 0.8311 Identification-level 0.5485 0.527 Position-level 0.2326 0.2153 Table 6\uff1a F1 scores of Model1 and Model3 on test2020</td></tr></table>",
"num": null,
"text": "al. Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis. Yinan Shao, and Yong Zhou. Chinese Grammatical Error Diagnosis Based on CRF and LSTM-CRF model. Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018.",
"type_str": "table"
}
}
}
}