{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:42.372007Z"
},
"title": "BERT Enhanced Neural Machine Translation and Sequence Tagging Model for Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "liangdeng@unipus.cn"
},
{
"first": "Chen",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "zhengchen@unipus.cn"
},
{
"first": "Lei",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "guolei@unipus.cn"
},
{
"first": "Xin",
"middle": [],
"last": "Cui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "cuixin@unipus.cn"
},
{
"first": "Xiuzhang",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": ""
},
{
"first": "Hengqiao",
"middle": [],
"last": "Rong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "hengqiao.rong@student.kuleuven.be"
},
{
"first": "Jinpeng",
"middle": [],
"last": "Dong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Waiyan Online Digital Technology Co.,Ltd",
"location": {}
},
"email": "dongjp@unipus.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the UNIPUS-Flaubert team's hybrid system for the NLPTEA 2020 shared task of Chinese Grammatical Error Diagnosis (CGED). As a challenging NLP task, CGED has attracted increasing attention recently and has not yet fully benefited from the powerful pre-trained BERT-based models. We explore this by experimenting with three types of models. The position-tagging models and correction-tagging models are sequence tagging models fine-tuned on pre-trained BERTbased models, where the former focuses on detecting, positioning and classifying errors, and the latter aims at correcting errors. We also utilize rich representations from BERT-based models by transferring the BERT-fused models to the correction task, and further improve the performance by pre-training on a vast size of unsupervised synthetic data. To the best of our knowledge, we are the first to introduce and transfer the BERT-fused NMT model and sequence tagging model into the Chinese Grammatical Error Correction field. Our work achieved the second-highest F1 score at the detecting errors, the best F1 score at correction top1 subtask and the second-highest F1 score at correction top3 subtask.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the UNIPUS-Flaubert team's hybrid system for the NLPTEA 2020 shared task of Chinese Grammatical Error Diagnosis (CGED). As a challenging NLP task, CGED has attracted increasing attention recently and has not yet fully benefited from the powerful pre-trained BERT-based models. We explore this by experimenting with three types of models. The position-tagging models and correction-tagging models are sequence tagging models fine-tuned on pre-trained BERTbased models, where the former focuses on detecting, positioning and classifying errors, and the latter aims at correcting errors. We also utilize rich representations from BERT-based models by transferring the BERT-fused models to the correction task, and further improve the performance by pre-training on a vast size of unsupervised synthetic data. To the best of our knowledge, we are the first to introduce and transfer the BERT-fused NMT model and sequence tagging model into the Chinese Grammatical Error Correction field. Our work achieved the second-highest F1 score at the detecting errors, the best F1 score at correction top1 subtask and the second-highest F1 score at correction top3 subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, the pre-trained language models such as BERT (Devlin et al., 2019) obtain state-of-the-art results on a wide range of natural language processing (NLP) tasks, such as text classification, reading comprehension, machine translation (Zhu et al., 2020) , etc. The English Grammatical Error Correction (GEC) task also benefits from the pretrained language models. For example, in the work of Kaneko et al. (2020) , they not only follow Zhu et al. (2020) to incorporate BERT into an Encoder-Decoder model for GEC, but also maximize the benefit by additionally training BERT on GEC corpora (BERT-fuse mask) or fine-tuning BERT as a GED model (BERT-fuse GED). Another route to improve the performance of GEC is using BERT as an encoder and incorporating it into a sequence tagging model (Malmi et al., 2019; Awasthi et al., 2019; Omelianchuk et al., 2020) .",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 241,
"end": 259,
"text": "(Zhu et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 398,
"end": 418,
"text": "Kaneko et al. (2020)",
"ref_id": "BIBREF12"
},
{
"start": 442,
"end": 459,
"text": "Zhu et al. (2020)",
"ref_id": "BIBREF30"
},
{
"start": 790,
"end": 810,
"text": "(Malmi et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 811,
"end": 832,
"text": "Awasthi et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 833,
"end": 858,
"text": "Omelianchuk et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the Chinese NLP community, a variety of pre-trained Chinese language models have been proposed and publicly available (Sun et al., 2019; Cui et al., 2019 Cui et al., , 2020 . Those models are proved to have a significant improvement in a variety of down-stream tasks, including reading comprehension, natural language inference, sentiment classification, etc.",
"cite_spans": [
{
"start": 121,
"end": 139,
"text": "(Sun et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 140,
"end": 156,
"text": "Cui et al., 2019",
"ref_id": "BIBREF4"
},
{
"start": 157,
"end": 175,
"text": "Cui et al., , 2020",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we apply the state-of-the-art English GEC models to the CGED task. Our CGED system consists of three types of models. We propose the position-tagging model, which is a sequence tagging model with a BERT encoder, to concentrate on the error localization task. The output label consists of 8 types of tags and indicates the start and end of each error for the input sentence, but it will not tell us how to correct it in the case of S (word selection) and M (missing word) errors. The correction-tagging model (Malmi et al., 2019; Awasthi et al., 2019; Omelianchuk et al., 2020) concentrates on the error correction task, and the output label contains 8772 types of tags. The tags reveal the editing operations for each Chinese character, e.g. KEEP, DELETE, APPEND, and REPLACE. The APPEND tags (3788 in total) and REPLACE tags (4982 in total) cover most Chinese characters.",
"cite_spans": [
{
"start": 523,
"end": 543,
"text": "(Malmi et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 544,
"end": 565,
"text": "Awasthi et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 566,
"end": 591,
"text": "Omelianchuk et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The BERT-fused model (Zhu et al., 2020) is proposed for Neural Machine Translation (NMT) task and adaptively controls the interaction between representations from BERT and each layer of the Transformer (Vaswani et al., 2017) by using the attention mechanism. (Kaneko et al., 2020) transfers the BERT-fused model to the English GEC task and further advances it. Due to time limitations, we only follow the training settings in (Zhu et al., 2020) . Besides, we perform unsupervised data augmentation by introducing synthetic errors on a large amount of error-free corpora, then pair synthetic and original sentences to pre-train Transformers (Grundkiewicz et al., 2019) .",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "(Zhu et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 202,
"end": 224,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 259,
"end": 280,
"text": "(Kaneko et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 426,
"end": 444,
"text": "(Zhu et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 640,
"end": 667,
"text": "(Grundkiewicz et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows: Section 2 summarizes the recent developments in the field of CGED. Section 3 introduces the dataset we used to train the models, including human-annotated data and synthetic data. Section 4 is the overview of each component of our system, including BERTfused NMT, position-tagging model, correctiontagging model, and error annotation tool. Section 5 describes our training and ensemble process. Section 6 discusses the result of our models and Section 7 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Zhao et al. (2015) used a statistical machine translation method to the CGED task and examined corpus-augmentation and explored alternative translation models including syntax-based and hierarchical phrase-based models. Zheng et al. (2016) , and Liao et al. (2017) treat the CGED task as a sequence tagging problem to detect the grammatical errors. Li and Qi (2018) applied a policy gradient LSTM model to the CGED task. Fu et al. (2018b) built a CGED system based on a BiLSTM-CRF model and combined with rule-based templates to bring in grammatical knowledge. Hu et al. (2018) employed a sequence-to-sequence model and used pseudo data to pre-training the model. designed a system for CGED which is composed of a BiLSTM-CRF model, an NMT model, and a statistical machine translation model to detect and correct the grammatical errors. A similar system achieved a competitive result in NLPCC 2018 shared task. Fu et al. (2018a) also treated the CGED task as a translation problem and used character-based and sub-word based NMTs to correct the grammatical errors. and Ren et al. (2018) introduced the convolutional sequence-to-sequence model into the CGED task.",
"cite_spans": [
{
"start": 220,
"end": 239,
"text": "Zheng et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 246,
"end": 264,
"text": "Liao et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 349,
"end": 365,
"text": "Li and Qi (2018)",
"ref_id": "BIBREF13"
},
{
"start": 561,
"end": 577,
"text": "Hu et al. (2018)",
"ref_id": null
},
{
"start": 910,
"end": 927,
"text": "Fu et al. (2018a)",
"ref_id": "BIBREF6"
},
{
"start": 1068,
"end": 1085,
"text": "Ren et al. (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Training data The datasets of the NLPTEA 2014\u223c2018 & 2020 shared task of CGED are corpora composed of parallel sentences written by Chinese as a Foreign Language (CFL) learners and their corrections. The source sentences are selected from the essay section of the computer-based TOCFL (Test of Chinese as a Foreign Language) and written-based HSK (Pinyin of Hanyu Shuiping Kaoshi, Test of Chinese Level). Before 2016, there are only TOCFL data written in traditional Chinese. In the dataset of 2016, we have both TOCFL and HSK data. We use the opencc 1 package to convert the traditional Chinese to simplified Chinese for the TOCFL corpus. Since 2017, only HSK data are provided that are all written in simplified Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The grammatical errors were manually annotated by native Chinese speakers. There are four kinds of errors: R (redundant word), M (missing word), S (word selection error), and W (word ordering error). Each error type has a different proportion in the corpus and each sentence may contain several errors. For example, in the CGED 2020 training set, W/S/R/M accounted for 7%, 42%, 23%, 28% of the total errors respectively. There are 2909 manually annotated errors in 1641 sentences, and only 2 sentences are error-free.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We also collect several external datasets from NLPCC 2018 GEC 2 and other resources 3 . The NLPCC 2018 GEC data contains more than 700,000 sentences and each sentence may be correct or have one or more candidate corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We train BERT-fused NMT models in pre-training mode and no pre-training mode. For pre-training mode, the model is pre-trained on a large amount of synthetic data (Grundkiewicz et al., 2019) . The other models did not use the synthetic data.",
"cite_spans": [
{
"start": 162,
"end": 189,
"text": "(Grundkiewicz et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic data",
"sec_num": null
},
{
"text": "We first split each error-free sentence into words by a Chinese word segmentation tool 4 , and then randomly select several words for each sentence. The number of selected word is the product of a probability which is sampled from the normal distribution and the number of words in the sentence. For each selected word, one of the four operations including substitution, deletion, insertion, and transposition is performed with a probability of 0.5, 0.2, 0.2, 0.1, which simulates the proportions of S, M, R, W errors in the CGED data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic data",
"sec_num": null
},
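As a rough illustration of the sampling step described in the entry above, the sketch below (not the authors' code) picks which words of a segmented sentence to corrupt and which operation to apply; the operation probabilities follow the paper, while the normal-distribution parameters and function names are illustrative assumptions.

```python
# Illustrative sketch: choose corruption targets and operations for one sentence.
import random

OPS = ["substitute", "delete", "insert", "transpose"]
OP_PROBS = [0.5, 0.2, 0.2, 0.1]  # mimics the S, M, R, W error proportions in CGED

def plan_errors(words, mu=0.15, sigma=0.05, seed=None):
    """words: an already segmented sentence (list of strings).

    Returns (position, operation) pairs. The per-sentence error rate is drawn
    from a normal distribution as in the paper; mu/sigma here are assumed values.
    """
    rng = random.Random(seed)
    rate = max(0.0, rng.gauss(mu, sigma))
    n_errors = min(len(words), max(1, round(rate * len(words))))
    positions = rng.sample(range(len(words)), n_errors)
    return [(pos, rng.choices(OPS, weights=OP_PROBS, k=1)[0]) for pos in positions]

print(plan_errors(["我", "昨天", "去", "北京", "了", "。"], seed=0))
```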
{
"text": "For substitution, the selected word is replaced by a word that has a similar meaning, pronunciation,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic data",
"sec_num": null
},
{
"text": "Input sentence \u53ef\u662f\uff0c\u5728\u5927 \u962a\u4e0d\u51fa\u6885\u96e8\u3002 Position-tagging Models Correction- tagging Models BERT-NMT Models (7, 8, S) (7, 8, S) (7, 8, S) (7, 7, M) (8, 8, S) (4, 4, R) (4, 4, R) (4, 4, R) (11, 11, M, \u5b63) (11, 11, M, \u5b63) (11, 11, M, \u5b63) (7, 8, S, \u6ca1\u6709) (7, 8, S, \u6ca1\u6709) (7, 8, S, \u6ca1\u6709) (7, 8, S, \u6ca1\u6709) (11, 11, M, \u5b63\u8282) (11, 11, M, \u5b63\u8282)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic data",
"sec_num": null
},
{
"text": "Edit sets: (start, end, type) Figure 1 : A demonstration of our hybrid system using a real sentence from the CGED 2020 test set. Each edit format (start, end, type, correction) stands for an error and its start position, end position, type and correction. Here, all groups of models have an equal weight 1 and the threshold is set to 7. or shape. To simulate the confusion from similar meaning, we randomly choose a replacement from the following sources: (1) synonyms of the selected word 5 with a word similarity greater than 0.75; (2) a Chinese dictionary that we can search the word contain at least one character identical to the selected word; (3) a confusion dictionary consists of Japanese and Chinese word pairs that might be misused by Japanese learners. To mimic the confusion from similar pronunciation, we replace the selected word with a word that has the same pinyin. When introducing confusion from similar shapes, we define the similarity between two characters by their four-corner code 6 .",
"cite_spans": [
{
"start": 1005,
"end": 1006,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic data",
"sec_num": null
},
{
"text": "For deletion, we simply remove the selected word. For insertion, we add on a word randomly taken from a set after the selected word. The set consists of stop words 7 and redundant words from R errors in the past CGED dataset. For transpo-5 https://github.com/chatopera/Synonyms 6 http://code.web.idv.hk/misc/four.php? i=3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting of All Edits",
"sec_num": null
},
{
"text": "7 https://github.com/goto456/stopwords sition, we swap the selected word with the next word or with a random word in the sentence. We skip the named entities for substitution and deletion operations. After introducing the word-level error to each error-free sentence, we introduce character-level errors by similar methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting of All Edits",
"sec_num": null
},
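A companion sketch of the four word-level operations described above; `similar_word` and `FILLER_WORDS` are placeholders for the synonym/pinyin/four-corner confusion resources and the stop-word/redundant-word set, so this is an assumption-laden illustration rather than the authors' implementation.

```python
# Illustrative word-level corruption operations (placeholder resources).
import random

FILLER_WORDS = ["的", "了", "在", "是"]  # stands in for stop words / redundant R-error words

def corrupt(words, pos, op, similar_word=lambda w: w, rng=random):
    words = list(words)
    if op == "substitute":
        words[pos] = similar_word(words[pos])        # similar meaning / pinyin / shape
    elif op == "delete":
        del words[pos]
    elif op == "insert":
        words.insert(pos + 1, rng.choice(FILLER_WORDS))
    elif op == "transpose":
        other = pos + 1 if pos + 1 < len(words) else rng.randrange(len(words))
        words[pos], words[other] = words[other], words[pos]
    return words

print(corrupt(["突然", "刮", "起", "风", "来", "了", "。"], 1, "transpose"))
```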
{
"text": "The corpora we used to generate synthetic data are the wiki2019zh (9.64 million sentences), the news2016zh (51.4 million sentences), the web-text2019zh (1.06 million sentences) 8 and the So-gouCA (0.94 million sentences) 9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voting of All Edits",
"sec_num": null
},
{
"text": "Our system consists of a sequence labeling model concentrated on the error detection subtask, and two types of error correction models aimed at generating candidate corrections. 1: Summary of the three training sets we constructed to train the BERT-NMT models at different stages. The number after the multiplication sign stands for how many times the data was oversampled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "4"
},
{
"text": "The position-tagging model is a sequence tagging model aimed to locate grammatical errors. We use RoBERTa 10 as the model's encoder then fine-tune it during training. The output tags are generated by applying a softmax layer over the encoder's logits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-tagging Model",
"sec_num": "4.1"
},
{
"text": "Given a sequence of Chinese characters as input, the model predicts the label of each character. The output label consists of 8 types of tag, including O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-tagging Model",
"sec_num": "4.1"
},
{
"text": "(correct), B-S (begin of S), I-S (middle of S), B-W (begin of W), I-W (middle of W), B-M (begin of M), B-R (begin of R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-tagging Model",
"sec_num": "4.1"
},
{
"text": ", and I-R (middle of R). We extract the location and type of each error directly from the output labels. For S and M errors, the model can not give any candidate corrections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Position-tagging Model",
"sec_num": "4.1"
},
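For concreteness, a minimal sketch of such an 8-label tagger built with the Hugging Face transformers library; the checkpoint name (`hfl/chinese-roberta-wwm-ext-large`) and the generic token-classification head are assumptions standing in for the authors' implementation, and the classification head below is untrained.

```python
# Illustrative position-tagging sketch: an 8-label token classifier on a Chinese RoBERTa.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-S", "I-S", "B-W", "I-W", "B-M", "B-R", "I-R"]

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = AutoModelForTokenClassification.from_pretrained(
    "hfl/chinese-roberta-wwm-ext-large", num_labels=len(LABELS)
)

sentence = "可是，在大阪不出梅雨。"  # the example sentence from Figure 1
inputs = tokenizer(list(sentence), is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, 8)
pred = logits.argmax(-1)[0]              # per-token tag ids (random here: head is untrained)
print([LABELS[i] for i in pred[1:-1].tolist()])  # drop [CLS]/[SEP]
```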
{
"text": "The BERT-fused NMT model proposed in (Zhu et al., 2020 ) aims at the NMT task, we transfer the original work to the correction subtask. The BERTfused NMT model is made up of two modules: the NMT module and the BERT module. Both modules take erroneous sentences as input. We start with training a Transformer from scratch until it converges. Then, we use the encoder and decoder of this Transformer to initialize the encoder and decoder of the NMT module. The BERT module is identical to a ready-made pre-trained BERT model. The way to fuse the NMT module and the BERT module is to feed the representations from the BERT module (i.e. the output of the last layer of the BERT module) to each layer of the NMT module. Taking the NMT encoder as an example, the BERT-encoder attention is introduced into each NMT encoder layer and processes the representations from the BERT module. The original selfattention of each NMT encoder layer still processes the representations from the previous NMT encoder layer. The output of the BERT-encoder attention and the original self-attention are further processed by the encoder layer's original feedforward network. The NMT decoder works similarly by introducing BERT-decoder attention to each NMT decoder layer.",
"cite_spans": [
{
"start": 37,
"end": 54,
"text": "(Zhu et al., 2020",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "4.2"
},
{
"text": "The parameters of the BERT-encoder attention and BERT-decoder attention are randomly initialized. During the training of the BERT-fused NMT model, the parameters of the BERT module are fixed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "4.2"
},
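A simplified sketch of one BERT-fused encoder layer along these lines: the layer attends over its own states and, in parallel, over the frozen BERT output, then averages the two attention results before the feed-forward block. Drop-net, layer-norm placement, and other details of Zhu et al. (2020) are omitted, and all dimensions are placeholders.

```python
# Minimal BERT-fused encoder layer sketch (simplified; not the bert-nmt implementation).
import torch
import torch.nn as nn

class BertFusedEncoderLayer(nn.Module):
    def __init__(self, d_model=512, d_bert=768, nhead=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        # BERT-encoder attention: queries from the NMT layer, keys/values from BERT.
        self.bert_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True,
                                               kdim=d_bert, vdim=d_bert)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x, bert_out):
        a_self, _ = self.self_attn(x, x, x)
        a_bert, _ = self.bert_attn(x, bert_out, bert_out)
        x = self.norm1(x + 0.5 * (a_self + a_bert))   # average the two attention outputs
        return self.norm2(x + self.ffn(x))

layer = BertFusedEncoderLayer()
x = torch.randn(2, 20, 512)         # NMT encoder states
bert_out = torch.randn(2, 20, 768)  # frozen BERT representations (last layer)
print(layer(x, bert_out).shape)     # torch.Size([2, 20, 512])
```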
{
"text": "The correction-tagging model is a sequence tagging model 11 specific to the GEC task. The output labels consist of 8772 tags, which form a large edit space. We obtain corrections by iteratively feeding a sentence to the model, getting the edit operations of each character, then editing the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "To prepare the training data, we first convert the target sentence into a sequence of tags where each tag represents an edit operation on each source token. Take the following sentence pair as an example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "Source: \u7a81 \u7136 \u98ce \u8d77 \u6765 \u522e \u4e86 \u3002 Target: \u7a81 \u7136 \u522e \u8d77 \u98ce \u6765 \u4e86 \u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "We use the minimum edit distance algorithm to align the source tokens with the target tokens. For each mapping in alignment, we collect the edit steps from the source token to the target subsequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "\u7a81 KEEP \u7136 KEEP & APPEND \u522e & APPEND \u8d77 \u98ce KEEP \u8d77 DELETE \u6765 KEEP \u522e DELETE \u4e86 KEEP \u3002 KEEP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "Lastly, we leave only one edit for each source token, because in the training stage, each token can only have one label. In the case of the above example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "\u7a81 KEEP \u7136 APPEND \u522e \u98ce KEEP \u8d77 DELETE \u6765 KEEP \u522e DELETE \u4e86 KEEP \u3002 KEEP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
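A rough sketch of this tag conversion, using Python's difflib as a stand-in for the minimum-edit-distance alignment and keeping a single tag per source character; the tag naming is illustrative.

```python
# Illustrative conversion of a (source, target) pair into per-character edit tags.
from difflib import SequenceMatcher

def to_edit_tags(src, tgt):
    tags = ["KEEP"] * len(src)
    for op, i1, i2, j1, j2 in SequenceMatcher(None, src, tgt).get_opcodes():
        if op == "delete":
            for i in range(i1, i2):
                tags[i] = "DELETE"
        elif op == "replace":
            for k, i in enumerate(range(i1, i2)):
                rep = tgt[j1 + k] if j1 + k < j2 else ""
                tags[i] = f"REPLACE_{rep}" if rep else "DELETE"
        elif op == "insert":
            anchor = max(i1 - 1, 0)              # attach the insertion to the previous character
            tags[anchor] = f"APPEND_{tgt[j1:j2]}"
    return list(zip(src, tags))

for ch, tag in to_edit_tags("突然风起来刮了。", "突然刮起风来了。"):
    print(ch, tag)
```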
{
"text": "The correction-tagging model is a pre-trained BERT-like Transformer encoder stacked with two linear layers and softmax layers on the top.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
{
"text": "In the inference stage, we tag and edit the sentence iteratively to obtain a fully corrected sentence. In each iteration, we apply the edits according to the output labels on the input sentence and send the edited sentence to the next iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "4.3"
},
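The inference loop can be summarized as follows; `tag_fn` and `apply_fn` are hypothetical stand-ins for the model's tagging step and the edit-application step.

```python
# Sketch of iterative inference: tag, edit, and feed the result back in.
def iterative_correct(sentence, tag_fn, apply_fn, max_iter=5):
    for _ in range(max_iter):
        tags = tag_fn(sentence)               # one edit label per character
        if all(t == "KEEP" for t in tags):    # nothing left to edit
            break
        sentence = apply_fn(sentence, tags)   # apply the edits for the next iteration
    return sentence
```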
{
"text": "For the BERT-fused NMT and correction-tagging model, the final output is a corrected sentence. To match with the official submission format, we align the target sentence with the source sentence to locate the start and end of the error and classify error types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Classification",
"sec_num": "4.4"
},
{
"text": "In the field of GED, there is a widely used error annotation tool -errant (Bryant et al., 2017) , which automatically annotates error type information of parallel English sentences. However, there is no such tool in the CGED task. We developed a simple rule-based annotation tool to locate the error and classify the error type. Our tool first segment the source and target sentence into words using Jieba 12 , then align the source and target words based on the minimum edit distance algorithm. In each mapping, if the blocks of source and target words are not the same, our tool judges this mapping as a grammatical error and determines the position and type of this error.",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Classification",
"sec_num": "4.4"
},
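A rough approximation of such a tool (not the authors' implementation): segment both sentences with Jieba, align the word sequences, and map each differing block to a coarse error span and type. W errors and the finer rules of the real tool are not handled, and the corrected sentence in the usage example is reconstructed from the edits shown in Figure 1, so both are assumptions.

```python
# Illustrative rule-based annotation: word-align source/target and emit error spans.
import jieba
from difflib import SequenceMatcher

def annotate(src, tgt):
    s_words, t_words = list(jieba.cut(src)), list(jieba.cut(tgt))
    offsets, pos = [], 1                 # 1-based character offset of each source word
    for w in s_words:
        offsets.append(pos)
        pos += len(w)
    edits = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, s_words, t_words).get_opcodes():
        if op == "equal":
            continue
        start = offsets[i1] if i1 < len(s_words) else pos
        end = offsets[i2 - 1] + len(s_words[i2 - 1]) - 1 if i2 > i1 else start
        if op == "delete":
            edits.append((start, end, "R"))
        elif op == "insert":
            edits.append((start, start, "M", "".join(t_words[j1:j2])))
        else:  # replace
            edits.append((start, end, "S", "".join(t_words[j1:j2])))
    return edits

print(annotate("可是，在大阪不出梅雨。", "可是，在大阪没有梅雨季节。"))
```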
{
"text": "However, even if we have the golden corrected sentence, there exists some ambiguity when localizing and classifying the error. For example, in the CGED 2020 training set, given the following sentence pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Classification",
"sec_num": "4.4"
},
{
"text": "Source: \u9996\u5148\u901a\u8fc7\u5bf9\u8bdd\u6765\u77e5\u9053\u5b50\u5973\u7684 \u7231\u597d\u3001\u4ef7\u503c\u89c2\uff0c\u7136\u540e\u4e00\u8d77\u76f8 \u53d7\u62e5\u7740\u5171\u540c\u7684\u7231\u597d\u3002 Target: \u9996\u5148\u901a\u8fc7\u5bf9\u8bdd\u6765\u77e5\u9053\u5b50\u5973\u7684 \u7231\u597d\u3001\u4ef7\u503c\u89c2\uff0c\u7136\u540e\u4e00\u8d77\u62e5 \u6709\u5171\u540c\u7684\u7231\u597d\u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Classification",
"sec_num": "4.4"
},
{
"text": "The official result is an S error starts from the 24th character and ends at the 27th character (\"\u76f8\u53d7\u62e5 \u7740\") with a correction \"\u62e5\u6709\". But there may be many possible solutions that depend on the word segmentation. For example, if we split \"\u76f8\u53d7\u62e5 12 https://github.com/fxsjy/jieba \u7740\" into \"\u76f8\u53d7\" and \"\u62e5\u7740\"\uff0cthe result becomes an R error starts from the 24th character and ends at the 25th character and an S error starts from the 26th character and end the 27th character (\"\u62e5\u7740\") with a substitution \"\u62e5\u6709\". So, it is hard to locate and classify errors unambiguously due to different word segmentation rules. We tested our annotation tool on the CGED 2020 training data set, which are shown in Table 2 . Our error annotation tool loses some precision and recall at the detection, identification, and position subtasks when annotating the error information from parallel sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Error Classification",
"sec_num": "4.4"
},
{
"text": "We trained the position-tagging models with two different combinations of CGED data and used the CGED 2016 test set as the development set. For each data combination, we tried serval models with different parameter initialization and training settings. When using CGED 2016 (HSK)\u223c2018 & 2020 training set and 2017 test set as the training set, we get the best performance of the F1 score on detection and identification subtask on the CGED 2018 test set. When adding the TOCFL data from 2014 to 2016 to the training set, we get the best performance of the F1 score on the position subtask(see Table 3 ). Four position-tagging models (two models from each data combination) are used in ensemble modeling.",
"cite_spans": [],
"ref_spans": [
{
"start": 593,
"end": 600,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Position-tagging Model",
"sec_num": "5.1"
},
{
"text": "We prepared several datasets to train the BERTfused NMT models. The first dataset is named Pre-Training data (PT data) consisting of synthetic sentences from the wiki2019zh corpus and the news2016zh corpus. The second dataset is the Manually Annotated data (MA data) which is composed of the CGED 2016\u223c2018 training set, HSK, and NLPCC 2018 GEC data. We filtered out the errorfree sentences in HSK and NLPCC 2018 GEC dataset and oversampled the CGED data. The last dataset is the Augmented Manually-Annotated data (AMA data) consists of oversampled MA data and synthetic sentences from the text2019zh corpus and the SogouCA corpus. See details at Table 1. We trained BERT-fused NMT models in pretraining mode and non-pre-training mode. For non-pre-training mode, we trained the BERT-fused NMT in the following steps: (1) train a baseline Table 4 : The results of our correction models and the ensemble on correction top1 subtask on the CGED 2018/2020 test set. The first group shows the results of the correction-tagging model with various encoders. The second / third group shows the results of the BERT-fused NMT models in non-pre-trained / pre-trained mode. The asterisk after the model name indicates that the model participates in the final ensemble. The model BERT-fused (AMA) in the third group is not used in the ensemble stage due to the time limit of the competition, and the training was completed after the deadline. The original scores of the ensemble on the CGED 2020 test set are P = 0.2848, R = 0.1415, F1 = 0.1891. We recalculated scores after an update of the error annotation tool and got a slight improvement on the final performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 647,
"end": 655,
"text": "Table 1.",
"ref_id": null
},
{
"start": 838,
"end": 845,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "Transformer from scratch on MA data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "(2) train a BERT-fused model on MA data using the baseline Transformer trained in the previous step;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "(3) fine-tune the previous step's BERT-fused model on AMA data. For pre-training mode, we trained the model in the following steps: (1) pre-train a Transformer from scratch on PT data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "(2) fine-tune the previous step's pre-trained Transformer on AMA data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "(3) train a BERT-fused model using the finetuned Transformer from the previous step on MA data and AMA data respectively. In all the training steps above, we combined the CGED 2018 test set and the CGED 2020 training set as the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "We use the fairseq to train Transformers and the bert-nmt to train BERTfused models 13 . We use Transformer Base architecture to train all the Transformer models and reset the learning rate scheduler and optimizer parameters when training the fine-tuned Transformer and BERT-fused model. The parameters of the fine-tuned Transformer are used to initialize the encoder and decoder of the BERT-fused model. BERT-encoder attention and BERT-decoder attention are randomly initialized. We adopt the label smoothed cross-entropy as a loss function. The overall performance of each NMT model are listed in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 606,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT-fused NMT",
"sec_num": "5.2"
},
{
"text": "The training of the correction-tagging model is decomposed into two stages, which are inspired by Omelianchuk et al. (2020) . The first stage uses all training sets from CGED 2014\u223c2018 and NLPCC 2018 as the training set and the CGED 2020 training set as the development set. For NLPCC 2018 training set, we discard the sentence that is correct or has more than one correction. The second stage fine-tunes on 80% CGED 2020 training set and takes the other 20% as the development set.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "Omelianchuk et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
{
"text": "The difference between our training process and Omelianchuk et al. 2020is that we do not use synthetic data to pre-train the model. It will be investigated in future work that if a pre-training step on a large synthetic data set can improve the performance of the current model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
{
"text": "We fine-tune four models using the BERT (Devlin et al., 2019) , RoBERTa 14 , ELECTRA (Clark et al., 2020) 15 , and XLNet (Yang et al., 2019) 16 encoders. The learning rate for each model on the first stage is 2e-5, 2e-5, 4e-5, and 4e-5 respectively, and all 1e-5 on the second stage. In the first stage, we freeze the encoder's weights for the first epoch and set the learning rate to 1e-3.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 85,
"end": 105,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 121,
"end": 143,
"text": "(Yang et al., 2019) 16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
{
"text": "We adjust several hyperparameters after finetuning the models. The first is a threshold of the KEEP tag probability. If the KEEP tag probability is greater than the threshold, we will not change the source token. The other hyperparameters are the threshold of sentence-level minimum error probability and the number of iterations. These hyperparameters are tuned on the CGED 2018 test set to trade-off precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
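A hypothetical illustration of how such thresholds could gate the predicted edits; the exact semantics and tuned values used by the authors (and by GECToR) may differ.

```python
# Hypothetical gating of predicted edits by KEEP-probability and sentence-level thresholds.
import numpy as np

def filter_edits(probs, labels, keep_threshold=0.9, min_error_prob=0.3):
    """probs: (seq_len, num_labels) softmax output; labels: label vocabulary."""
    keep_id = labels.index("KEEP")
    if (1.0 - probs[:, keep_id]).max() < min_error_prob:   # sentence-level gate
        return ["KEEP"] * len(probs)
    edits = []
    for row in probs:
        if row[keep_id] > keep_threshold:                  # token-level gate
            edits.append("KEEP")
        else:
            edits.append(labels[int(row.argmax())])
    return edits

labels = ["KEEP", "DELETE", "APPEND_的"]
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.55, 0.05],
                  [0.92, 0.05, 0.03]])
print(filter_edits(probs, labels))   # ['KEEP', 'DELETE', 'KEEP']
```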
{
"text": "A simple ensemble of RoBERTa and BERT got an additional boost of the F1 score. We use BERT, RoBERTa, and their ensemble during the ensemble modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
{
"text": "Both the BERT-fused NMT models and correction-tagging models are character-based instead of word-based for two reasons. First, the Chinese word segmentation tools are usually trained on grammatical sentences and will generate unexpected word segmentation results when applied to erroneous sentences. Second, word-based models use a larger vocabulary dictionary and more data is needed to obtain well-trained models, which conflicts with the fact that CGED is obviously a low-resource task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correction-tagging Model",
"sec_num": "5.3"
},
{
"text": "We adopt a weighted voting strategy inspired by . The output of position-tagging models provides the position and type of each error but lack corrections for S and M errors. The output of BERT-fused NMT models and correction-tagging models are corrected sentences and are converted into the official submission format using our annotation tool in Section 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Modeling",
"sec_num": "5.4"
},
{
"text": "First, we omit the corrections for S and M errors temporarily and vote to determine the result of the position and type of all the errors. We accept an error proposal only if it gets the votes more than a threshold. A sentence is treated as correct if all its error proposals are not accepted. the corrections. For each accepted S and M error, we rank the candidate corrections from the BERTfused NMT models and correction-tagging models according to votes. We take the first three candidates as the final corrections. A demonstration of our ensemble strategy is showed in Figure 1 . Each group of models has different weights during voting. All the thresholds and weights are tuned on the CGED 2018 test set using grid search, aiming at obtaining the best F1 score in the correction top1 subtask. The official evaluation of our three submissions are described in Table 5 . Run 1 got 1st place in the correction top1 subtask and 2nd place in the correction top3 subtask. The difference between Run 1 and Run 2 is that the hyperparameter of n-best in BERT-fused NMT models is set to 1 and 8 respectively. For Run 2 (n-best is 8), each BERT-fused NMT model generates 8 candidate sentences and all take part in the voting. Run 3 tried a different ensemble modeling which mainly focused on improving recall and got the 2nd place at the detection subtask.",
"cite_spans": [],
"ref_spans": [
{
"start": 573,
"end": 581,
"text": "Figure 1",
"ref_id": null
},
{
"start": 864,
"end": 871,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ensemble Modeling",
"sec_num": "5.4"
},
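A simplified sketch of the weighted voting just described; group names, weights, and the threshold are placeholders rather than the tuned competition values, and the example edits echo Figure 1.

```python
# Illustrative weighted voting over edit proposals from the three model groups.
from collections import Counter, defaultdict

def ensemble(proposals, weights, threshold):
    """proposals: {group: [(start, end, type, correction-or-None), ...]}"""
    span_votes, corr_votes = Counter(), defaultdict(Counter)
    for group, edits in proposals.items():
        for start, end, etype, corr in edits:
            span_votes[(start, end, etype)] += weights[group]
            if corr is not None:
                corr_votes[(start, end, etype)][corr] += weights[group]
    accepted = []
    for span, votes in span_votes.items():
        if votes >= threshold:                       # accept only sufficiently supported edits
            top3 = [c for c, _ in corr_votes[span].most_common(3)]
            accepted.append((*span, top3))
    return accepted                                  # empty list => sentence treated as correct

weights = {"position": 1, "correction_tagging": 1, "bert_nmt": 1}
proposals = {
    "position": [(7, 8, "S", None), (4, 4, "R", None)],
    "correction_tagging": [(7, 8, "S", "没有"), (11, 11, "M", "季节")],
    "bert_nmt": [(7, 8, "S", "没有"), (11, 11, "M", "季")],
}
print(ensemble(proposals, weights, threshold=2))
```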
{
"text": "For the BERT-fused NMT models, the BERT-fused stage improves the F1 scores for both non-pretraining and pre-training mode (See Table 4 ). In the non-pre-training mode, fine-tuning on AMA data further improves the performance on the CGED 2020 test set. By comparing the Baseline Transformer at the non-pre-training mode with the Finetuned Transformer at the pre-training mode, we find a substantial improvement of the performance on both the CGED 2018 and 2020 test sets. This proves that the CGED task can benefit from pretraining on synthetic data. However, the best results of the non-pre-training mode surpass the pretraining mode unexpectedly after the BERT-fused stage. We will investigate the reason in the future work. (Kaneko et al., 2020) demonstrated that the GED task can help improve the performance of the GEC task. Due to time limitations, we did not try to combine the detection and correction processes in our system, which can be further improved in the future work.",
"cite_spans": [
{
"start": 726,
"end": 747,
"text": "(Kaneko et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In the ensemble modeling, we found that FPR (False Positive Rate) decreased as the threshold in the voting stage increased. Our submissions did not rank high in the FPR subtask, since we focused on the detection and correction rather than the FPR subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Compared to the methods proposed in the NLPTEA 2018 shared task of CGED, our system greatly improves the F1 score on correction top1 and correction top3 subtask on the CGED 2018 test set. This advance mainly comes from: (1) we not only fully exploit the Transformer model for the correction subtask, but also comprehensively incorporate the power of pre-trained BERT-based models into every subtask of the CGED task; (2) the low-resource problem in the GEC task restricts the performance of NMT models (Junczys-Dowmunt et al., 2018), and we address this by utilizing the power of pre-trained BERT models and synthesizing extensive artificial data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this work, we present our solutions to the NLPTEA 2020 shared task of CGED. Three kinds of models are used in our system: position-tagging models, BERT-fused NMT models and correctiontagging models. Our hybrid system achieved the second-highest F1 score in the detection subtask, the highest F1 score in the correction top1 subtask and the second-highest F1 score in the correction top3 subtask, which shows that the CGED task can benefit from the recent advances of pre-trained language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/BYVoid/OpenCC 2 http://tcci.ccf.org.cn/conference/ 2018/taskdata.php 3 https://github.com/shibing624/ pycorrector 4 https://github.com/fxsjy/jieba",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/brightmart/nlp_ chinese_corpus 9 http://www.sogou.com/labs/resource/ca. php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "RoBERTa-wwm-ext-large, from https://github. com/ymcui/Chinese-BERT-wwm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/grammarly/gector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/bert-nmt/bert-nmt, the pre-trained BERT from https://huggingface. co/bert-base-chinese14 BERT-wwm-ext and RoBERTa-wwm-ext-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parallel iterative edit models for local sequence transduction",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Awasthi",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Rasna",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sabyasachi",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Vihari",
"middle": [],
"last": "Piratla",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4259--4269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In 2019 Conference on Empirical Methods in Natu- ral Language Processing, pages 4259-4269.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic annotation and evaluation of error types for grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "793--805",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 793-805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Electra: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR 2020 : Eighth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre- training text encoders as discriminators rather than generators. In ICLR 2020 : Eighth International Conference on Learning Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Revisiting pretrained models for chinese natural language processing",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13922"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shi- jin Wang, and Guoping Hu. 2020. Revisiting pre- trained models for chinese natural language process- ing. arXiv preprint arXiv:2004.13922.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-training with whole word masking for chinese bert",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08101"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Youdao's winning solution to the nlpcc-2018 task 2 challenge: A neural machine translation approach to chinese grammatical error correction",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yitao",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2018,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "341--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Fu, Jin Huang, and Yitao Duan. 2018a. Youdao's winning solution to the nlpcc-2018 task 2 challenge: A neural machine translation approach to chinese grammatical error correction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 341-350.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Zhengqi",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Jiefu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dechuan",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, and Ting Liu. 2018b. Chinese grammatical error di- agnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement. In Proceedings of the 5th Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 52-59.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Ed- ucational Applications, pages 252-263.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ling@cass solution to the nlp-tea cged shared task 2018",
"authors": [],
"year": null,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "70--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling@cass solution to the nlp-tea cged shared task 2018. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Edu- cational Applications, pages 70-76.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Approaching neural grammatical error correction as a low-resource machine translation task",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Shubha",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "595--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Ap- proaching neural grammatical error correction as a low-resource machine translation task. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 595-606.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL 2020: 58th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4248--4254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. In ACL 2020: 58th annual meeting of the Association for Computational Linguistics, pages 4248-4254.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Chinese grammatical error diagnosis based on policy gradient lstm model",
"authors": [
{
"first": "Changliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changliang Li and Ji Qi. 2018. Chinese grammatical error diagnosis based on policy gradient lstm model. In Proceedings of the 5th Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 77-82.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A hybrid system for chinese grammatical error diagnosis and correction",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Junpei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zuyi",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Hengyou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "60--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Li, Junpei Zhou, Zuyi Bao, Hengyou Liu, Guang- wei Xu, and Linlin Li. 2018. A hybrid system for chinese grammatical error diagnosis and correction. In Proceedings of the 5th Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 60-69.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Chinese grammatical error correction based on convolutional sequence to sequence model",
"authors": [
{
"first": "Si",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Guirong",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yuanpeng",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Huifang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Guang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Zhiqing",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "72905--72913",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Si Li, Jianbo Zhao, Guirong Shi, Yuanpeng Tan, Huifang Xu, Guang Chen, Haibo Lan, and Zhiqing Lin. 2019. Chinese grammatical error correction based on convolutional sequence to sequence model. IEEE Access, 7:72905-72913.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ynu-hpcc at ijcnlp-2017 task 1: Chinese grammatical error diagnosis using a bidirectional lstm-crf model",
"authors": [
{
"first": "Quanlei",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jinnan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xuejie",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "73--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanlei Liao, Jin Wang, Jinnan Yang, and Xuejie Zhang. 2017. Ynu-hpcc at ijcnlp-2017 task 1: Chinese grammatical error diagnosis using a bi- directional lstm-crf model. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 73-77.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Encode, tag, realize: High-precision text editing",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Malmi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Mirylenka",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5053--5064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In 2019 Conference on Empirical Methods in Natural Lan- guage Processing, pages 5053-5064.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Gector -grammatical error correction: Tag, not rewrite",
"authors": [
{
"first": "Kostiantyn",
"middle": [],
"last": "Omelianchuk",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Atrasevych",
"suffix": ""
},
{
"first": "Artem",
"middle": [
"N"
],
"last": "Chernodub",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Skurzhanskyi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "163--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem N. Chernodub, and Oleksandr Skurzhanskyi. 2020. Gector -grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In NAACL-HLT 2019: Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 48-53.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A sequence to sequence learning for chinese grammatical error correction",
"authors": [
{
"first": "Hongkai",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Liner",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Endong",
"middle": [],
"last": "Xun",
"suffix": ""
}
],
"year": 2018,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "401--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongkai Ren, Liner Yang, and Endong Xun. 2018. A sequence to sequence learning for chinese grammat- ical error correction. In CCF International Confer- ence on Natural Language Processing and Chinese Computing, pages 401-410.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Alibaba at ijcnlp-2017 task 1: Embedding grammatical features into lstms for chinese grammatical error diagnosis task",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Pengjun",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Pengjun Xie, Jun Tao, Guangwei Xu, Linlin Li, and Si Luo. 2017. Alibaba at ijcnlp-2017 task 1: Embedding grammatical features into lstms for chinese grammatical error diagnosis task. In Pro- ceedings of the IJCNLP 2017, Shared Tasks, pages 41-46.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "NeurIPS 2019 : Thirtythird Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS 2019 : Thirty- third Conference on Neural Information Processing Systems, pages 5753-5763.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.00138"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical er- ror correction via pre-training a copy-augmented architecture with unlabeled data. arXiv preprint arXiv:1903.00138.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improving chinese grammatical error correction with corpus augmentation and hierarchical phrase-based statistical machine translation",
"authors": [
{
"first": "Yinchen",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "111--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinchen Zhao, Mamoru Komachi, and Hiroshi Ishikawa. 2015. Improving chinese grammatical er- ror correction with corpus augmentation and hier- archical phrase-based statistical machine translation. In Proceedings of the 2nd Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 111-116.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Chinese grammatical error diagnosis with long short-term memory networks",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "NLP-TEA@COLING",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Zheng, Wanxiang Che, Jiang Guo, and Ting Liu. 2016. Chinese grammatical error diagnosis with long short-term memory networks. In NLP- TEA@COLING, pages 49-56.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Chinese grammatical error correction using statistical and neural models",
"authors": [
{
"first": "Junpei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hengyou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zuyi",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "117--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junpei Zhou, Chen Li, Hengyou Liu, Zuyi Bao, Guang- wei Xu, and Linlin Li. 2018. Chinese grammatical error correction using statistical and neural models. In CCF International Conference on Natural Lan- guage Processing and Chinese Computing, pages 117-128.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Incorporating bert into neural machine translation",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wengang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR 2020 : Eighth International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating bert into neural machine translation. In ICLR 2020 : Eighth International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "The test results of the error annotation tool. Given an original and corrected sentence pair from CGED 2020 training set, the tool extracts the position and type of each error. We compare the output of the tool with the standard result and get the F1 scores of each error type.",
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"3\">Detection Identification Position</td></tr><tr><td colspan=\"2\">Data comb. 1 0.780</td><td>0.644</td><td>0.399</td></tr><tr><td colspan=\"2\">Data comb. 2 0.776</td><td>0.641</td><td>0.428</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "The best results of the position-tagging model on the CGED 2018 test set. The data comb. 1 is the model trained on CGED 2016 (HSK)\u223c2018 & 2020 training set and 2017 test set, the data comb. 2 is the model trained on more data which added TOCFL 2014\u223c2016 data. The former gets the best performance of the F1 score on detection and identification subtask and the latter gets the best performance on the position subtask.",
"num": null,
"content": "<table><tr><td>2018 test set</td><td>2020 test set</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "The overall F1 scores of our three submissions.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}