{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:44.549805Z"
},
"title": "Integrating BERT and Score-based Feature Gates for Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Yongchang",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "National Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": ""
},
{
"first": "Liang",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "National Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "heliang@smail.nju.edu.cn"
},
{
"first": "Robert",
"middle": [],
"last": "Ridley",
"suffix": "",
"affiliation": {
"laboratory": "National Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "robertr@smail.nju.edu.cn"
},
{
"first": "Xinyu",
"middle": [],
"last": "Dai",
"suffix": "",
"affiliation": {
"laboratory": "National Key Laboratory for Novel Software Technology",
"institution": "Nanjing University",
"location": {
"postCode": "210023",
"settlement": "Nanjing",
"country": "China"
}
},
"email": "daixinyu@nju.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our proposed model for the Chinese Grammatical Error Diagnosis (CGED) task in NLPTEA2020. The goal of CGED is to use natural language processing techniques to automatically diagnose Chinese grammatical errors in sentences. To this end, we design and implement a CGED model named BERT with Score-feature Gates Error Diagnoser (BSGED), which is based on the BERT model, Bidirectional Long Short-Term Memory (BiLSTM) and conditional random field (CRF). In order to address the problem of losing partial-order relationships when embedding continuous feature items as with previous works, we propose a gating mechanism for integrating continuous feature items, which effectively retains the partial-order relationships between feature items. We perform LSTM processing on the encoding result of the BERT model, and further extract the sequence features. In the final test-set evaluation, we obtained the highest F1 score at the detection level and are among the top 3 F1 scores at the identification level.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our proposed model for the Chinese Grammatical Error Diagnosis (CGED) task in NLPTEA2020. The goal of CGED is to use natural language processing techniques to automatically diagnose Chinese grammatical errors in sentences. To this end, we design and implement a CGED model named BERT with Score-feature Gates Error Diagnoser (BSGED), which is based on the BERT model, Bidirectional Long Short-Term Memory (BiLSTM) and conditional random field (CRF). In order to address the problem of losing partial-order relationships when embedding continuous feature items as with previous works, we propose a gating mechanism for integrating continuous feature items, which effectively retains the partial-order relationships between feature items. We perform LSTM processing on the encoding result of the BERT model, and further extract the sequence features. In the final test-set evaluation, we obtained the highest F1 score at the detection level and are among the top 3 F1 scores at the identification level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, with the continuous development of China, more and more people have begun to learn Chinese as their second language. Due to the many complexities of Chinese, such as the differences in how tenses are formed in Chinese and English, many learners mistakenly write many Chinese sentences with grammatical errors when they first learn Chinese. Therefore, it is necessary to develop a CGED system, which can not only improve the learning efficiency of Chinese learners, but also serve many downstream tasks based on Chinese corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compared with English grammatical error diagnosis, Chinese grammatical error correction has received limited interest in the research community. English grammar error detection models began being developed as early as the 1980s, such as the early Writer's Workbench system (Macdonald NH, 1983) for detecting punctuation errors and style errors. Later, a series of tasks for English grammatical error detection and correction were proposed, such as CoNLL-2013 (Ng et al., 2013 and CoNLL-2014 (Ng et al., 2014 . With the release of the CGED task in the NLPTEA workshop in recent years, grammar diagnosis models for Chinese have also begun to be developed.",
"cite_spans": [
{
"start": 273,
"end": 293,
"text": "(Macdonald NH, 1983)",
"ref_id": null
},
{
"start": 448,
"end": 458,
"text": "CoNLL-2013",
"ref_id": null
},
{
"start": 459,
"end": 475,
"text": "(Ng et al., 2013",
"ref_id": "BIBREF9"
},
{
"start": 480,
"end": 490,
"text": "CoNLL-2014",
"ref_id": null
},
{
"start": 491,
"end": 507,
"text": "(Ng et al., 2014",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of the CGED task is to use natural language processing techniques to diagnose grammatical errors in Chinese sentences written by learners who use Chinese as a second language. The CGED task allows researchers to exchange experiences and ultimately promote the development of this shared task. It defines four types of Chinese grammatical errors, which are: redundant words (denoted as a capital \"R\"), missing words (\"M\"), word selection errors (\"S\"), and word ordering errors (\"W\"). The system developed for this task needs to identify the type and location of the errors in the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most recent solutions to the CGED shared task convert the problem into a sequence labeling problem and use a BiLSTM-CRF-based architecture as a basic framework to train the model. However, in previous work, feature engineering for the input sequence has become more and more complex. In addition, for some score-based features which exhibit partial-order relationships, such as the commonly used PMI Score features, previous works usually learn their embedding matrix after discretizing the scores. Through this process, the partial-order relationships between items will be lost, and the dimensionality of the feature embedding matrix will gradually increase as the granularity of the score discretization becomes finer, increasing the number of parameters needed to be trained. In response to the above problems, we design and implement BERT with Score-feature Gates Error Diagnoser (BSGED), and integrate score-based features through the use of a gating mechanism, which not only greatly reduces the workload of feature engineering, but also retains the original partial-order relationships for score-based features. Experiments verify that the BSGED model achieves excellent results with less feature engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel model BSGED for the CGED task, which achieves better results with fewer prior features and greatly reduces the workload of feature engineering. \u2022 We propose a gating mechanism for integrating score-based features, which not only preserves the partial-order relationships between feature items, but also greatly reduces the amount of model training parameters. \u2022 Through ablation experiments, we verify the effectiveness of adding a BiLSTM layer to further improve the model's ability to capture long-term dependencies of input sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Grammatical error diagnosis models appeared as early as the 1980s. Early grammatical error diagnosis models used rule-based methods to check and correct grammatical errors (Naber D, 2003) . However, because the design of matching rules requires rich linguistic knowledge, it has become more and more difficult as well as time-consuming to design rules for such models. In order to deal with more complex error types, a series of grammatical error detection and correction models based on machine translation technology have been proposed. Brockett et al. (2006) proposed a model that uses Statistical Machine Tranalation (SMT) techniques to detect and correct grammatical errors, which deal with mass/count noun confusions by translating the incorrect phrases as a whole. Felice et al. (2014) proposed a model for grammatical error diagnosis which combines rule-based and SMT systems in a pipeline. The model first uses rules to detect errors and generate candidates. After the candidates are roughly screened by the n-gram language model, they are sent to the SMT model for further screening. In the end, candidates will be further selected through language models and filtering rules. In order to solve the CGED2018 shared task, Hu et al. (2018) proposed a sequence-to-sequence network to model the problem, and used a semi-supervised method to generate pseudo-grammatical error data for training the model. Models based on machine translation require a large-scale training corpus to train the model. Inspired by the powerful capabilities of Neural Machine Translation (NMT) in grammatical error diagnosis, Zheng et al. (2016) regarded CGED as a sequence labeling problem, and used the powerful feature learning ability of an LSTM network to model the input sequence, and achieved better results. Yang et al. (2017) incorporated more grammatical features into the model based on the BiLSTM-CRF framework. Based on the LSTM-CRF error detection model, Li et al. (2018) combined three error correction models: a rule-based model, an NMT GEC model, and an SMT GEC model. The three GEC models aid the BiLSTM-CRF model in marking possible error locations during the detection phase. Fu et al. (2018) designed a model that incorporates richer features and added a template matcher and probability fusion mechanism.",
"cite_spans": [
{
"start": 172,
"end": 187,
"text": "(Naber D, 2003)",
"ref_id": null
},
{
"start": 539,
"end": 561,
"text": "Brockett et al. (2006)",
"ref_id": "BIBREF0"
},
{
"start": 772,
"end": 792,
"text": "Felice et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 1231,
"end": 1247,
"text": "Hu et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 1610,
"end": 1629,
"text": "Zheng et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 1800,
"end": 1818,
"text": "Yang et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 1953,
"end": 1969,
"text": "Li et al. (2018)",
"ref_id": null
},
{
"start": 2180,
"end": 2196,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similar to most previous models for CGED shared tasks, we treat the CGED problem as a sequence labeling problem, and use BiLSTM-CRF as the basic framework of BSGED. Specifically, for a given input sequence , which consists of a character sequence [ 1 , 2 , \u2026 , ], BSGED will output an equal-length sequence , which is composed of a label sequence [ 1 , 2 , \u2026 , ] composition. We adopt the BIO marking strategy, that is, for characters without grammatical errors, we mark them as 'O', and for a subsequence of grammatical errors, such as word selection errors, the initial characters will be marked as 'B-S', The remaining single characters will be marked as 'I-S'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "3.1"
},
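To make the sequence-labeling formulation above concrete, here is a minimal sketch (ours, not the authors' code) of how BIO labels could be generated for hypothetical error spans:

```python
# Minimal sketch of the BIO labeling scheme used for CGED sequence labeling.
# The sentence length and error span below are hypothetical examples.

def bio_labels(num_chars, error_spans):
    """error_spans: list of (start, end, err_type) with end exclusive."""
    labels = ["O"] * num_chars
    for start, end, err_type in error_spans:
        labels[start] = f"B-{err_type}"
        for i in range(start + 1, end):
            labels[i] = f"I-{err_type}"
    return labels

# A 6-character sentence with a word-selection error ("S") on characters 2-3.
print(bio_labels(6, [(2, 4, "S")]))
# ['O', 'O', 'B-S', 'I-S', 'O', 'O']
```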
{
"text": "Inspired by previous work, we use a BiLSTM network as the RNN unit to obtain the input character encoding sequence. The BiLSTM network has a strong ability to capture long-term dependencies of the input sequence. CRFs are widely used in a large number of natural language processing tasks, especially sequence-annotation tasks. With the addition of a CRF, the BiLSTM-CRF model can predict the input sequence more accurately. For example, the BiLSTM-CRF model can avoid incorrect sequence predictions beginning with \"I-X\". In terms of feature selection, we select some simple features, such as the POS tag sequence, POS Score, and PMI Score. Different from previous work, BSGED adopts the BERT model as the character encoder of the input sequence, and uses a novel fusion mechanism to incorporate score-based features. The model details are introduced in the next section. The framework of the base model adopted by BSGED is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 933,
"end": 941,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "3.1"
},
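The following is a minimal PyTorch-style sketch of a BERT -> BiLSTM -> CRF tagger of the kind described above. Module names, hyperparameters, and the use of the third-party `torchcrf` CRF layer are our assumptions, not the authors' implementation:

```python
# Sketch of a BERT -> BiLSTM -> CRF tagger (illustrative, not the authors' code).
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # third-party CRF layer (pip install pytorch-crf)

class BertBiLSTMCRF(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", num_labels=9, lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        # Last-layer BERT states are fed to the BiLSTM, as described for BSGED.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)
        emissions = self.classifier(lstm_out)
        mask = attention_mask.bool()
        if labels is not None:
            return -self.crf(emissions, labels, mask=mask)  # negative log-likelihood
        return self.crf.decode(emissions, mask=mask)  # best label sequences
```

With BIO labels over the four error types plus 'O', `num_labels=9`; the CRF transition constraints discourage predictions that begin with an "I-X" tag.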
{
"text": "Unlike previous models based on the BiLSTM-CRF architecture, BSGED does not utilize overly complex feature engineering, but uses the novel BERT model to obtain a token embedding representation of the input sequence. As a pre-trained language model, BERT has been successfully applied to many natural language understanding tasks, such as Chinese spelling error correction (Zhang et al. 2020) . Due to its powerful semantic extraction capabilities, we utilize BERT as a semantic feature extractor, converting characters into vector representations. In order to preserve the long-term dependencies on the input sequence better, BSGED takes the final layer output of the BERT model as part of the BiLSTM input, instead of concatenating it with the output results of the other features through the BiLSTM network. Experiments verify that this operation can further improve the overall performance of BSGED.",
"cite_spans": [
{
"start": 372,
"end": 391,
"text": "(Zhang et al. 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
{
"text": "1 https://github.com/HIT-SCIR/ltp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
{
"text": "We use prior knowledge to calculate the POS features of the input sequence and the PMI features between adjacent words. Specifically, we first use the LTP word segmentation tool 1 to perform word segmentation processing on the input sequence, and then perform part-of-speech tagging on the segmented sequence. This step also makes use of the LTP library. We also integrate location information into the POS tags. For example, for a Chinese sequence 1 2 3 1 2 1 , the segmented sequence should be 1 2 3 -1 2 -1 . Assuming the POS information of word A, word B, and word C are , , respectively, then the result of POS labeling should be --. For the score-based features, we use the news corpus provided by SogouCS 2 as a large corpus to obtain prior-knowledge statistics. Similar to the approach of Yang et al. (2017) , for the POS Score feature, we first count the discrete probability distribution of the POS feature of each word, and use the probability value as its POS Score. Similarly, we count the co-occurrence frequency between every two words on the same large corpus, and use the normalized co-occurrence frequency score as the PMI Score of two adjacent words. It should be noted that we also merge the character position information in the vocabulary into these feature items.",
"cite_spans": [
{
"start": 797,
"end": 815,
"text": "Yang et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
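As a rough illustration of the score-based feature statistics described above, the sketch below estimates a normalized co-occurrence score for adjacent words from a segmented corpus; the exact normalization used by the authors is not specified, so the choice here is an assumption:

```python
# Hypothetical sketch: normalized co-occurrence (PMI-style) scores for adjacent words
# estimated from a segmented corpus. The normalization choice is an assumption.
from collections import Counter

def pmi_like_scores(segmented_sentences):
    pair_counts = Counter()
    for words in segmented_sentences:
        pair_counts.update(zip(words, words[1:]))
    max_count = max(pair_counts.values())
    # Normalized co-occurrence frequency in [0, 1] for each adjacent word pair.
    return {pair: count / max_count for pair, count in pair_counts.items()}

corpus = [["\u6211", "\u559c\u6b22", "\u5b66\u4e60"], ["\u6211", "\u559c\u6b22", "\u4e2d\u6587"]]
print(pmi_like_scores(corpus)[("\u6211", "\u559c\u6b22")])  # 1.0
```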
{
"text": "We propose a novel fusion mechanism for scorebased features. For continuous score features, traditional models usually discretize them first, and then embed the discretized score into a low-dimensional space. However, this embedding method will lose the partial-order relationships between the scores. In addition, the size of the feature space will change with the discretization granularity and the original value range of the score, and the model will have more parameters to be trained. Our approach differs in that we retain the continuity of 2 https://www.sogou.com/labs/resource/cs.php score features and train a matrix \u2208 \u211d 2 * for each score-based feature, where is the preset embedding matrix dimension. For the -th character, the final score embedding vector is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= [ ] *",
"eq_num": "(1)"
}
],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
{
"text": "Where is the final embedding representation and is the position information of the character, = 0 for a \"B-Word\", and = 1 for an \"I-Word\". At this point, the role of score-based features is similar to an input gate (Hochreiter and Schmidhuber, 1997) . This strategy not only preserves the partial-order relationship of score features, but also greatly reduces the size of the parameter matrix. The composition structure of the features for the input sequence is shown in Figure 2 .",
"cite_spans": [
{
"start": 215,
"end": 249,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 471,
"end": 480,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT-Encoder and Gating mechanism",
"sec_num": "3.2"
},
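Below is a minimal sketch of the score-feature gate of Equation (1), assuming a learnable 2 x d matrix indexed by the B/I position tag and scaled by the continuous score; the class and variable names are ours, not the authors':

```python
# Sketch of the score-feature gate (Eq. 1): e_i = W[p_i] * s_i,
# where p_i is 0 for a B-word character and 1 for an I-word character,
# and s_i is the continuous score (e.g., PMI or POS score). Names are illustrative.
import torch
import torch.nn as nn

class ScoreGate(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(2, dim))  # W in R^{2 x d}

    def forward(self, positions, scores):
        # positions: LongTensor of 0/1 per character; scores: FloatTensor per character.
        return self.weight[positions] * scores.unsqueeze(-1)

gate = ScoreGate(dim=4)
pos = torch.tensor([0, 1, 0])         # B, I, B
s = torch.tensor([0.9, 0.1, 0.5])     # continuous scores keep their partial order
print(gate(pos, s).shape)             # torch.Size([3, 4])
```

Because the score enters as a multiplicative factor rather than a discretized index, the ordering of score values is preserved and the parameter matrix stays at 2 x d regardless of discretization granularity.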
{
"text": "Following our experiments, we find that for different initialization parameters, the prediction results of the model are highly variable. This observation is consistent with that of Yang et al. (2017) . In order to further improve the performance, we train multiple single models and use an ensemble mechanism to fuse them together. We adopt a simple and effective voting mechanism as our ensemble method, which improves the precision of the model while preserving the recall value.",
"cite_spans": [
{
"start": 182,
"end": 200,
"text": "Yang et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble mechanism",
"sec_num": "3.3"
},
{
"text": "In our final version, we use a total of four parameter groups, and we select 4 random factors for each group, so we finally merge 16 single models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble mechanism",
"sec_num": "3.3"
},
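The voting-based ensemble with a filtering threshold could look roughly like the following sketch; the data structures and threshold value are illustrative, not the authors' code:

```python
# Illustrative sketch of threshold-based voting over single-model predictions.
# Each prediction is a (start, end, error_type) tuple; the threshold is a free parameter.
from collections import Counter

def vote(predictions_per_model, threshold):
    votes = Counter()
    for preds in predictions_per_model:
        votes.update(set(preds))  # one vote per model per predicted error span
    return [span for span, count in votes.items() if count >= threshold]

models = [
    [(3, 4, "S")], [(3, 4, "S")], [(3, 4, "S"), (7, 7, "R")],
]
print(vote(models, threshold=2))  # [(3, 4, 'S')]
```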
{
"text": "The ensemble mechanism may produce conflicting prediction results. To solve this problem, we perform post-processing operations on the results of the ensemble model. We adopt some rule-base schemes, which integrate prior knowledge simply and effectively. The main processing methods are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "First, in cases when some single models predict a sentence to be correct and other single models predict it to be incorrect, the conflict is resolved by retaining the prediction 'incorrect'. The 'correct' label is only output when all models predict the sentence as 'correct'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "Second, we resolve 'incorrect' predictions with overlaps, such as when the following two predictions are output for sentence :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "{ < 1 , 1 , 1 > < 2 , 2 , 2 > (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "Where is the starting position of the prediction, is the ending position, and is the predicted error type. When Equation 3 is established, BSGED believes that the two prediction results overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{ 1 = 2 1 \u2208 ( 2 , 2 ) \u22c1 2 \u2208 ( 1 , 1 )",
"eq_num": "(3)"
}
],
"section": "Post-Processing",
"sec_num": "3.4"
},
{
"text": "When overlapping occurs, the model uses the segmentation boundary of the original sentence to filter. Suppose that the word segmentation boundary of sentence is = [ 1 , 2 , \u2026 , , \u2026 ] , that is, [ \u22121 : ] represents a word of the sentence. The model will retain the prediction result of < , , > which is more suitable for .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Processing",
"sec_num": "3.4"
},
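A sketch of the overlap test of Equation (3) and the word-boundary filtering described above; the boundary-alignment criterion used here for "more suitable" is our assumption:

```python
# Sketch of the post-processing overlap rule (Eq. 3): two predictions of the same
# error type conflict when one start position falls strictly inside the other span.
def overlaps(p1, p2):
    b1, e1, t1 = p1
    b2, e2, t2 = p2
    return t1 == t2 and (b2 < b1 < e2 or b1 < b2 < e1)

def aligns_with_boundaries(pred, boundaries):
    # Hypothetical "more suitable" criterion: both endpoints lie on word boundaries.
    b, e, _ = pred
    return b in boundaries and e in boundaries

p1, p2 = (2, 5, "S"), (4, 7, "S")
print(overlaps(p1, p2))                          # True
print(aligns_with_boundaries(p1, {0, 2, 5, 7}))  # True
```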
{
"text": "We use all the data from the CGED2015-CGED2018 training and test sets, as well as the training data from CGED2020. More specifically, our training data consists of the following parts: all data from the CGED2015-2016 training set and test set, all data from the CGED2017-2018 training set, 50% of the CGED2017-2018 test set, and 20% of the CGED2020 training set. The validation set consists of 50% of the CGED2017-2018 test set and 80% of the CGED2020 training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "Since the training set of CGED2020 has the same data as the test set from CGED2017-2018, in order to prevent data leakage, we de-duplicate the training set. Following de-duplication, the training set contains 43925 samples, and the validation set contains 3843 samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
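A minimal sketch of the de-duplication step, assuming each sample is keyed by its raw sentence text (the actual keying used by the authors is not stated):

```python
# Illustrative de-duplication: drop training samples that repeat within the training
# set or also appear in the validation set, to prevent data leakage.
def deduplicate(train_samples, valid_samples, key=lambda s: s["sentence"]):
    valid_keys = {key(s) for s in valid_samples}
    seen, deduped = set(), []
    for s in train_samples:
        k = key(s)
        if k in valid_keys or k in seen:
            continue
        seen.add(k)
        deduped.append(s)
    return deduped
```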
{
"text": "Since richer model initialization parameters result in more diverse predictions, thereby further improving the recall rate of the model after ensembling, we therefore choose two different BERT pretraining parameters to initialize our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Selection",
"sec_num": "4.2"
},
{
"text": "In addition to using the BERT-Base-Chinese version released by Google (Devlin et al., 2018) , we also use another version of Chinese BERT. In order to further promote the research and development of Chinese information processing, the HFL team released the Chinese pre-training model BERT-wwm (Cui et al., 2019) , which uses a Whole Word Masking technique, as well as models closely related to this technology: BERT-wwm-ext.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 293,
"end": 311,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Selection",
"sec_num": "4.2"
},
{
"text": "BERT-wwm is trained on Chinese Wikipedia (including simplified and traditional characters) and LTP is used to perform word segmentation before masking is carried out on all Chinese characters that make up the same word. Similar to other BERT-based models, it has 12 layers, 768 hidden size, and 12 self-attention heads.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Selection",
"sec_num": "4.2"
},
{
"text": "We select the model parameters through the validation set results, which mainly include the selection of the filtering threshold during model integration. Since BSGED uses a total of 16 single models for integration, we first simply set the max filtering threshold to 10, and explore the performance of the model after integration within this range. The performance of the model on the validation set is shown in Table 3 and Figure 3 . It should be noted that when selecting parameters, we only paid attention to the performance of the model at the position level.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 425,
"end": 433,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Validation Results",
"sec_num": "4.3"
},
{
"text": "It can be seen that as the filtering threshold increases, as does the precision, and the resulting predictions are more reliable; and as the filtering threshold decreases, the recall rate of the results will increase, enabling the model to be able to cover more actual errors. A low threshold will encourage retention of a large number of over-detection errors, while a high threshold will filter out partially correct results during post-processing. When the filter threshold is in the middle of the range, the model can achieve a higher F1 value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation Results",
"sec_num": "4.3"
},
{
"text": "Finally, we select three fusion models by selecting the parameter group with the highest precision Table 2 : The performance of the three submissions on the official evaluation test data set. The scores in bold represent the best scores we obtained among all the participating teams. The \"Best Team\" row records the best scores among all participating teams for each task-specific evaluating metric. rate, the parameter group with the highest recall rate, and the parameter group with the highest F1 value. The results on the validation set are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 2",
"ref_id": null
},
{
"start": 554,
"end": 561,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Validation Results",
"sec_num": "4.3"
},
{
"text": "The final version of BSGED obtained the top F1 score at the detection level and was among the top 3 F1 scores at the identification level on the test set released by CGED2020. In addition, BSGED obtained the highest precision rate and recall rate among all error diagnosis evaluation levels except the precision rate at the Detection Level. The specific results are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 382,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testing Results",
"sec_num": "4.4"
},
{
"text": "In order to evaluate the novel components of our approach, we conduct two sets of ablation experiments. The first set of ablation experiments focuses on the gating mechanism. We use 7 parameter groups from the 16 parameter groups from our original experiments. 7 single models use the traditional discretized embedding method for score-based features, and 7 single models used the novel gating approach we propose. The final comparison results are shown in Table 4 . The results show that the control group that uses the gating mechanism achieves higher F1 values at each level of error detection; the performance improvements at the detection level, identification level, and position level are 0.0173, 0.0371 and 0.0348 respectively, demonstrating the effectiveness of the gating mechanism.",
"cite_spans": [],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ablation Experiment",
"sec_num": "4.5"
},
{
"text": "The second set of ablation experiments shows the performance improvement brought about by the addition of the BiLSTM layer compared to the BERT-only model. Through connecting the encoded output of the BERT model to the BiLSTM layer, the model can further improve its ability to capture the long-term dependencies of the input sequence. We conduct an experimental comparison of the model with and without the connected BiLSTM layer. For this experiment, 4 single models use a BERT-CRF architecture, and 4 single models connect the BERT output to a BiLSTM (BSGED). The two single model groups use the same parameter settings. The comparison result is shown in Table 5 . As can be seen, the control group with the addition of the BiLSTM achieves F1 value improvements of 0.0159, 0.0316, and 0.0265 at the detection level, identification level, and position level, demonstrating the effectiveness of the BiLSTM layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 658,
"end": 665,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Experiment",
"sec_num": "4.5"
},
{
"text": "We found that different optimizations enable BSGED to solve different types of errors better. Among them, the gating mechanism directly retains the partial-order relationships of the original score-based features, so it has an improved ability for recognizing errors at character-or word-level. Some examples are shown in Table 6 . For example, in the first sentence in Table 5 : The influence of the BiLSTM layer in the BSGED on the model's results on the validation set. The value is the average of 4 models \"much love\") should be identified as being incorrect, with the correct phrase being \"\u6700\u7231\" (meaning \"favorite\"). Similarly, \"\u6cbf\" (meaning \"along\") and \"\u6ca1\" (meaning \"no\") are words formed with similar strokes. In the second example sentence, \" \u6cbf\" should be replaced with \"\u6ca1\", because the PMI score of \"\u6cbf\u6709\" is extremely low. In the third sentence, \"\u901f\u5ea6\u51cf\u901f\" is a word-level error, and the correct expression should be \"\u901f\u5ea6\u51cf\u6162\".",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 6",
"ref_id": "TABREF3"
},
{
"start": 370,
"end": 377,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.6"
},
{
"text": "The addition of the BiLSTM layer enables the model to better capture the long-term dependencies of the input sequence so that the model has stronger processing capabilities for error samples that rely on semantic understanding and long-term dependencies. Some examples are shown in Table 7 . For example, in the first sentence, \"\u5728\u2026\u53bb\" should be identified as an incorrect expression in Chinese, with the correct structure being \"\u5230\u2026\u53bb\". Identifying this error that requires judging long-term dependencies of the text. Finally, the phrase \"\u9996\u6b4c\" in the second example is a common collocation, but in the example, through the semantic understanding of the last clause, \"\u9996\" should be identified as an R type error.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.6"
},
{
"text": "This paper describes our novel BSGED model for the CGED2020 shared task, which uses only a few and simple features, greatly reducing the workload of feature engineering for CGED; a gating mechanism is also proposed to retain the original partialorder relationships between score-based features and at the same time reduce the amount of model training parameters. In addition, we connect the sequence encoding result of the BERT model to the BiLSTM layer, which improves the BSGED model's ability to capture long-term dependencies of the input sequence. BSGED achieves the best F1 score at the detection level and the third highest F1 score at the identification level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "In the future, we intend to use the MLM model to build a model that includes grammatical error correction, and apply the natural language generation capabilities of the pre-trained language model to the task of correcting Chinese grammatical errors. In addition, we will also integrate more explicit grammatical rules, which will also greatly help the improvement of model performance. Table 7 : Some examples of errors that the model with BiLSTM layer can identify but the baseline model cannot",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "Special thanks to the NLP-TEA workshop for sharing work, which allows us to discuss technologies and jointly promote the development of solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Correcting ESL errors using phrasal SMT techniques",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21stInternational Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Brockett, William B Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques. In Proceedings of the 21stInternational Conference on Computational Linguistics and the 44th annual meeting of the Association for Compu- tational Linguistics. Association for Computational Linguistics, 249-256 https://www.aclweb.org/an- thology/P06-1032/",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pre-training with whole word masking for chinese",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Che",
"middle": [
"W"
],
"last": "Liu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08101"
]
},
"num": null,
"urls": [],
"raw_text": "Cui Y, Che W, Liu T, et al. Pre-training with whole word masking for chinese bert[J]. arXiv preprint arXiv:1906.08101, 2019.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
],
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language un- derstanding[J]. arXiv preprint arXiv:1810.04805, 2018. http://dx.doi.org/10.18653/v1/N19-1423",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Grammatical error correction using hybrid systems and type filtering[C]. Association for Computational Linguistics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "\u00d8 E",
"middle": [],
"last": "Andersen",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1702"
]
},
"num": null,
"urls": [],
"raw_text": "Felice M, Yuan Z, Andersen \u00d8 E, et al. Grammatical error correction using hybrid systems and type fil- tering[C]. Association for Computational Linguis- tics, 2014. http://dx.doi.org/10.3115/v1/W14-1702",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gong",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fu R, Pei Z, Gong J, et al. Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble en- hancement[C]//Proceedings of the 5th Workshop on Natural Language Processing Techniques for Edu- cational Applications. 2018: 52-59.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "J]. Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural computation, 1997, 9(8): 1735- 1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ling@ CASS Solution to the NLP-TEA CGED Shared Task",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "70--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu Q, Zhang Y, Liu F, et al. Ling@ CASS Solution to the NLP-TEA CGED Shared Task 2018[C]//Pro- ceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applica- tions. 2018: 70-76.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The conll2013 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The conll2013 shared task on grammatical error correc- tion. https://www.aclweb.org/anthology/W13-3601/",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The conll-2014 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1701"
]
},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Chris- topher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14. http://dx.doi.org/10.3115/v1/W14-1701",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A hybrid system for Chinese grammatical error diagnosis and correction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li C, Zhou J, Bao Z, et al. A hybrid system for Chinese grammatical error diagnosis and correction",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "//Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "60--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "//Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications. 2018: 60-69.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Human factors and behavioral science: The UNIX\u2122 Writer's Workbench software: Rationale and design",
"authors": [
{
"first": "N",
"middle": [],
"last": "Macdonald",
"suffix": ""
}
],
"year": 1983,
"venue": "Bell System Technical Journal",
"volume": "62",
"issue": "6",
"pages": "1891--1908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Macdonald N H. Human factors and behavioral sci- ence: The UNIX\u2122 Writer's Workbench software: Rationale and design[J]. Bell System Technical Journal, 1983, 62(6): 1891-1908.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A rule-based style and grammar checker",
"authors": [
{
"first": "D",
"middle": [],
"last": "Naber",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naber D. A rule-based style and grammar checker[J].",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Alibaba at IJCNLP-2017 task 1: Embedding grammatical features into LSTMs for Chinese grammatical error diagnosis task",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Y, Xie P, Tao J, et al. Alibaba at IJCNLP-2017 task 1: Embedding grammatical features into LSTMs for Chinese grammatical error diagnosis task[C]//Proceedings of the IJCNLP 2017, Shared Tasks. 2017: 41-46.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Spelling Error Correction with Soft-Masked BERT",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.82"
],
"arXiv": [
"arXiv:2005.07421"
]
},
"num": null,
"urls": [],
"raw_text": "Zhang S, Huang H, Liu J, et al. Spelling Error Correc- tion with Soft-Masked BERT[J]. arXiv preprint arXiv:2005.07421, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.82",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chinese grammatical error diagnosis with long short-term memory networks",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Che",
"middle": [
"W"
],
"last": "Guo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016)",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng B, Che W, Guo J, et al. Chinese grammatical error diagnosis with long short-term memory net- works[C]//Proceedings of the 3rd Workshop on Nat- ural Language Processing Techniques for Educa- tional Applications (NLPTEA2016). 2016: 49-56. https://www.aclweb.org/anthology/W16-4907/",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "The base model of the BiLSTM-CRF framework used by BSGED Figure 2: Schematic diagram of the features used in BSGED",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "The influence of filtering threshold on precision, recall and F1 value.",
"num": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Detection Level</td><td/><td colspan=\"3\">Identification Level</td><td colspan=\"2\">Position Level</td></tr><tr><td/><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td></tr><tr><td>Run #1</td><td colspan=\"9\">0.8565 0.9757 0.9122 0.5571 0.8432 0.6709 0.2097 0.4648 0.2890</td></tr><tr><td>Run #1</td><td colspan=\"8\">0.9303 0.8478 0.8872 0.7018 0.5779 0.6339 0.4008 0.288</td><td>0.3351</td></tr><tr><td>Run #1</td><td colspan=\"9\">0.9739 0.5513 0.7041 0.7939 0.2975 0.4328 0.5757 0.1519 0.2404</td></tr><tr><td>Best Team</td><td colspan=\"9\">0.9875 0.9757 0.9122 0.7939 0.8432 0.6736 0.5757 0.4648 0.4041</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Performance of the BSGED on the validation set with different filtering thresholds"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "The influence of filtering threshold on the performance of the ensemble model"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Single</td><td/><td colspan=\"3\">Avg. Detection Level</td><td colspan=\"6\">Avg. Identification Level Avg. Position Level</td></tr><tr><td>Models Num</td><td>Type</td><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td></tr><tr><td>7</td><td>embed gating</td><td colspan=\"9\">0.8416 0.694 0.8264 0.7342 0.7772 0.6468 0.4958 0.5609 0.4164 0.2703 0.3275 0.7599 0.6609 0.4344 0.5238 0.4204 0.2249 0.2927</td></tr></table>",
"html": null,
"type_str": "table",
"text": "\"\u591a\u7231\" (meaning"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>Single</td><td>Type</td><td colspan=\"3\">Avg. Detection Level</td><td colspan=\"6\">Avg. Identification Level Avg. Position Level</td></tr><tr><td>Models Num</td><td/><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td><td>Pre</td><td>Rec</td><td>F1</td></tr><tr><td/><td>BERT</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td>-CRF</td><td colspan=\"9\">0.8298 0.7171 0.7691 0.6487 0.4575 0.5361 0.4093 0.2380 0.3006</td></tr><tr><td/><td colspan=\"10\">BSGED 0.8174 0.7556 0.7850 0.6344 0.5141 0.5677 0.4010 0.2765 0.3271</td></tr></table>",
"html": null,
"type_str": "table",
"text": "The influence of the gating mechanism on the model's results on the validation set. The value is the average of 7 models."
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>Original Sentence</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Some examples of errors that the gating mechanism can identify but the baseline model cannot"
}
}
}
}