{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:45.425900Z"
},
"title": "Combining ResNet and Transformer for Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Shaolei",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "slwang9@iflytek.com"
},
{
"first": "Baoxin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "bxwang2@iflytek.com"
},
{
"first": "Jiefu",
"middle": [],
"last": "Gong",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jfgong@iflytek.com"
},
{
"first": "Zhongyuan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "slwang@ir.hit.edu.cn"
},
{
"first": "Xiao",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "xiaohu2@iflytek.com"
},
{
"first": "Xingyi",
"middle": [],
"last": "Duan",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "xyduan@iflytek.com"
},
{
"first": "Zizhuo",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "zzshen@iflytek.com"
},
{
"first": "Gang",
"middle": [],
"last": "Yue",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "gangyue@iflytek.com"
},
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {},
"email": "rjfu@iflytek.com"
},
{
"first": "Dayong",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "sjwang3@iflytek.com"
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "gphu@iflytek.com"
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "Joint Laboratory of HIT and iFLYTEK Research (HFL)",
"institution": "iFLYTEK Research",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "tliu@ir.hit.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper introduces our system at NLPTEA-2020 Task: Chinese Grammatical Error Diagnosis (CGED). CGED aims to diagnose four types of grammatical errors which are missing words (M), redundant words (R), bad word selection (S) and disordered words (W). The automatic CGED system contains two parts including error detection and error correction. For error detection, our system is built on the model of multi-layer bidirectional transformer encoder and ResNet is integrated into the encoder to improve the performance. We also explore stepwise ensemble selection from libraries of models to improve the performance of the single model. For error correction, we design two models to recommend corrections for S-type and M-type errors separately. In official evaluation, our system obtains the highest F1 scores at identification level and position level for error detection, and the secondhighest F1 score at correction level.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper introduces our system at NLPTEA-2020 Task: Chinese Grammatical Error Diagnosis (CGED). CGED aims to diagnose four types of grammatical errors which are missing words (M), redundant words (R), bad word selection (S) and disordered words (W). The automatic CGED system contains two parts including error detection and error correction. For error detection, our system is built on the model of multi-layer bidirectional transformer encoder and ResNet is integrated into the encoder to improve the performance. We also explore stepwise ensemble selection from libraries of models to improve the performance of the single model. For error correction, we design two models to recommend corrections for S-type and M-type errors separately. In official evaluation, our system obtains the highest F1 scores at identification level and position level for error detection, and the secondhighest F1 score at correction level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Chinese language is commonly regarded as one of the most complicated languages. Compared to English, Chinese has neither singular/plural change, nor the tense changes of the verb. In addition, word segmentation usually has to be processed before deeper analysis, since word boundaries are not explicitly given in Chinese. All these problems make Chinese learning challenging to new learners. In recent years, more and more people with different language and knowledge background have become interested in learning Chinese as a second language. It is necessary to develop an automatic Chinese Grammatical Error Diagnosis (CGED) tool to help to identify and correct grammatical errors written by these people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to promote the development of automatic grammatical error diagnosis in Chinese learning, the Natural Language Processing Techniques for Educational Applications (NLP-TEA) have taken CGED as one of the shared tasks since 2014. Many methods have been proposed to solve CGED task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we introduce our system at NLPTEA-2020 CGED task. For error detection, our system is built on the model of multi-layer bidirectional transformer encoder and ResNet is integrated into the encoder to improve the performance. We also explore stepwise ensemble selection from libraries of models to improve the performance of the single model. For error correction, we design two models to recommend corrections for S-type and M-type errors separately. More specifically, we use the RoBERTa and the n-gram language model for the S-type correction, and utilize a combination of pretrained masked language model and a statistical language model to generate possible correction results for M-type correction. In official evaluation, our system obtains the highest F1 scores at identification level and position level for error detection, and the second-highest F1 score at correction level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows: Section 2 briefly introduces the CGED shared task. Section 3 talks about our methodology. Section 4 shows the experiment result. Section 5 shows the related work. Finally, the conclusion and future work are drawn in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of NLPTEA CGED task is to indicate errors in the sentences written by Chinese Foreign Language learners. The sentences contain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Grammatical Error Diagnosis",
"sec_num": "2"
},
{
"text": "Original Sentence Correct Sentence Table 1 : Typical Error Examples, where \"M\" means type of missing word, \"R\" means type of redundant word, \"S\" means type of word selection and \"W\" means type of disordered words. 3 Methodology",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Type",
"sec_num": null
},
{
"text": "M \u6bcf\u4e2a\u57ce\u5e02\u7684\u8d85\u5e02\u80fd\u770b\u5230\u8fd9\u4e9b\u98df\u54c1\u3002 \u6bcf\u4e2a\u57ce\u5e02\u7684\u8d85\u5e02\u90fd \u90fd \u90fd\u80fd\u770b\u5230\u8fd9\u4e9b\u98df\u54c1\u3002 R \u6211\u548c\u5988\u5988\u662f \u662f \u662f\u4e0d\u50cf\u522b\u7684\u6bcd\u5973\u3002 \u6211\u548c\u5988\u5988\u4e0d\u50cf\u522b\u7684\u6bcd\u5973\u3002 S \u6700\u91cd\u8981\u7684\u662f\u505a \u505a \u505a\u5b69\u5b50\u60f3\u5b66\u7684\u73af\u5883\u3002 \u6700\u91cd\u8981\u7684\u662f\u521b \u521b \u521b\u9020 \u9020 \u9020\u5b69\u5b50\u60f3\u5b66\u7684\u73af\u5883\u3002 W \"\u9759\u97f3\u73af\u5883\"\u662f \u662f \u662f\u5bf9 \u5bf9 \u5bf9\u4eba \u4eba \u4eba\u4f53 \u4f53 \u4f53\u5e94 \u5e94 \u5e94\u8be5 \u8be5 \u8be5\u6709\u5371\u5bb3\u7684\u3002 \"\u9759\u97f3\u73af\u5883\"\u5e94 \u5e94 \u5e94\u8be5 \u8be5 \u8be5\u662f \u662f \u662f\u5bf9 \u5bf9 \u5bf9\u4eba \u4eba \u4eba\u4f53 \u4f53 \u4f53\u6709\u5371\u5bb3\u7684\u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Type",
"sec_num": null
},
{
"text": "We treat the error detection problem as a sequence tagging problem. Specifically, given a sentence x, we generate a corresponding label sequence y using the BIO encoding (Kim et al., 2004) . We then combine ResNet and transformer encoder to solve the tagging problem.",
"cite_spans": [
{
"start": 170,
"end": 188,
"text": "(Kim et al., 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Detection",
"sec_num": "3.1"
},
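The span-to-BIO conversion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `to_bio_labels` and the `(start, end, type)` span format are assumptions for the example.

```python
# Hypothetical sketch of BIO encoding for CGED error detection: each
# character receives a label such as B-R / I-R for a redundant-word span,
# and O for characters outside any error span.

def to_bio_labels(sentence, error_spans):
    """error_spans: list of (start, end, type) with end exclusive and
    type in {"M", "R", "S", "W"}."""
    labels = ["O"] * len(sentence)
    for start, end, etype in error_spans:
        labels[start] = "B-" + etype
        for i in range(start + 1, end):
            labels[i] = "I-" + etype
    return labels

# In "我和妈妈是不像别的母女。", the character "是" (index 4) is redundant (R).
print(to_bio_labels("我和妈妈是不像别的母女。", [(4, 5, "R")]))
```

The tagger then predicts one such label per character, so error detection reduces to standard sequence labeling.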
{
"text": "We use the multi-layer bidirectional transformer encoder (BERT) described in Vaswani et al. (2017) to encode the input sentence. As shown in Figure 1 (a), the model consists of three parts: an input embedding layer I, an encoder layer E and an output layer O. Given a sequence S = w 0 , ......, w N as input, the encoder is formulated as follows:",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer Encoder",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 0 i = W e w i + W p (1) h l i = transformer block(h l\u22121 i ) (2) y BERT i = softmax(W o h L i + b o )",
"eq_num": "(3)"
}
],
"section": "Transformer Encoder",
"sec_num": null
},
{
"text": "where w i is a current token, and N denotes the sequence length. Equation 1 thus creates an input embedding. Here, transformer block includes selfattention and fully connected layers, and outputs h l i . l is the number of the current layer, l \u2265 1. L is the total number of layers of BERT. Equation 3 denotes the output layer. W o is an output weight matrix, b o is a bias for the output layer, and y BERT i is a grammatical error detection prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Encoder",
"sec_num": null
},
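Equations 1-3 can be sketched in terms of array shapes. The sketch below is a shape-only toy, not BERT itself: the transformer blocks are replaced by a random linear map, and the dimensions (sequence length 8, hidden size 16, 9 BIO labels) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n_labels = 8, 16, 9   # sequence length, hidden size, label count (assumed)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Eq. 1: input embedding, standing in for W_e w_i + W_p
h = rng.normal(size=(N, d))

# Eq. 2: L stacked blocks (toy linear maps here, for shape only)
for _ in range(2):
    h = np.tanh(h @ rng.normal(size=(d, d)))

# Eq. 3: per-token label distribution over the BIO tag set
W_o, b_o = rng.normal(size=(d, n_labels)), np.zeros(n_labels)
y = softmax(h @ W_o + b_o)            # shape (N, n_labels)
print(y.shape, np.allclose(y.sum(axis=-1), 1.0))   # (8, 9) True
```

Each row of `y` is a probability distribution over tags for one token, matching the per-token softmax in Equation 3.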
{
"text": "Deep neural networks learn different representations for each layer. For example, Belinkov et al. (2017) demonstrated that in a machine translation task, the low layers of the network learn to represent the word structure, while higher layers are more focused on word meaning. For tasks that emphasize the grammatical nature such as Chinese grammatical error detection, information from the lower layers is considered to be important. In this work, we use the residual learning framework (He et al., 2016) to combine the information from word embedding with the information from deep layer. Given a sequence S = w 0 , ......, w N as input, Res-BERT is formulated as follows:",
"cite_spans": [
{
"start": 82,
"end": 104,
"text": "Belinkov et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 488,
"end": 505,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating ResNet",
"sec_num": null
},
{
"text": "h 0 i = W e w i + W p (4) h l i = transformer block(h l\u22121 i ) (5) R i = h L i \u2212 w i (6) H L i = concat(h L i , R i ) (7) y ResBERT n = softmax(W o H L i + b o ) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating ResNet",
"sec_num": null
},
{
"text": "Equation 6 denotes the residual learning framework, where the hidden output of h L i and the input embedding is used to approximate the residual functions. We then send the concatenation of h L i and R i to the output layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating ResNet",
"sec_num": null
},
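Equations 6-8 amount to subtracting the input embedding from the final hidden state and concatenating before the output projection. A minimal shape sketch, with all dimensions and arrays assumed for illustration:

```python
import numpy as np

# Sketch of the ResNet integration (Eqs. 6-8): R_i = h_i^L - w_i is
# concatenated with h_i^L, doubling the feature dimension fed to the
# output layer. Shapes are illustrative assumptions.
N, d, n_labels = 8, 16, 9
rng = np.random.default_rng(1)

w = rng.normal(size=(N, d))              # input embeddings
hL = rng.normal(size=(N, d))             # final-layer hidden states
R = hL - w                               # Eq. 6: residual
H = np.concatenate([hL, R], axis=-1)     # Eq. 7: shape (N, 2d)

W_o = rng.normal(size=(2 * d, n_labels))
logits = H @ W_o                         # Eq. 8 applies softmax to this
print(H.shape, logits.shape)             # (8, 32) (8, 9)
```

Note that the output weight matrix must have 2d input features once the residual is concatenated.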
{
"text": "We found that different random seeds and dropout values may result in different performances at the end of each training. It is straightforward to merge different model results to increase the performance. Rather than combine all the single models by weighted averaging, we use forward stepwise selection from the library of models (Caruana et al., 2004) to find a subset of models that yield excellent performance when averaged together. Library of models is generated using different random seeds and dropout values. The basic ensemble selection procedure is very simple:",
"cite_spans": [
{
"start": 332,
"end": 354,
"text": "(Caruana et al., 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stepwise Ensemble Selection from Libraries of Models",
"sec_num": null
},
{
"text": "1. Start with the empty ensemble.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stepwise Ensemble Selection from Libraries of Models",
"sec_num": null
},
{
"text": "2. Add to the ensemble the model in the library that maximizes the ensemble's performance to the Chinese grammatical error detection metric on validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stepwise Ensemble Selection from Libraries of Models",
"sec_num": null
},
{
"text": "Step 2 for a fixed number of iterations or until all the models have been used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "4. Return the ensemble from the nested set of ensembles that has maximum performance on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
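The four steps above can be sketched as a greedy loop. This is a minimal sketch after Caruana et al. (2004), not the paper's code: `library` mapping model names to predictions and `score` as a higher-is-better validation metric are assumptions for illustration.

```python
# Minimal sketch of forward stepwise ensemble selection with replacement.

def stepwise_select(library, score, n_steps):
    ensemble, best_subset, best_score = [], [], float("-inf")
    for _ in range(n_steps):
        # selection WITH replacement: the same model may be added again,
        # which effectively gives it more weight in the averaged ensemble
        name = max(library, key=lambda m: score(ensemble + [library[m]]))
        ensemble.append(library[name])
        s = score(ensemble)
        if s > best_score:              # step 4: keep the best nested ensemble
            best_subset, best_score = list(ensemble), s
    return best_subset, best_score

# Toy example: "predictions" are numbers and the metric rewards averages near 1.0.
lib = {"a": 0.5, "b": 1.5, "c": 2.0}
metric = lambda preds: -abs(sum(preds) / len(preds) - 1.0)
subset, best = stepwise_select(lib, metric, n_steps=4)
print(subset)   # → [0.5, 1.5]
```

Tracking the best nested ensemble (rather than returning the final one) is what implements step 4 of the procedure.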
{
"text": "The voting system when selecting the best model to add at each step is span-level and it works as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "1. Each single model that tags a span of error text counts as a vote for that span of error text (e.g., if the word \"\u662f\" in a given position, is tagged as an R-type by one single model, then it receives one vote). Note that only the spans of text that have been recognized as an error type by any of the single model are considered as candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "2. Each candidate span of error text is tagged as a true error if it collected a minimum number of votes, like 30% * number of subset models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
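The span-level voting in steps 1-2 can be sketched as a counter over spans. A minimal sketch, assuming spans are `(start, end, type)` tuples and a 30% vote threshold as in the text; the function and data are illustrative, not the paper's implementation.

```python
from collections import Counter

# Each model emits error spans; a span is kept only if it gathers at least
# threshold * (number of models) votes.
def vote_spans(model_outputs, threshold=0.3):
    votes = Counter(span for spans in model_outputs for span in set(spans))
    min_votes = threshold * len(model_outputs)
    return sorted(span for span, n in votes.items() if n >= min_votes)

outputs = [
    [(4, 5, "R")],                 # model 1 tags "是" as redundant
    [(4, 5, "R"), (7, 8, "S")],    # model 2 agrees, and adds an S span
    [], [], [],                    # three models find nothing
]
print(vote_spans(outputs))   # (4,5,"R") has 2/5 votes; (7,8,"S") only 1/5
```

With 5 models and a 30% threshold, a span needs at least 1.5 votes, so only the redundant-word span survives here.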
{
"text": "The simple forward model selection procedure presented is effective, but sometimes overfits to the validation set, reducing ensemble performance on test set. To reduce the overfitting on the validation set, we make three additions to this selection procedure as described by Caruana et al. (2004) :",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "Caruana et al. (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "Selection with Replacement. With model selection without replacement, performance improves as the best models are added to the ensemble, peaks, and then quickly declines. Selecting models with replacement greatly reduces this problem. Selection with replacement allows the models to be added to the ensemble multiple times. This allows selection to fine-tune ensembles by weighting models: models added to the ensemble multiple times receive more weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "Sorted Ensemble Initialization. The simple forward model selection procedure starts with the empty ensemble. Forward selection sometimes overfits early in selection when ensembles are small. To prevent overfitting, we sort the models in the library by their performance, and put the N best model in the ensemble before the procedure. We use N = 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
{
"text": "Bagged Ensemble Selection. As the number of models in a library increases, the chances of finding combinations of models that overfit the validation set increases. Bagging can minimize this problem. We reduce the number of models by drawing a random sample of models from the library and selecting from that sample. If a particular combination of M models overfits, the probability of those M models being in a random bag of models is less than (1 \u2212 p) M for p the fraction of models in the bag. We use p = 0.5, and bag ensemble selection 20 times to insure that the best models will have many opportunities to be selected. The final ensemble is the average of the 20 ensembles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "3."
},
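The bagging scheme can be sketched as follows. This is a hypothetical sketch: `select` stands in for the stepwise selection procedure run inside each bag, and the library contents are illustrative.

```python
import random

# Sketch of bagged ensemble selection: draw a random half of the library
# per bag, run selection inside the bag, and average the resulting
# ensembles (here we just collect them).
def bagged_selection(library, select, p=0.5, n_bags=20, seed=0):
    rng = random.Random(seed)
    names = sorted(library)
    bag_size = max(1, int(p * len(names)))
    ensembles = []
    for _ in range(n_bags):
        bag = {m: library[m] for m in rng.sample(names, bag_size)}
        ensembles.append(select(bag))
    return ensembles  # the final model averages these ensembles

lib = {f"m{i}": i for i in range(6)}
bags = bagged_selection(lib, select=lambda bag: sorted(bag))
print(len(bags))   # → 20
```

Because each bag only exposes a fraction p of the library, an overfitting combination of M specific models rarely appears together in any single bag.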
{
"text": "The systems are also required to recommend corrections for S-type and M-type errors. In this work, we design two different models to recommend corrections for S-type and M-type errors separately. We will describe them separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Correction",
"sec_num": "3.2"
},
{
"text": "For the S-type correction, we mainly use the RoBERTa and the n-gram language model. Firstly, we perform domain adaptation on the language model. We use CGED training sets from previous competitions to fine-tune RoBERTa-wwm, and combine the CGED data with news corpora to train a 5-gram language model. S-type correction includes single-character correction and multi-character correction. For the single-character correction, we consider the top 20 generated results of RoBERTa and 3,500 most frequent characters on L2 learner corpus as candidates. We score the candidates according to the prediction probability of RoBERTa and n-gram, visual similarity, and phonological similarity (Hong et al., 2019) . Afterward, we select the character with the highest score as the correction result. For the multi-character correction, we also select the top 20 characters generated by RoBERTa at each position. We put these characters together to form words and reserved those in the vocabulary as candidates. In addition to the four kinds of features at the single-character correction, we also consider Levenshtein distance between the error words and candidate words. ",
"cite_spans": [
{
"start": 683,
"end": 702,
"text": "(Hong et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S-type Correction",
"sec_num": null
},
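The candidate scoring for single-character S-type correction can be sketched as a weighted feature sum. All feature values, weights, and function names below are illustrative assumptions, not the paper's actual scores.

```python
# Hypothetical sketch: each candidate character is ranked by a weighted sum
# of masked-LM probability, n-gram probability, visual similarity, and
# phonological similarity; the top-scoring candidate is the correction.

def score_candidate(feats, weights):
    return sum(weights[k] * feats[k] for k in weights)

def best_correction(candidates, weights):
    return max(candidates, key=lambda c: score_candidate(candidates[c], weights))

weights = {"roberta": 0.4, "ngram": 0.3, "visual": 0.15, "phonetic": 0.15}
candidates = {
    "创": {"roberta": 0.82, "ngram": 0.70, "visual": 0.1, "phonetic": 0.2},
    "作": {"roberta": 0.35, "ngram": 0.40, "visual": 0.2, "phonetic": 0.3},
}
print(best_correction(candidates, weights))   # → 创
```

For multi-character corrections, a fifth feature (Levenshtein distance to the erroneous word) would simply be added to the same weighted sum.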
{
"text": "Specially, we consider the correction of M-type errors as a cloze task and utilize a combination of pretrained masked language model and a statistical language model to generate possible correction results. Given suspected missing positions, we divide the correction process of M-type errors into two steps, firstly offering possible corrections, then evaluating and picking the most reasonable ones. When using pretrained masked language model, We first predict the number of missing characters at the suspected M-type error position through a BERT-based sequence labeling model. Then we add the same number of [MASK] symbols as predicted to the sentence before the position. Afterward, we use BERT to predict the most likely character of each [MASK] symbol, which is considered as correction candidates. When using statistical language models, we prepared a Chinese high-frequency vocabulary of L2 learners, and supplement all possible Chinese words from this vocabulary to the suspected M-type error position, generating a series of correction candidates. To evaluate the probability of each candidate, we use them to construct modified sentences and calculate the perplexity of the original sentence and all modified sentences using a statistical language model pretrained on L2 learner corpus. If the perplexity of modified sentence is significantly lower than the perplexity of the original sentence, which is controlled by a manual threshold, we consider the candidate as a predicted correction result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "M-type Correction",
"sec_num": null
},
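The perplexity-based acceptance rule for M-type candidates can be sketched as follows. This is a minimal sketch: the `ppl` language model is replaced by a lookup stub, and the threshold ratio of 0.8 is an assumption standing in for the paper's manual threshold.

```python
# Sketch of the M-type decision rule: accept a candidate insertion only if
# the modified sentence's perplexity is lower than the original's by a
# threshold. `ppl` would be a pretrained LM scorer; here it is a stub.

def accept_candidates(original, candidates, ppl, ratio=0.8):
    base = ppl(original)
    return [c for c in candidates if ppl(c) < ratio * base]

# Toy perplexity stub: inserting "都" makes the sentence much more likely.
fake_ppl = {"每个城市的超市能看到这些食品。": 120.0,
            "每个城市的超市都能看到这些食品。": 60.0,
            "每个城市的超市又能看到这些食品。": 115.0}
kept = accept_candidates("每个城市的超市能看到这些食品。",
                         list(fake_ppl)[1:], fake_ppl.get)
print(kept)
```

Only the modification whose perplexity drops clearly below the original's is emitted as a correction; near-ties are rejected.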
{
"text": "Following the work of Fu et al. (2018) , We trained our single models using training units that contain both the erroneous and the corrected sentences from 2016 (HSK Track), 2017 and 2018 training data sets. CGED 2016 HSK track training set consists of 10,071 training units with a total of 24,797 grammatical errors, categorized as redundant (5,538 instances), missing (6,623), word selection (10,949) and word ordering (1,687). CGED 2017 training set consists of 10,449 training units Table 2 shows the overall data distribution in the training data. The sentences from 2017 testing data set are used for validation. It consists of 4,871 grammatical errors, categorized as redundant (1,060 instances), missing (1,269), word selection (2,156) and word ordering (386).",
"cite_spans": [
{
"start": 22,
"end": 38,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The evaluation method includes four levels: Detection level. Determine whether a sentence is correct or not. If there is an error, the sentence is incorrect. All error types will be regarded as incorrect. Identification level. This level could be considered as a multi-class categorization problem. The correction situation should be exactly the same as the gold standard for a given type of error. Position level. The system results should be perfectly identical with the quadruples of the gold standard. Correction level. Characters marked as S and M need to give correct candidates. The model recommends at most 3 correction at each error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric",
"sec_num": "4.2"
},
{
"text": "The following metrics are measured at detection, identification, position-level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric",
"sec_num": "4.2"
},
{
"text": "F P F P + T N (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FalsePositiveRate =",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Accuracy = T P + T N T P + F P + T N + F N",
"eq_num": "(10)"
}
],
"section": "FalsePositiveRate =",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Precision = T P T P + F P (11) Recall = T P T P + F N (12) F1 = 2 * Precision * Recall Precision + Recall",
"eq_num": "(13)"
}
],
"section": "FalsePositiveRate =",
"sec_num": null
},
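Equations 9-13 translate directly into code from the TP/FP/TN/FN counts. A minimal sketch with made-up counts for illustration:

```python
# Sketch of the detection metrics (Eqs. 9-13) computed from a confusion matrix.
def detection_metrics(tp, fp, tn, fn):
    fpr = fp / (fp + tn)                        # Eq. 9
    acc = (tp + tn) / (tp + fp + tn + fn)       # Eq. 10
    p = tp / (tp + fp)                          # Eq. 11
    r = tp / (tp + fn)                          # Eq. 12
    f1 = 2 * p * r / (p + r)                    # Eq. 13
    return {"fpr": fpr, "accuracy": acc, "precision": p, "recall": r, "f1": f1}

m = detection_metrics(tp=40, fp=10, tn=40, fn=10)
print(m)
```

The same formulas apply at each level; only the definition of a true positive changes (sentence-level, type-level, or exact position match).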
{
"text": "Since each team is allowed to submit three results, we run the stepwise ensemble selection for three times, according to the performance on detection level, identification level, position level separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FalsePositiveRate =",
"sec_num": null
},
{
"text": "We try different pre-trained model parameters as the transformer's initialization such as BERT (Devlin et al., 2018) , ELECTRA discriminator (Clark et al., 2020) and BERT-WWM (Cui et al., 2019) . We find that the models initialized with ELECTRA discriminator always achieve better performance. So we select ELECTRA discriminator as the transformer's initialization. More concretely, we use Chinese ELECTRA-Large discriminator model 1 with 1024 hidden units, 16 heads, 24 hidden layers, 324M parameters.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 141,
"end": 161,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 175,
"end": 193,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.3"
},
{
"text": "For other parameters, we use streams of 128 tokens, a mini-batch of size 64, learning rate of 2e-5 and epoch of 120. We use 16 different random seeds and 5 different dropout values for each random seed to train 80 single models for the stepwise ensemble selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.3"
},
{
"text": "As shown in Table 3 , we build five baseline systems including: (1) BERT means single model initialized with BERT (Devlin et al., 2018) BERT-WWM means single model initialized with BERT-WWM (Cui et al., 2019) ; (3) ELECTRA means single model initialized with ELECTRA discriminator (Clark et al., 2020) ; 4ResELECTRA means single model with ResNet unit added; (5) WA Ensemble means simple weighed averaging ensemble model. Table 3 shows the overall performances of our model on the 2017 test data. The ELECTRA single model achieves much better performance than both the BERT single model and the BERT-WWM single model. We conjecture that ELECTRA discriminator is trained without masked tokens, and this makes it more suitable for CGED task which is very sensitive to surrounding words. The ResE-LECTRA single model achieves more than 2 point improvements on position level over the baseline ELECTRA single model, which proves the effectiveness of integrating ResNet unit. The stepwise selection ensemble model achieves almost 10 point improvements on position level over the best ResE-LECTRA single model. Even compared with WA ensemble model, the stepwise selection ensemble model also achieves more than 4 point improvements. Table 4 shows the performances on error detection. Our system achieves the best F1 scores at the identification level and position level. Although we achieve the highest position-level F1 score of 0.4041 among all teams, there still has a wide gap for our system to solve the Chinese grammatical error diagnosis. Table 5 shows the performances on error correction. We achieve the second-highest correction top1 score. Since we only provide zero or one candidate word, our correction top1 score is the same as our correction top3 score.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 190,
"end": 208,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 281,
"end": 301,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 422,
"end": 429,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1227,
"end": 1234,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1540,
"end": 1547,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Validation Results",
"sec_num": "4.4"
},
{
"text": "The researchers used many different methods to study the English Grammatical Error Correction task and achieved good results (Ng et al., 2014) . Compared with English, the research time of Chinese grammatical error diagnosis system is short, the data sets and effective methods are lacking. Chen et al. (2013) still used n-gram as the main method, and added Web resources to improve detection performance. Lin and Chu (2015) established a scoring system using n-gram, and get better correction options. In recent years, Chinese grammatical error diagnosis has been cited as a shared task of NLPTEA CGED. Many methods are proposed to solve this task (Yu et al., 2014; Lee et al., 2015 Lee et al., , 2016 . Zheng et al. (2016) proposed a BiLSTM-CRF model based on character embedding on bigram embedding. Shiue et al. (2017) combined machine learning with traditional n-gram methods, using Bi-LSTM to detect the location of errors and adding additional linguistic information, POS, ngram. used Bi-LSTM to generate the probability of each characters, and used two strategies to decide whether a character is correct or not. Liao et al. (2017) used the LSTM-CRF model to detect dependencies between outputs to better detect error messages. added more linguistic information on LSTM-CRF model such as POS, n-gram, PMI score and dependency features. Their system achieved the best F1-scores in identification level and position level on CGED2017 task. Fu et al. (2018) added richer features on BiLSTM-CRF model such as word segmentation, Gaussian ePMI, combination of POS and PMI. They also adopted a probabilistic ensemble approach to improve system performance. Their system achieved the best F1-score in identification level and position level on CGED2018 task.",
"cite_spans": [
{
"start": 125,
"end": 142,
"text": "(Ng et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 649,
"end": 666,
"text": "(Yu et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 667,
"end": 683,
"text": "Lee et al., 2015",
"ref_id": "BIBREF11"
},
{
"start": 684,
"end": 702,
"text": "Lee et al., , 2016",
"ref_id": "BIBREF10"
},
{
"start": 705,
"end": 724,
"text": "Zheng et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 803,
"end": 822,
"text": "Shiue et al. (2017)",
"ref_id": "BIBREF17"
},
{
"start": 1121,
"end": 1139,
"text": "Liao et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 1446,
"end": 1462,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The paper describes our system on NLPTEA-2020 CGED task, which combines ResNet and BERT for Chinese Grammatical Error Diagnosis. We also design two different ensemble strategies to maximize the model's capability. At all six evaluating levels, we have the best F1 scores in identification level and position level, the second-highest F1 score in correction top1 level, the third-highest F1 score in detection level. In the future, we are planning to build a more powerful grammatical error diagnosis system with more training data and try to improve the system's ability by using the different cross-domain corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://github.com/ymcui/Chinese-ELECTRA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the organizers of CGED 2020 for their great work. We also thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Key R&D Program of China via grant 2018YFB1005100, and by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "What do neural machine translation models learn about morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.03471"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural ma- chine translation models learn about morphology? arXiv preprint arXiv:1704.03471.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ensemble selection from libraries of models",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Crew",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Ksikes",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the twentyfirst international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. 2004. Ensemble selection from libraries of models. In Proceedings of the twenty- first international conference on Machine learning, page 18.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A study of language modeling for Chinese spelling check",
"authors": [
{
"first": "Kuan-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hung-Shin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chung-Han",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "79--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuan-Yu Chen, Hung-Shin Lee, Chung-Han Lee, Hsin- Min Wang, and Hsin-Hsi Chen. 2013. A study of language modeling for Chinese spelling check. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 79-83, Nagoya, Japan. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-training with whole word masking for chinese bert",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08101"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Zhengqi",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Jiefu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dechuan",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, and Ting Liu. 2018. Chinese grammatical error di- agnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement. In Proceedings of the 5th Workshop on Natural Lan- guage Processing Techniques for Educational Appli- cations, pages 52-59.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Faspell: A fast, adaptable, simple, powerful chinese spell checker based on DAE-decoder paradigm",
"authors": [
{
"first": "Yuzhong",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Xianguo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Neng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junhui",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)",
"volume": "",
"issue": "",
"pages": "160--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. Faspell: A fast, adaptable, sim- ple, powerful chinese spell checker based on dae- decoder paradigm. In Proceedings of the 5th Work- shop on Noisy User-generated Text (W-NUT 2019), pages 160-169.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Introduction to the bio-entity recognition task at jnlpba",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Tateisi",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the international joint workshop on natural language processing in biomedicine and its applications",
"volume": "",
"issue": "",
"pages": "70--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the bio-entity recognition task at jnlpba. In Pro- ceedings of the international joint workshop on nat- ural language processing in biomedicine and its ap- plications, pages 70-75. Citeseer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the nlp-tea 2016 shared task for chinese grammatical error diagnosis",
"authors": [
{
"first": "Lung",
"middle": [
"Hao"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Liang",
"middle": [
"Chih"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Endong",
"middle": [],
"last": "Xun",
"suffix": ""
},
{
"first": "Li",
"middle": [
"Ping"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'16)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung Hao Lee, Gaoqi Rao, Liang Chih Yu, Endong Xun, and Li Ping Chang. 2016. Overview of the nlp-tea 2016 shared task for chinese grammatical er- ror diagnosis. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Ed- ucational Applications (NLPTEA'16).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Guest editoral: Special issue on chinese as a foreign language",
"authors": [
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung-Hao Lee, Liang-Chih Yu, and Li-Ping Chang. 2015. Guest editoral: Special issue on chinese as a foreign language. In International Journal of Com- putational Linguistics & Chinese Language Process- ing, Volume 20, Number 1, June 2015-Special Issue on Chinese as a Foreign Language.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "CVTE at IJCNLP-2017 task 1: Character checking system for Chinese grammatical error diagnosis task",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Suixue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guanyu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Tianyuan",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Li, Peng Wang, Suixue Wang, Guanyu Jiang, and Tianyuan You. 2017. CVTE at IJCNLP-2017 task 1: Character checking system for Chinese grammatical error diagnosis task. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 78-83, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "YNU-HPCC at IJCNLP-2017 task 1: Chinese grammatical error diagnosis using a bi-directional LSTM-CRF model",
"authors": [
{
"first": "Quanlei",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jinnan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xuejie",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "73--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanlei Liao, Jin Wang, Jinnan Yang, and Xuejie Zhang. 2017. YNU-HPCC at IJCNLP-2017 task 1: Chinese grammatical error diagnosis using a bi-directional LSTM-CRF model. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 73-77, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A study on Chinese spelling check using confusion sets and n-gram statistics",
"authors": [
{
"first": "Chuan-Jie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chu",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "20",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan-Jie Lin and Wei-Cheng Chu. 2015. A study on Chinese spelling check using confusion sets and n-gram statistics. In International Journal of Compu- tational Linguistics & Chinese Language Process- ing, Volume 20, Number 1, June 2015-Special Issue on Chinese as a Foreign Language.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The conll-2014 shared task on grammatical error correction",
"authors": [
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Siew",
"middle": [
"Mei"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Susanto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Detection of Chinese word usage errors for non-native Chinese learners with bidirectional LSTM",
"authors": [
{
"first": "Yow-Ting",
"middle": [],
"last": "Shiue",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "404--410",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2064"
]
},
"num": null,
"urls": [],
"raw_text": "Yow-Ting Shiue, Hen-Hsen Huang, and Hsin-Hsi Chen. 2017. Detection of Chinese word usage errors for non-native Chinese learners with bidirectional LSTM. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 404-410, Vancou- ver, Canada. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Alibaba at IJCNLP-2017 task 1: Embedding grammatical features into LSTMs for Chinese grammatical error diagnosis task",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Pengjun",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Guangwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Pengjun Xie, Jun Tao, Guangwei Xu, Linlin Li, and Luo Si. 2017. Alibaba at IJCNLP-2017 task 1: Embedding grammatical features into LSTMs for Chinese grammatical error diagnosis task. In Pro- ceedings of the IJCNLP 2017, Shared Tasks, pages 41-46, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Overview of grammatical error diagnosis for learning chinese as a foreign language",
"authors": [
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang-Chih Yu, Lung-Hao Lee, and Li-Ping Chang. 2014. Overview of grammatical error diagnosis for learning chinese as a foreign language. In Pro- ceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications, pages 42-47.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Chinese grammatical error diagnosis with long short-term memory networks",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016)",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Zheng, Wanxiang Che, Jiang Guo, and Ting Liu. 2016. Chinese grammatical error diagnosis with long short-term memory networks. In Proceed- ings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 49-56, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "Data statistics",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "Validation results using single models and ensemble methods. \"S Ensemble\" denotes the stepwise ensemble model.",
"num": null,
"content": "<table><tr><td>with a total of 26,448 grammatical errors, cate-</td></tr><tr><td>gorized as redundant (5,852 instances), missing</td></tr><tr><td>(7,010), word selection (11,591) and word ordering</td></tr><tr><td>(1,995). CGED 2018 training set consists of 1,067</td></tr><tr><td>grammatical errors, categorized as redundant (208</td></tr><tr><td>instances), missing (298), word selection (87) and</td></tr><tr><td>word ordering (474).</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "Error detection performance of the submitted runs on the official evaluation test set. The \"Best Team\" row records the best score among all participating teams for each task-specific evaluation metric.",
"num": null,
"content": "<table><tr><td>runs</td><td/><td>Correction Top1</td><td/><td/><td>Correction Top3</td><td/></tr><tr><td/><td>Precision</td><td>Recall</td><td>F1</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>1</td><td>0.246</td><td>0.1149</td><td>0.1567</td><td>0.246</td><td>0.1149</td><td>0.1567</td></tr><tr><td>2</td><td>0.2105</td><td>0.1540</td><td>0.1779</td><td>0.2105</td><td>0.1540</td><td>0.1779</td></tr><tr><td>3</td><td>0.2290</td><td>0.1575</td><td>0.1867</td><td>0.2290</td><td>0.1575</td><td>0.1867</td></tr><tr><td>Best Team</td><td>-</td><td>-</td><td>0.1891</td><td>-</td><td>-</td><td>0.1885</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "Error correction performance of the submitted runs on the official evaluation test set. The \"Best Team\" row records the best score among all participating teams for each task-specific evaluation metric.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}