{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:43.242800Z"
},
"title": "Non-Autoregressive Grammatical Error Correction Toward a Writing Support System",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Homma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {}
},
"email": "homma-hiroki@ed.tmu.ac.jp"
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {}
},
"email": "komachi@ed.tmu.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "There are several problems in applying grammatical error correction (GEC) to a writing support system. One of them is the handling of sentences in the middle of the input. Till date, the performance of GEC for incomplete sentences is not well-known. Hence, we analyze the performance of each model for incomplete sentences. Another problem is the correction speed. When the speed is slow, the usability of the system is limited, and the user experience is degraded. Therefore, in this study, we also focus on the non-autoregressive (NAR) model, which is a widely studied fast decoding method. We perform GEC in Japanese with traditional autoregressive and recent NAR models and analyze their accuracy and speed.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "There are several problems in applying grammatical error correction (GEC) to a writing support system. One of them is the handling of sentences in the middle of the input. Till date, the performance of GEC for incomplete sentences is not well-known. Hence, we analyze the performance of each model for incomplete sentences. Another problem is the correction speed. When the speed is slow, the usability of the system is limited, and the user experience is degraded. Therefore, in this study, we also focus on the non-autoregressive (NAR) model, which is a widely studied fast decoding method. We perform GEC in Japanese with traditional autoregressive and recent NAR models and analyze their accuracy and speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grammatical error correction (GEC) is a writing support method for language learners. In recent years, neural GEC has been actively researched owing to its ability to produce fluent text. For example, in Kiyono et al. (2019) , state-of-the-art correction accuracy was achieved by using a Transformer (Vaswani et al., 2017) , which is a powerful neural machine translation (NMT) model. Because the neural model can see the entire sequence, it can correct errors with long-range dependencies; these errors cannot be corrected by a statistical method that uses n-grams.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "Kiyono et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 300,
"end": 322,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, considering the application of GEC in a writing support system, we must consider how to handle incomplete sentences. It is easy to present the GEC result when the user finishes writing a sentence. However, in case of an incomplete sentence, the user will not know how to fix the sentence while writing it. If the system can perform GEC correctly for incomplete sentences, the results can be presented to the user. In most previ-ous studies, complete sentences have been evaluated, and the performance of GEC for incomplete sentences has not been researched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, there is a problem that inference speed is slow in a conventional autoregressive (AR) decoder of a sequence-to-sequence model. Considering the application of GEC in a writing support system, a slower inference speed would restrict its utility or lower the usability of the model. In Gu et al. (2018) , a non-autoregressive (NAR) decoder that speeds up inference time by outputting all tokens simultaneously was proposed. Following the success of NAR models, in Gu et al. (2019) , Levenshtein Transformer, an NAR NMT model that iteratively deletes and inserts inputs, was proposed. Its usefulness was verified in machine translation and document summarization tasks.",
"cite_spans": [
{
"start": 296,
"end": 312,
"text": "Gu et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 474,
"end": 490,
"text": "Gu et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, fast GEC methods with sequence tagging using an NAR model have been proposed. In Awasthi et al. (2019) , GEC was regarded as a local sequence conversion task, and high-speed GEC was achieved by using an NAR model that iteratively adapted editing tags in parallel. In Omelianchuk et al. (2020) , NAR GEC was performed by repetitive tagging of editing operations on each token of an input sentence, and higher correction accuracy and faster correction speed than in previous studies were achieved. However, these methods exhibited good performance by narrowing down the target language to English and preparing the editing operations as tags using language knowledge in advance.",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "Awasthi et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 277,
"end": 302,
"text": "Omelianchuk et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we focus on the NAR model as a method for high-speed GEC. We perform GEC in Japanese using the NAR model that does not need to prepare editing operations in advance. We analyze the proposed method considering its application to writing support systems. In particular, we analyze the relationship between the correction accuracy and the inference speed, focusing on incomplete sentences, and evaluate the impact of hyperparameters on NAR models. The contributions of this study can be summarized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We evaluate the performance of NAR and AR models for incomplete sentences in terms of accuracy and speed, aiming for the construction of a writing support system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that the Levenshtein Transformer that performs one-time iterative refinements can achieve fast and stable GEC by reducing the worst inference time by 6.0 seconds and the average inference time by 0.3 seconds compared with the method based on convolutional neural networks (CNNs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Using the NAR model for Japanese GEC, we find that it is better to present the GEC result when the number of input words is six or more because the accuracy is significantly reduced when the number is less than five.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AR NMT is a standard decoding method in the encoder-decoder model (Kalchbrenner and Blunsom, 2013) for sequence-to-sequence learning. This method uses a recurrent language model (Mikolov et al., 2010) during inference. Given an original sentence, X = {x 1 ,. . ., x T \u2032 }, and an objective sentence, Y = {y 1 , . . . , y T }, an AR NMT model calculates the target sentence as",
"cite_spans": [
{
"start": 66,
"end": 98,
"text": "(Kalchbrenner and Blunsom, 2013)",
"ref_id": "BIBREF7"
},
{
"start": 178,
"end": 200,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AR NMT",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(Y |X; \u03b8) = T +1 \u220f t=1 p(y t |y 0:t\u22121 , x 1:T \u2032 ; \u03b8),",
"eq_num": "(1)"
}
],
"section": "AR NMT",
"sec_num": "2.1"
},
{
"text": "where y 0 and y T +1 are special tokens representing the beginning and end of the sentence, respectively, and \u03b8 is the model's parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AR NMT",
"sec_num": "2.1"
},
{
"text": "NAR NMT (Gu et al., 2018 ) is a decoding method that generates each token independently and simultaneously. This method is attracting attention as a method to increase the speed of decoding.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "(Gu et al., 2018",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAR NMT",
"sec_num": "2.2"
},
{
"text": "In Gu et al. (2018) , the concept of fertility, which predicts how many words on the target side correspond to each word in the source side, was introduced. The decoding is performed as follows:",
"cite_spans": [
{
"start": 3,
"end": 19,
"text": "Gu et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAR NMT",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(Y |X; \u03b8) = \u2211 f 1 ,...,f T \u2032 \u2208F ( T \u2032 \u220f t \u2032 =1 p F (f t \u2032 |x 1:T \u2032 ; \u03b8)\u2022 T \u220f t=1 p(y t |x 1 {f 1 }, . . . , x T \u2032 {f T \u2032 }; \u03b8) ) ,",
"eq_num": "(2)"
}
],
"section": "NAR NMT",
"sec_num": "2.2"
},
{
"text": "where F is the set of all fertility sequences that sum into the length of Y , and x{f } represents token x repeated f times. As described earlier, it is necessary to predict the target sentence length in the NAR decoding method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAR NMT",
"sec_num": "2.2"
},
{
"text": "Furthermore, NAR NMT involves a problem named the multimodality problem (Gu et al., 2018) . This problem causes errors (such as token repetitions and a lack of tokens) and significantly deteriorates accuracy compared with an AR decoder. To solve this problem, in recent studies, iteratively refining the output (Lee et al., 2018; Gu et al., 2019) and partially autoregressively outputting the sentence divided into segments (Ran et al., 2020) have been proposed. Knowledge distillation (KD) (Kim and Rush, 2016) is also used to address this problem . The output of the AR model is known to mitigate multimodality problems because diversity is suppressed such that the model can be easily learned (Ren et al., 2020) .",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 311,
"end": 329,
"text": "(Lee et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 330,
"end": 346,
"text": "Gu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 491,
"end": 511,
"text": "(Kim and Rush, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 696,
"end": 714,
"text": "(Ren et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAR NMT",
"sec_num": "2.2"
},
{
"text": "Levenshtein Transformer (Gu et al., 2019) is one of the most recent NAR NMT models 1 that introduces a workaround for the aforementioned multimodality problems. In Gu et al. (2019) , the usefulness of the Levenshtein Transformer in machine translation and summarization tasks was verified; however, its usefulness in GEC has not been verified yet.",
"cite_spans": [
{
"start": 24,
"end": 41,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 164,
"end": 180,
"text": "Gu et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "This model has a Transformer (Vaswani et al., 2017) block (T-block) as a primary component, and the original text is given to each T-block. First, the states coming from the lth T-block are as fol-lows:",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (l+1) 0 , h (l+1) 1 , . . . , h (l+1) n = { E y 0 + P 0 , E y 1 + P 1 , . . . , E yn + P n , l = 0 T-block l ( h (l) 0 , h (l) 1 , . . . , h (l) n ) , l > 0",
"eq_num": "(3)"
}
],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "where E and P are token and position embeddings, respectively; y 0 and y n are boundary tokens representing the start and end, respectively. Next, we use these decoder outputs, (h 0 , h 1 , . . . , h n ), to classify deletions, placeholders, and tokens. The deletion classifier uses softmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "( h i \u2022 A \u22a4 ) , (i = 1, . . . n \u2212 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "to perform binary classification of \"deleted\" or \"kept\" for tokens other than boundary tokens. Next, it deletes corresponding tokens. The placeholder classifier uses softmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "( concat (h i , h i+1 ) \u2022 B \u22a4 ) , (i = 0, . . . n \u2212 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "to classify how many placeholders to insert from 0 to K max at every consecutive position pair. Subsequently, it inserts the corresponding number of special tokens <PLH>, where K max is the maximum number of tokens that can be inserted at one time in one place, and we set it to 255. The token classifier uses softmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "( h i \u2022 C \u22a4 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": ", (\u2200y i = <PLH>) to classify and replace all special tokens <PLH> into words that are elements of vocabulary V. Here, A, B, and C are matrices for linearly transforming the number of dimensions of a state or a combination of two states into the number of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Levenshtein Transformer",
"sec_num": "2.3"
},
{
"text": "GEC is a task to correct errors, such as punctuation, grammar, and word selection errors. Various methods have been studied for this task. In recent years, owing to the development of NMT, GEC is often interpreted as a machine translation task. Almost all studies using the BEA Shared Task-2019 datasets (Bryant et al., 2019) used Transformerbased models (Omelianchuk et al., 2020; Kiyono et al., 2019; Kaneko et al., 2020; Grundkiewicz et al., 2019; Choe et al., 2019; Li et al., 2019) . For example, in Li et al. (2019) , a system that combined a CNN-based model with a Transformerbased model was used, and the method in Chollampatt and Ng (2018) was adopted as the CNN architecture.",
"cite_spans": [
{
"start": 304,
"end": 325,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 355,
"end": 381,
"text": "(Omelianchuk et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 382,
"end": 402,
"text": "Kiyono et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 403,
"end": 423,
"text": "Kaneko et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 424,
"end": 450,
"text": "Grundkiewicz et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 451,
"end": 469,
"text": "Choe et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 470,
"end": 486,
"text": "Li et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 505,
"end": 521,
"text": "Li et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GEC",
"sec_num": "2.4"
},
{
"text": "The following are previous studies on highspeed GEC using an NAR model. In Awasthi et al. (2019) , GEC was regarded as a local sequence conversion task, and it was rapidly solved by using a parallel iterative editing model. In Omelianchuk et al. (2020) , the same task was solved by iterative sequence tagging. However, both methods applied linguistic knowledge prepared in advance (such as suffix conversion rules and verb conjugation dictionaries). Thus, it is not easy to apply them to another language. In this study, we propose a method that uses only a training corpus.",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "Awasthi et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 227,
"end": 252,
"text": "Omelianchuk et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GEC",
"sec_num": "2.4"
},
{
"text": "This study aims to analyze the effectiveness of an NAR model for Japanese GEC in terms of accuracy and speed, assuming that NAR is used as a back-end of a writing support system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "In this section, we explain the workflow of a writing support system. Figure 1 shows a schematic diagram of the system, which consists of a frontend with a text input field and a back-end with a GEC system, and it works as follows. (1) The user inputs or deletes the text in the input field 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Writing Support System",
"sec_num": "3.1"
},
{
"text": "(2) The front-end detects the change; (3) it sends the changed sentence to the back-end. (4) The back-end performs GEC; (5) it sends the correction to the front-end. (6) The front-end checks for changes; (a) it suggests changes to the user if there are changes; (b) otherwise, it does nothing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Writing Support System",
"sec_num": "3.1"
},
{
"text": "In this section, we consider the system input ((1) and (2)) and response time ((2) to (6a)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges for the System",
"sec_num": "3.2"
},
{
"text": "We consider two problems with input from users of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Input",
"sec_num": null
},
{
"text": "The first is how to process a sentence in the middle of input. When the user finishes writing a sentence (in other words, when the user enters a line), the GEC result of the sentence should be presented to the user. However, it is unclear how to deal with an incomplete sentence. This is because the backend system may not perform accurate GEC owing to its incompleteness or shortness. Thus, we propose the following hypothesis: if the incomplete sentence is short, the correction accuracy deteriorates; however, if it is long, the correction accuracy approaches that of the complete sentence. We verify this hypothesis in Subsection 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Input",
"sec_num": null
},
{
"text": "The second is the problem of the Japanese input method. In Japanese, unlike English, user inputs are processed through a kana-kanji conversion system 3 . In other words, when the front-end is receiving the text through the kana-kanji conversion, it is not evident in what unit (character, word, phrase, or whole sentence) the errors can be appropriately detected 4 . In this study, we assume that users are intermediate Japanese learners and treat the input string as words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Input",
"sec_num": null
},
{
"text": "Response Time The processing speed from (2) to (6a) in the system flow dominates the response time of the system, which affects the user experience. It is known that not only is responsiveness required, but also users prefer a system with constant response speed over a system with variable response speed (Shneiderman, 1979) . We analyze the processing time of GEC in Subsection 4.3.",
"cite_spans": [
{
"start": 306,
"end": 325,
"text": "(Shneiderman, 1979)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Input",
"sec_num": null
},
{
"text": "Dataset We use data from the Lang-8 learner corpus (Mizumoto et al., 2011) . We use the TMU Evaluation Corpus for Japanese Learners (Koyama et al., 2020) for the validation and test sets 5 . All data, including the training set, are preprocessed as in Koyama et al. (2020) . Table 1 presents the number of sentences in the data. We use the same training set in our experiments with both complete and incomplete sentences.",
"cite_spans": [
{
"start": 51,
"end": 74,
"text": "(Mizumoto et al., 2011)",
"ref_id": "BIBREF15"
},
{
"start": 132,
"end": 153,
"text": "(Koyama et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 252,
"end": 272,
"text": "Koyama et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "To evaluate the performance of GEC for incomplete sentences, we segment the test data to the word level and then create incomplete sentences # of sentences # of corrections by increasing the number of words from the beginning. For example, 10 sentences are created from a 10-word sentence. Next, based on the word alignment between the source and target sentences, we create parallel sentences for incomplete sentences. Consequently, 9,710 sentence pairs are created. We use these data to evaluate the performance of GEC for incomplete sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "Tokenization We tokenize data in all models as follows. First, we segment data into morpheme units using MeCab 6 (Ver. 0.996) using the Uni-Dic 7 (Ver. 2.2.0) as a dictionary. Next, we divide the morpheme units into subword units by applying the byte pair encoding (Sennrich et al., 2016) model for dealing with rare words. We apply character normalization (compatibility decomposition, followed by canonical composition) and share vocabulary between source and target sides. 8 The vocabulary size was set to 30,000 words. We use sentencepiece 9 for implementation.",
"cite_spans": [
{
"start": 265,
"end": 288,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 476,
"end": 477,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "In this study, we apply the Levenshtein Transformer (Gu et al., 2019) , which is a Transformer-based NAR neural model, to GEC. We update the model 300,000 times with a batch size of 64,000 tokens and select the model with the highest GLEU score (Napoles et al., 2016) for the validation set. Other hyperparameters are the same as in Gu et al. (2019) . We use publicly available PyTorch-based code 10 for implementation. In this paper, this model is called the LevT model. The maximum number of iterative refinements was set to nine in a previous study (Gu et al., 2019) . However, it is unclear whether it is the correct value for GEC because GEC is a local sequence conversion task in which almost all the source words remain in the target side. Therefore, we also evaluate the performance when the maximum iterative refinement number is changed.",
"cite_spans": [
{
"start": 52,
"end": 69,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 245,
"end": 267,
"text": "(Napoles et al., 2016)",
"ref_id": null
},
{
"start": 333,
"end": 349,
"text": "Gu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 552,
"end": 569,
"text": "(Gu et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAR Model",
"sec_num": null
},
{
"text": "Training data are obtained by replacing the corrected sentences with the output of an AR model. We use it to train a KD model of LevT. The hyperparameters are the same as for the LevT model, and the model described in the next paragraph is used for the AR model. In this paper, this model is called the LevT+KD model. AR Model Because we focus on speeding up GEC, we adopt the CNN-based model (Chollampatt and Ng, 2018), which is faster than the Transformer-based model as the AR baseline. Unlike Chollampatt and Ng (2018) , the output is not reranked to match the conditions with the LevT model. Other hyperparameters are the same as in Chollampatt and Ng (2018) . We use publicly available PyTorch-based code 11 for implementation. In this paper, this model is called the CNN model.",
"cite_spans": [
{
"start": 497,
"end": 522,
"text": "Chollampatt and Ng (2018)",
"ref_id": "BIBREF3"
},
{
"start": 638,
"end": 663,
"text": "Chollampatt and Ng (2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NAR Model",
"sec_num": null
},
{
"text": "We use the GLEU score (Napoles et al., 2016) and the F 0.5 score, which weighs the precision as twice the recall, as evaluation metrics for the correction accuracy of GEC. We map words automatically using the ER-RANT 12 to calculate F 0.5 . However, because the ERRANT is designed for English, we cannot use it directly; instead, we specify the Levenshtein distance in the distance function that calculates the edit distance without using linguistic information. Furthermore, we measure the F 0.5 score in terms of the word-wise agreement.",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "(Napoles et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correction Evaluation",
"sec_num": null
},
{
"text": "We measure the inference speed with the following settings. We use the Intel \u00ae Xeon \u00ae processor E5-2660 without GPUs. To measure the performance in a realistic setting, we set the batch size to one sentence. We measure time using the built-in time module in Python. Specifically, the inference speed of one sentence is calculated as the change in the system clock from the input of the sentence before tokenization to the output of the GEC result. Table 3 : Number of errors, according to category, that each model modified correctly on the test set. \"M,\" \"R,\" and \"U\" represent missing, replacement, and unnecessary errors, respectively. Each number in parentheses represents the maximum error frequency 13 .",
"cite_spans": [],
"ref_spans": [
{
"start": 448,
"end": 455,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference Speed Evaluation",
"sec_num": null
},
{
"text": "To confirm GEC's effectiveness on complete sentences, we evaluate each model using the test set. Table 2 lists the results. Both GLEU and F 0.5 scores of LevT without KD are worse than those of CNN, but LevT+KD's score exceeds CNN's score. In terms of the recall and precision of LevT and LevT+KD, both are improved by KD, and the precision is significantly increased. Therefore, KD dramatically improves the precision in GEC.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Correction Accuracy",
"sec_num": "4.2"
},
{
"text": "For a more detailed analysis of the effect of KD, the number of categorical errors that the model correctly changed is presented in Table 3 . Comparing LevT and LevT+KD, it can be seen that the number of corrections for all types of errors has increased owing to KD. In particular, the correction accuracy for \"unnecessary\" errors has increased. We believe that this is because LevT+KD can inherit the correction accuracy for the \"unnecessary\" errors of the CNN by KD.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correction Accuracy",
"sec_num": "4.2"
},
{
"text": "LevT+KD has a correction accuracy comparable to that of the CNN. As KD's effectiveness in the NAR model for GEC is confirmed, we focus only on LevT+KD for the NAR model in the subsequent experiments. Number of Iterative Refinements Unlike in the machine translation task (which was mainly addressed in the previous studies on NAR models), it seems that the necessary number of iterative refinements is reduced in the GEC task because the input and the output are close. Therefore, we evaluate the change in performance because of the number of iterative refinements in the LevT+KD model. Figure 2 shows the result of the GLEU and F 0.5 scores for the best epoch selected in the validation set. We can see that after the first iteration, the GLEU score does not change significantly with the maximum number of iterative refinements, and it is almost optimal when the number is three. Furthermore, similar to the GLEU score, we can see that the change in the F 0.5 score after the first iteration is small. Moreover, when the number of iterations is more than one, the score degrades with a decrease in precision. When the maximum number of iterations is one, the score becomes the maximum. Therefore, there is little need to increase the maximum number of iterative refinements of LevT+KD in GEC. One to three iterations are sufficient. Table 4 shows each model's overall correction accuracy for incomplete sentences. Compared with Table 2 , it can be seen that the overall tendency is the same: LevT+KD has a high GLEU score and a high precision, whereas CNN has a high recall. Furthermore, in both models, the GLEU score improves slightly, and the F 0.5 score deteriorates for incomplete sentences. Overall, the GEC model trained only on complete sentences is useful to some extent, even for incomplete sentences. Figure 3 shows each model's correction accuracy per sentence length. 
Comparing the incomplete sentences (b) with the complete sentences (a), we can see that the accuracy is considerably reduced when the sentence length is extremely short in both models. When presenting the GEC result for incomplete sentences, it is considered appropriate not to show it when the input sentence length is short. In addition, the correction accuracy for the complete sentences fluctuates substantially in the range of 31-50 words in both models. This might be attributed to the lack of test sentences. Figure 4 shows the inference speed of test data containing 9,710 incomplete sentences for each model. The average inference times are 0.49, 0.24, and 0.19 seconds for CNN, LevT+KD with the maximum number of iterations set as nine, and LevT+KD with the maximum iterations set as one, respectively. According to our results, the variance of the inference time of LevT+KD is significantly suppressed compared with that of CNN, and the average time is also significantly lower than that of CNN. The variance and average can be further suppressed by reducing the maximum number of iterations. Excluding the outliers, CNN also fits in approximately one second. However, the correspondence between the inference speeds of each model does not change, and LevT+KD is faster than CNN. Here, most sentences with outliers are long sentences created from sentences whose original length is 100 words or more, and we believe that the lengthy sentences are the leading cause of the increase in inference time. Figure 5 depicts the inference speed for each sentence length of each model. Focusing on the linear approximation, we find that CNN is faster than each LevT+KD when the sentence length is extremely short (one to four words). We assume that this is because the Levenshtein Transformer executes three types of operations: delete, insert a placeholder, and replace it with a token in one iterative refinement, thereby having more overhead than the CNN model does. 
However, the results show that each LevT+KD model is faster than CNN when the sentence length is five words or more. Here, we analyze the effect of sentence length on incomplete sentences in terms of both correction accuracy and speed. As shown in Figure 3b , when CNN's overall F 0.5 score for incomplete sentences is used as the minimum criterion, the LevT+KD model's scores for sentences of five or fewer words fall below this criterion. Furthermore, Figure 5 shows that LevT+KD is consistently faster than CNN for inputs of six words or more. In other words, by using the LevT+KD model and performing GEC once six or more words have been input, it is possible to present correction results at high speed while maintaining a certain degree of correction accuracy 15 .",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 596,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1336,
"end": 1343,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1431,
"end": 1438,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1815,
"end": 1823,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 2400,
"end": 2408,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 3395,
"end": 3403,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 4104,
"end": 4113,
"text": "Figure 3b",
"ref_id": "FIGREF2"
},
{
"start": 4298,
"end": 4306,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Correction Accuracy",
"sec_num": "4.2"
},
{
"text": "We show examples of system output in Table 5 . In (1), a sentence in which \" ? desuka?\"",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "Learner's sentence ? CNN ? LevT+KD ? Corrected sentence ? \"Is this really important?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "Learner's sentence CNN LevT+KD Corrected sentence \"It's a cute, cheap and lovely fabric.\" is mistaken for \" ? daka?\" 16 is input. The correction differs depending on the model, but the outputs of both models are grammatically correct. In (2), a sentence in which \" kawaiku,\" which is a conjunctive form of \" kawaii\" (cute), is mistaken for \" kawaiiku\" is input. Both models changed the error part, but both outputs are grammatically incorrect. We believe that the reason for this is that there are few similar error examples in the training set. In the training set, there are 172 errors of \" ? daka?,\" whereas only two errors of \" kawaiikute.\" We assume that the performance of the sequence-to-sequence GEC method is limited by the number of similar errors in the training set. Table 6 presents an example of iterative refinements in LevT+KD. In the first insertion phase, a missing token error and repeated token errors have occurred. The repeated token, \" paatii\" (party), is deleted in the next deletion, and in the next insertion phase, the missing tokens, 16 The Japanese question marker particle, \" ka,\" cannot be added at the end of a sentence in the plain-style sentence.",
"cite_spans": [
{
"start": 1062,
"end": 1064,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 779,
"end": 786,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "17 insert shows the result of both inserting the placeholder and replacing it with the actual token. In addition, because it starts with an empty string, there is no delete in the first iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "yaki\" (bake) and \" wo\" (accusative case marker), are inserted to the left and right of \" paatii,\" recovering from the multimodality problem. However, erroneous parts remain: \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "takuyaki,\" which is a misspelling of \" takoyaki\" (octopus dumplings), is mistakenly corrected as \" takusan yaki.\" Moreover, \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "masu\" (politeness marker), which should be corrected to the past form corresponding to \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "kinou\" (yesterday), is not corrected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\"",
"sec_num": null
},
{
"text": "In this study, we investigated the applicability of the NAR model, which has a constant inference speed, to Japanese GEC toward constructing a writing support system. The experiments showed that the NAR model can obtain a correction accuracy that is equal to or better than that of the AR multilayer convolutional neural model. Furthermore, we demonstrated that the GEC model trained on complete sentences can also be applied to incomplete sentences. However, we found that when the number of input words is small, the correction accuracy is significantly lower than that of the complete sentence. Therefore, the system should defer presenting correction results for short sentences. We also showed that the worst inference time could be reduced by approximately 6.0 seconds, and the average inference time could be reduced by approximately 0.3 seconds in the NAR model that performs one-time iterative refinement compared with the AR model. Future work includes an extrinsic evaluation of the GEC system integrated into a writing support system. Moreover, we plan to investigate a largescale pretrained model to improve GEC's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In the original paper, it is called a \"partially autoregressive model\"; however, in this paper, we call it an NAR model because it is a model that outputs all tokens simultaneously when decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity, we assume that the user enters one sentence per line. In other words, line breaks divide the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The kana-kanji conversion system translates the input hiragana (the Japanese cursive syllabary) into kanji (Chinese characters) when necessary.4 The appropriate unit may change depending on the user's language learning level and Japanese input ability level.5 These data are less noisy than the corrected sentences included initially in the Lang-8 learner corpus and have multiple references to all sentences, which is considered useful for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://taku910.github.io/mecab/ 7 https://unidic.ninjal.ac.jp/ 8 As a preliminary experiment, the source side was set to the character unit, and the target side was set to the subword unit; however, the GLEU score(Napoles et al., 2016) was slightly decreased; therefore, we decided to tokenize both sides in the subword units. 9 https://github.com/google/sentencepiece 10 https://github.com/pytorch/fairseq/tree/master/ examples/nonautoregressive translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/nusnlp/mlconvgec2018 12 https://github.com/chrisjbryant/errant 13 The maximum number of corrections made by each of the three annotators is shown. The smallest ones are 210, 428, and 103.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the performance is stable because more sentences are used to evaluate the GEC model than complete sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentences of five or fewer words account for approximately 32.7% of the incomplete sentences used in this experiment. Furthermore, in reality, considering that long sentences are corrected many times, it is believed that the rate of short sentences of fewer than five words is even lower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We want to thank Yangyang Xi for consenting to use text from Lang-8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parallel iterative edit models for local sequence transduction",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Awasthi",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Rasna",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sabyasachi",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Vihari",
"middle": [],
"last": "Piratla",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4260--4270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Par- allel iterative edit models for local sequence trans- duction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4260-4270. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The BEA-2019 shared task on grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "\u00d8istein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "52--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A neural grammatical error correction system built on better pre-training and sequential transfer learning",
"authors": [
{
"first": "Yo Joong",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Jiyeon",
"middle": [],
"last": "Ham",
"suffix": ""
},
{
"first": "Kyubyong",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Yeoil",
"middle": [],
"last": "Yoon",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "213--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Joong Choe, Jiyeon Ham, Kyubyong Park, and Yeoil Yoon. 2019. A neural grammatical error cor- rection system built on better pre-training and se- quential transfer learning. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 213-227. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5755--5762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multi- layer convolutional encoder-decoder neural network for grammatical error correction. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5755-5762. Association for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O.K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Levenshtein transformer",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Changhan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "11181--11191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11181- 11191. Curran Associates, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing, pages 1700-1709. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4248--4254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4248- 4254. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An empirical study of incorporating pseudo data into grammatical error correction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1236--1242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical er- ror correction. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1236-1242. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Construction of an evaluation corpus for grammatical error correction for learners of Japanese as a second language",
"authors": [
{
"first": "Aomi",
"middle": [],
"last": "Koyama",
"suffix": ""
},
{
"first": "Tomoshige",
"middle": [],
"last": "Kiyuna",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Mio",
"middle": [],
"last": "Arai",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "204--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aomi Koyama, Tomoshige Kiyuna, Kenji Kobayashi, Mio Arai, and Mamoru Komachi. 2020. Construc- tion of an evaluation corpus for grammatical error correction for learners of Japanese as a second lan- guage. In Proceedings of The 12th Language Re- sources and Evaluation Conference, pages 204-211. European Language Resources Association.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1173--1182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The LAIX systems in the BEA-2019 GEC shared task",
"authors": [
{
"first": "Ruobing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yefei",
"middle": [],
"last": "Zha",
"suffix": ""
},
{
"first": "Yonghong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shiman",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "159--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruobing Li, Chuan Wang, Yefei Zha, Yonghong Yu, Shiman Guo, Qiang Wang, Yang Liu, and Hui Lin. 2019. The LAIX systems in the BEA-2019 GEC shared task. In Proceedings of the Fourteenth Work- shop on Innovative Use of NLP for Building Educa- tional Applications, pages 159-167. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "INTER-SPEECH 2010, 11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTER- SPEECH 2010, 11th Annual Conference of the Inter- national Speech Communication Association, pages 1045-1048. International Symposium on Computer Architecture.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mining revision log of language learning SNS for automated Japanese error correction of second language learners",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoya Mizumoto, Mamoru Komachi, Masaaki Na- gata, and Yuji Matsumoto. 2011. Mining revi- sion log of language learning SNS for automated Japanese error correction of second language learn- ers. In Proceedings of 5th International Joint Con- ference on Natural Language Processing, pages 147-155. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "GECToR -grammatical error correction: Tag, not rewrite",
"authors": [
{
"first": "Kostiantyn",
"middle": [],
"last": "Omelianchuk",
"suffix": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Atrasevych",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Chernodub",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Skurzhanskyi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "163--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR -grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-170. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to recover from multi-modality errors for non-autoregressive neural machine translation",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Qiu Ran",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3059--3069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2020. Learning to recover from multi-modality errors for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3059- 3069. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A study of nonautoregressive model for sequence generation",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jinglin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "149--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Ren, Jinglin Liu, Xu Tan, Zhou Zhao, Sheng Zhao, and Tie-Yan Liu. 2020. A study of non- autoregressive model for sequence generation. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 149- 159. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Human factors experiments in designing interactive systems",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Shneiderman",
"suffix": ""
}
],
"year": 1979,
"venue": "Computer",
"volume": "12",
"issue": "12",
"pages": "9--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Shneiderman. 1979. Human factors experi- ments in designing interactive systems. Computer, 12(12):9-19.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran As- sociates, Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Understanding knowledge distillation in nonautoregressive machine translation",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020. Understanding knowledge distillation in non- autoregressive machine translation. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Schematic diagram of a writing support system."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Accuracy of LevT+KD model for the maximum number of iterative refinements. The solid red, dotted black, dash-dotted, and broken blue-black lines represent the GLEU score, precision, recall, and F 0.5 score, respectively."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Correction accuracy per sentence length breakdown for complete and incomplete sentences. The solid, dotted, and straight dash-dotted green lines represent the F 0.5 scores of LevT+KD and CNN, and CNN's overall F 0.5 score, respectively. The step-form graph represents the number of sentences."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Inference speed of each model. The number in parentheses in the model name represents the maximum number of iterations. The graph on the left does not consider outliers, and the graph on the right shows outliers as \"+\" in the range where the whiskers length exceeds 1.5 times the interquartile range."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Inference speed of each model per sentence length breakdown. Each straight line represents a linear approximation, and the number in parentheses in the model name represents the maximum number of iterations."
},
"TABREF2": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">M (242) R (441) U (124)</td></tr><tr><td>CNN</td><td>33</td><td>73</td><td>32</td></tr><tr><td>LevT</td><td>22</td><td>54</td><td>5</td></tr><tr><td>LevT+KD</td><td>33</td><td>79</td><td>20</td></tr></table>",
"text": "Correction accuracy of each model for complete sentences from the test set. \"Prec.\" and \"Rec.\" represent precision and recall, respectively.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Correction accuracy of each model for incomplete sentences.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Input: learner's sentence</td><td/><td/><td/><td/></tr><tr><td>insert 17 (0)</td><td/><td/><td/><td/></tr><tr><td>delete (1)</td><td/><td/><td/><td/></tr><tr><td>insert (1)</td><td/><td/><td/><td/></tr><tr><td>delete (2)</td><td/><td/><td/><td/></tr><tr><td>insert (2)</td><td/><td/><td/><td/></tr><tr><td>kinou</td><td>no yoru wa takusan</td><td>yaki paatii</td><td>paatii</td><td>wo shi masu</td></tr><tr><td>System output sentence</td><td colspan=\"2\">\"I have a lot of bake party last night.\"</td><td/><td/></tr><tr><td>Corrected sentence</td><td colspan=\"2\">\"I was at a Takoyaki party last night.\"</td><td/><td/></tr></table>",
"text": "Output examples for each model. Grammatical errors are underlined. Boldface represents where the model has changed the text. Double quotes represent the meaning of the sentence.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table/>",
"text": "Example of iterative refinements. The number of iterations is written in parentheses. Grammatical errors are underlined. Boldface represents the inserted word, and strikethrough represents the deleted word. Italics represent the Japanese pronunciations of each word, and double quotes represent the meaning of the sentence.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}