{
"paper_id": "I17-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:40:05.324708Z"
},
"title": "Grammatical Error Detection Using Error-and Grammaticality-Specific Word Embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {}
},
"email": "kaneko-masahiro@ed"
},
{
"first": "Yuya",
"middle": [],
"last": "Sakaizawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {}
},
"email": "sakaizawa-yuya@ed"
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {}
},
"email": "komachi@tmu.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns. Most existing algorithms for learning word embeddings usually model only the syntactic context of words so that classifiers treat erroneous and correct words as similar inputs. We address the problem of contextual information by considering learner errors. Specifically, we propose two models: one model that employs grammatical error patterns and another model that considers grammaticality of the target word. We determine grammaticality of n-gram sequence from the annotated error tags and extract grammatical error patterns for word embeddings from large-scale learner corpora. Experimental results show that a bidirectional long-short term memory model initialized by our word embeddings achieved the state-of-the-art accuracy by a large margin in an English grammatical error detection task on the First Certificate in English dataset.",
"pdf_parse": {
"paper_id": "I17-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns. Most existing algorithms for learning word embeddings usually model only the syntactic context of words so that classifiers treat erroneous and correct words as similar inputs. We address the problem of contextual information by considering learner errors. Specifically, we propose two models: one model that employs grammatical error patterns and another model that considers grammaticality of the target word. We determine grammaticality of n-gram sequence from the annotated error tags and extract grammatical error patterns for word embeddings from large-scale learner corpora. Experimental results show that a bidirectional long-short term memory model initialized by our word embeddings achieved the state-of-the-art accuracy by a large margin in an English grammatical error detection task on the First Certificate in English dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grammatical error detection that can identify the location of errors is useful for second language learners and teachers. It can be seen as a sequence labeling task, which is typically solved by a supervised approach. For example, achieved the state-of-theart accuracy in English grammatical error detection using a bidirectional long-short term memory Table 1 : Cosine similarity of phrase pairs for each word embedding method.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(Bi-LSTM) neural network. Their approach uses word embeddings learned from a large-scale native corpus to address the data sparseness problem of learner corpora. However, most of the word embeddings, including the one used by , model only the context of the words from a raw corpus written by native speakers, and do not consider specific grammatical errors of language learners. This leads to the problem wherein the word embeddings of correct and incorrect expressions tend to be similar (Table 1 , columns W2V and C&W) so that the classifier must decide grammaticality of a word from contextual information with a similar input vector.",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "(Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this problem, we introduce two methods: 1) error-specific word embeddings (EWE), which employ grammatical error patterns, that is to say the word pairs that learners tend to easily confuse; 2) grammaticalityspecific word embeddings (GWE), which consider grammatical correctness of n-grams. In this paper, we use the term grammaticality to refer to the correct or incorrect label of the target word given its surrounding context. We also combine these methods, which we will refer to as error-and grammaticality-specific word embeddings (E&GWE). Table 1 shows the cosine similarity of phrase pairs using word2vec (W2V), C&W embeddings (Collobert and Weston, 2008) , EWE, GWE, and E&GWE 1 . It illustrates that EWE, GWE, and E&GWE are able to distinguish between correct and incorrect phrase pairs while maintaining the contextual relation. Furthermore, we conducted experiments using the large-scale Lang-8 2 English learner corpus. The results demonstrated that representation learning is crucial for exploiting a noisy learner corpus for grammatical error detection.",
"cite_spans": [
{
"start": 645,
"end": 673,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 556,
"end": 563,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
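{
"text": "As footnote 1 notes, the phrase similarities in Table 1 are computed from the mean vectors of the word vectors. A minimal Python sketch of this computation (the embeddings lookup table is a stand-in, not part of the released code):

import numpy as np

def phrase_vector(phrase, embeddings):
    # Mean of the word vectors of the phrase (footnote 1).
    return np.mean([embeddings[w] for w in phrase.split()], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# e.g., cosine(phrase_vector('go to school', emb), phrase_vector('go to a school', emb))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},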
{
"text": "The main contributions of this study are summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We achieve the state-of-the-art accuracy in grammatical error detection on the First Certificate in English dataset (FCE-public) using a Bi-LSTM model initialized using our word embeddings that consider grammaticality and error patterns extracted from the FCE-public corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrate that updating word embeddings using error patterns extracted from the Lang-8 (Mizumoto et al., 2011) in addition to FCE-public corpora greatly improves grammatical error detection. \u2022 The proposed word embeddings can distinguish between correct and incorrect phrase pairs. \u2022 We have released our code and learned word embeddings 3 . The rest of this paper is organized as follows: in Section 2, we first give a brief overview of English grammatical error detection; Section 3 describes our grammatical error detection model using error-and grammaticality-specific word embeddings; Section 4 evaluates this model on the FCE-public dataset, and Section 5 presents an analysis of the grammatical error detection model and learned word embeddings; and Section 6 concludes this paper.",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Mizumoto et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many studies on grammatical error detection try to address specific types of grammatical errors (Tetreault and Chodorow, 2008; Han et al., 2006; Kochmar and Briscoe, 2014) . In contrast, target all errors using a Bi-LSTM, whose embedding layer is initialized with word2vec. We also address unrestricted grammatical error detection; however, we focus on learning word embeddings that consider a learner's error pattern and grammaticality of the target word. In this paper, subsequently, our word embeddings give statistically significant improvements over their method using exactly the same training data.",
"cite_spans": [
{
"start": 96,
"end": 126,
"text": "(Tetreault and Chodorow, 2008;",
"ref_id": "BIBREF16"
},
{
"start": 127,
"end": 144,
"text": "Han et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 145,
"end": 171,
"text": "Kochmar and Briscoe, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Several studies considering grammatical error patterns in language learning have been performed. For example, Sawai et al. (2013) suggest correction candidates for verbs using the learner error pattern, and Liu et al. (2010) automatically correct verb selection errors in English essays written by Chinese students learning English, based on the error patterns created from a synonym dictionary and an English-Chinese bilingual dictionary. The main difference between these previous studies and ours is that the previous studies focused only on verb selection errors.",
"cite_spans": [
{
"start": 110,
"end": 129,
"text": "Sawai et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 207,
"end": 224,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "As an example of research on learning word embeddings that consider grammaticality, Alikaniotis et al. 2016proposed a model for constructing word embeddings by considering the importance of each word in predicting a quality score for an English learner's essay. Their approach learns word embedding from a document-level score using the mean square error whereas we learn word embeddings from a word-level binary error information using the hinge loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "The use of a large-scale learner corpus on grammatical error correction is described in works such as Xie et al. (2016) and Chollampatt et al. (2016a,b) . These studies used the Lang-8 corpus as training data for phrase-based machine translation (Xie et al., 2016) and neural network joint models (Chollampatt et al., 2016a,b) . In our study, Lang-8 was used to extract error patterns that were then utilized to learn word embeddings. Our experiments show that Lang-8 cannot be used as a reliable annotation for LSTM-based classifiers. Instead, we need to extract useful information as error patterns to improve the performance of error detection.",
"cite_spans": [
{
"start": 102,
"end": 119,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 124,
"end": 152,
"text": "Chollampatt et al. (2016a,b)",
"ref_id": null
},
{
"start": 246,
"end": 264,
"text": "(Xie et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 297,
"end": 326,
"text": "(Chollampatt et al., 2016a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Error-and Grammaticality-Specific Word Embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Error Detection Using",
"sec_num": "3"
},
{
"text": "In this section, we describe the details of the proposed word embeddings: EWE, GWE, and E&GWE. These models extend an existing word Figure 1 : Architecture of our learning methods for word embeddings (a) EWE and (b) GWE. Both models concatenate the word vectors of a sequence for window size and feed them into the hidden layer. Then, EWE outputs a scalar value, and GWE outputs a prediction of the scalar value and the label of the word in the middle of the sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammatical Error Detection Using",
"sec_num": "3"
},
{
"text": "embedding learning algorithm called C&W Embeddings (Collobert and Weston, 2008) and learn word embeddings that consider grammatical error patterns and grammaticality of the target word. We first describe the well-known C&W embeddings, and then explain our extensions. Finally, we introduce how we incorporate the learned word embeddings to the grammatical error detection task using a Bi-LSTM.",
"cite_spans": [
{
"start": 51,
"end": 79,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Error Detection Using",
"sec_num": "3"
},
{
"text": "Collobert and Weston (2008; 2011) proposed a window-based neural network model that learns distributed representations of target words based on the local context. Here, target word w t is the central word in the window sized sequence of words S = (w 1 , . . . , w t , . . . , w n ). The representation of the target word w t is compared with the representations of other words that appear in the same sequence (\u2200w i \u2208 S|w i \u0338 = w t ). A negative sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "S \u2032 = (w 1 , ..., w c , ..., w n |w c \u223c V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "is created by replacing the target word w t with a randomly selected word from the vocabulary V to distinguish between the negative sample S \u2032 and the original word sequence S. In their method, the word sequence S and the negative sample S \u2032 are converted into vectors in the embedding layer, which are fed as embeddings. They concatenate each converted vector and treat it as input vector x \u2208 R n\u00d7D , where D is the dimension of the embedding layer. The input vector x is then subjected to a linear transformation (Eq. (1)) to calculate the vector i of the hidden layer. Then, the resulting vector is subjected to another linear transformation (Eq. (2)) to obtain the output f (x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "i = \u03c3(W hx x + b h ) (1) f (x) = W oh i + b o (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "Here, W hx is the weight matrix between the input vector and the hidden layer, W oh is the weight matrix between the hidden layer and the output layer, b o and b h are biases, and \u03c3 is an element-wise nonlinear function tanh. This model for word representation learns distributed representations by making the ranking of the original word sequence S higher than that of the negative samples S \u2032 , which includes noise due to replaced words. The difference between the original word sequence and the word sequence including noise is optimized to be at least 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "loss c (S, S \u2032 ) = max(0, 1 \u2212 f (x) + f (x \u2032 )) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "Here, x \u2032 is a transformed vector at the embedding layer obtained by converting the word w c of the negative sample S \u2032 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
{
"text": "Our proposed models learn distributed representations using the same hinge loss (Eq. (3)) so the model could distinguish between correct and incorrect phrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},
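{
"text": "A minimal numpy sketch of the scoring network (Eqs. (1)-(2)) and the hinge loss (Eq. (3)); the weight names follow the equations, and the code is illustrative rather than the released implementation:

import numpy as np

def score(window_vecs, W_hx, b_h, W_oh, b_o):
    # Concatenate the window's word vectors into x, then apply Eqs. (1)-(2).
    x = np.concatenate(window_vecs)
    i = np.tanh(W_hx @ x + b_h)   # Eq. (1), with sigma = tanh
    return float(W_oh @ i + b_o)  # Eq. (2)

def hinge_loss(f_orig, f_neg):
    # Eq. (3): rank the original sequence at least 1 above the corrupted one.
    return max(0.0, 1.0 - f_orig + f_neg)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C&W Embeddings",
"sec_num": "3.1"
},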
{
"text": "EWE learns word embeddings using the same model as C&W embeddings. However, rather than creating negative samples randomly, we created them by replacing the target word w t with words w c that learners tend to easily confuse with the target word w t . In such a case, w c is sampled by the conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w c |w t ) = |w c , w t | \u2211 wc\u2032 |w c \u2032, w t |",
"eq_num": "(4)"
}
],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
{
"text": "where, w t is a target word, w c \u2032 is a set of w c regarding w t . This model learns to distinguish between a correct and an incorrect word by considering error patterns. Replacement candidates, treated as error patterns, are extracted from a learner corpus annotated with correction. Figure 1a represents architecture of EWE.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 294,
"text": "Figure 1a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
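{
"text": "A sketch of the negative sampling in Eq. (4); error_patterns is an assumed mapping from a target word w_t to a Counter of confusion words w_c with their corpus frequencies:

import random
from collections import Counter

def sample_confusion(w_t, error_patterns):
    counts = error_patterns.get(w_t)
    if not counts:
        return None  # no pattern for w_t: fall back to random sampling as in C&W
    words = list(counts)
    freqs = [counts[w] for w in words]
    # Draw w_c with probability proportional to its frequency (Eq. (4)).
    return random.choices(words, weights=freqs, k=1)[0]

# e.g., sample_confusion('entrance', {'entrance': Counter({'entery': 3, 'entry': 1})})
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},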
{
"text": "The bus will pick you up right at your hotel entery/*entrance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
{
"text": "The above sentence is a simple example from the test data of FCE-public corpus. In this sentence, the word \"entery\" is incorrect and the \"entrance\" is the correct word. In this case, w t is \"entrance\" and w c is \"entery\". Note that we use only one-toone (substitution) error patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
{
"text": "Due to the data sparseness problem, the context of infrequent words cannot be properly learned. This problem is solved by using a large corpus to pre-train word2vec. By fine-tuning vectors whose contexts have already been learned, it is possible to learn word embeddings with no or few replacement candidates in a learner corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-Specific Word Embeddings (EWE)",
"sec_num": "3.2"
},
{
"text": "Similar to the approach of Alikaniotis et al. (2016) for essay score prediction, we extend C&W embeddings to distinguish between correct words and incorrect words by including grammaticality in distributed representations (Figure 1b ). For that purpose, we add an additional output layer to predict grammaticality of word sequences, and extend Equation (3) to calculate following two error func-tions.",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "Alikaniotis et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 222,
"end": 232,
"text": "(Figure 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammaticality-Specific Word Embeddings (GWE)",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f grammar (x) = W gh i + b g (5) y = softmax(f grammar (x)) (6) loss p (S) = \u2212 \u2211 y \u2022 log(\u0177) (7) loss(S, S \u2032 ) = \u03b1 \u2022 loss c (S, S \u2032 ) + (1 \u2212 \u03b1) \u2022 loss p (S)",
"eq_num": "(8)"
}
],
"section": "Grammaticality-Specific Word Embeddings (GWE)",
"sec_num": "3.3"
},
{
"text": "In Equation 5, f grammar is the predicted label of the original word sequence S. W gh is the weight matrix and b g is the bias. In Equation 6, the prediction probability\u0177 is computed using the softmax function for f grammar . The error loss p is computed using the cross-entropy function using the gold label's vector y of the target word (Eq. (7)). Finally, two errors are combined to calculate loss (Eq. (8)). Here, \u03b1 is a hyperparameter that determines the weight of the two error functions. We use the original tag label (0/1) of the FCEpublic data as the grammaticality of word sequences for learning. Note that we do not use label information from Lang-8, because the error annotation of Lang-8 error annotations are too noisy to train an error detection model directly from the corpus. Negative examples of GWE are created randomly, that are similar to the case with C&W.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammaticality-Specific Word Embeddings (GWE)",
"sec_num": "3.3"
},
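{
"text": "A minimal sketch of the combined objective in Eqs. (5)-(8); grammar_logits stands for f_{grammar}(x), gold is the one-hot gold label vector y, and alpha defaults to the 0.03 reported in Section 4.2 (names are illustrative):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gwe_loss(f_orig, f_neg, grammar_logits, gold, alpha=0.03):
    loss_c = max(0.0, 1.0 - f_orig + f_neg)                  # Eq. (3)
    y_hat = softmax(grammar_logits)                          # Eq. (6)
    loss_p = -float(np.sum(gold * np.log(y_hat + 1e-12)))    # Eq. (7)
    return alpha * loss_c + (1.0 - alpha) * loss_p           # Eq. (8)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammaticality-Specific Word Embeddings (GWE)",
"sec_num": "3.3"
},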
{
"text": "Word Embeddings (E&GWE) E&GWE is a model that combines EWE and GWE. In particular, E&GWE model creates negative examples using an error pattern as in EWE and outputs score and predicts grammaticality as in GWE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error-and Grammaticality-Specific",
"sec_num": "3.4"
},
{
"text": "We use bidirectional LSTM (Bi-LSTM) (Graves and Schmidhuber, 2005) as a classifier for all our experiments for English grammatical error detection, because Bi-LSTM demonstrates the state-of-the-art accuracy for this task compared to other architectures such as CRF and CNNs . The LSTM calculation is expressed as follows:",
"cite_spans": [
{
"start": 36,
"end": 66,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i t = \u03c3(W ie e t + W ih h t\u22121 + W ic c t\u22121 + b i ) (9) f t = \u03c3(W f e e t + W f h h t\u22121 + W f c c t\u22121 + b f )",
"eq_num": "(10)"
}
],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "Figure 2: A bidirectional LSTM network. The word vectors e i enter the hidden layer to predict the labels of each word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = i t \u2299 g(W ce e t +W ch h t\u22121 + b c ) + f t \u2299 c t\u22121 (11) o t = \u03c3(W oe e t + W oh h t\u22121 + W oc c t + b o ) (12) h t = o t \u2299 h(c t )",
"eq_num": "(13)"
}
],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "Here, e t is the word embedding of word w t , and W ie , W f e , W ce and W oe are weight matrices. Each b i , b f , b c and b o are biases. An LSTM cell block has an input gate i t , a memory cell c t , a forget gate f t and an output gate o t to control information flow. In addition, g and h are the sigmoid function and \u03c3 is the tanh. \u2299 is the pointwise multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
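{
"text": "A numpy sketch of one LSTM step implementing Eqs. (9)-(13); the dictionary p of weights and biases is assumed to use the same names as the equations:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_t, h_prev, c_prev, p):
    i_t = sigmoid(p['W_ie'] @ e_t + p['W_ih'] @ h_prev + p['W_ic'] @ c_prev + p['b_i'])  # Eq. (9)
    f_t = sigmoid(p['W_fe'] @ e_t + p['W_fh'] @ h_prev + p['W_fc'] @ c_prev + p['b_f'])  # Eq. (10)
    c_t = i_t * np.tanh(p['W_ce'] @ e_t + p['W_ch'] @ h_prev + p['b_c']) + f_t * c_prev  # Eq. (11)
    o_t = sigmoid(p['W_oe'] @ e_t + p['W_oh'] @ h_prev + p['W_oc'] @ c_t + p['b_o'])     # Eq. (12)
    return o_t * np.tanh(c_t), c_t                                                       # Eq. (13)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},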
{
"text": "We apply a bidirectional extension of LSTM, as shown in Figure 2 , to encode the word embedding e i from both left-to-right and right-to-left directions.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 64,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = W yh (h L t \u2297 h R t ) + b y",
"eq_num": "(14)"
}
],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
{
"text": "The Bi-LSTM model maps each word w t to a pair of hidden vectors h L t and h R t , i.e., the hidden vector of the left-to-right LSTM and right-to-left LSTM, respectively. \u2297 is the concatenation. W yh is a weight matrix and b y is a bias. We also added an extra hidden layer for linear transformation between each of the composition function and the output layer, as discussed in the previous study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},
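{
"text": "For reference, a compact PyTorch sketch of the Bi-LSTM detector with the extra hidden layer; dimensions follow Section 4.3, and this is an assumed re-implementation, not the released code:

import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=200, proj_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # initialized from EWE/GWE/E&GWE
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, proj_dim)  # extra hidden layer
        self.out = nn.Linear(proj_dim, 2)                # correct / incorrect per token

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))  # concatenated h^L_t and h^R_t
        return self.out(torch.tanh(self.proj(h)))  # Eq. (14) plus the extra layer
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional LSTM (Bi-LSTM)",
"sec_num": "3.5"
},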
{
"text": "We used the FCE-public dataset and the Lang-8 English learner corpus to train classifiers and word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "For this evaluation, we used the test set from the FCE-public dataset (Yannakoudakis et al., 2011) for all experiments.",
"cite_spans": [
{
"start": 70,
"end": 98,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "FCE-public dataset. First, we compared the proposed methods (EWE, GWE, and E&GWE) to previous methods (W2V and C&W) relative to training word embeddings (see Table 2a ). For this purpose, we trained our word embeddings and a classifier, which were initialized using pre-trained word embeddings, with the training set from the FCE-public dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Table 2a",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "This dataset is one of the most famous English learner corpus in grammatical error correction. It contains essays written by English learners. It is annotated with grammatical errors along with error classification. We followed the official split of the data: 30, 953 sentences as a training set, 2, 720 sentences as a test set, and 2, 222 sentences as a development set. In the FCE-public dataset, the number of target words of error patterns is 4,184, the number of tokens of the replacement candidates is 9,834, and the number of types is 6,420. All manually labeled words in the FCEpublic dataset were set as the gold target to train the GWE. For a missing word error, an error label is assigned to the word immediately after the missing word (see Table 4 (c)). To prevent overfitting, singleton words in the training data were taken as unknown words.",
"cite_spans": [],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "Lang-8 corpus. Furthermore, we added the large-scale Lang-8 English learner corpus to the FCE-public dataset to train word embeddings (FCE+EWE-L8 and FCE+E&GWE-L8) to explore the effect of a large data on the proposed methods. We used a classifier trained using only the FCE-public dataset whose word embeddings were initialized with the large-scale pre-trained word embeddings to compare the results with those of a classifier trained directly using a noisy large-scale data whose word embeddings were initialized using word2vec (FCE&L8+W2V, see Table 2b ).",
"cite_spans": [],
"ref_spans": [
{
"start": 547,
"end": 555,
"text": "Table 2b",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "Lang-8 learner corpus has over 1 million manually annotated English sentences written by ESL learners. Extraction of error patterns from Lang-8 in the process of creating negative samples to train word embeddings was performed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "1. Extract word pairs using the dynamic programming from a correct sentence and an incorrect sentence. 2. If the learner's word of the extracted word pair is included in the vocabulary created from FCE-public, include it to the error patterns. In the Lang-8 dataset the number of types of target words of the replacement candidates is 10,372, the number of tokens of the replacement candidates is 272,561, and the number of types is 61,950.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
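{
"text": "A sketch of step 1, extracting one-to-one substitution pairs by dynamic-programming alignment of the token sequences (an assumed implementation of the described procedure, not the released code):

def align_substitutions(learner, correct, vocab):
    # Token-level edit-distance DP table.
    n, m = len(learner), len(correct)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if learner[i - 1] == correct[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    # Backtrace, keeping substitutions whose learner word is in the FCE-public vocabulary.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if d[i][j] == d[i - 1][j - 1] + (learner[i - 1] != correct[j - 1]):
            if learner[i - 1] != correct[j - 1] and learner[i - 1] in vocab:
                pairs.append((correct[j - 1], learner[i - 1]))  # (w_t, w_c)
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return pairs
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},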
{
"text": "Our experiments on FCE+EWE-L8 and FCE+E&GWE-L8 were conducted by combining error patterns from all of Lang-8 corpus and the training part of FCE-public corpus to train word embeddings. However, since the number of error patterns of Lang-8 is larger than that of FCE-public, we normalized each frequency so that the ratio was 1:1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "We use F 0.5 as the main evaluation measure, following a previous study .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "This measure was also adopted in the CoNLL-14 shared task on error correction task (Ng et al., 2014) . It combines both precision and recall, while assigning twice as much weight to precision because accurate feedback is often more important than coverage in error detection applications (Nagata and Nakatani, 2010) . Nagata and Nakatani (2010) presented a precision-oriented error detection system for articles and numbers that demonstrated precision of 0.72 and a recall of 0.25 and achieved a learning effect that is comparable to that of a human tutor.",
"cite_spans": [
{
"start": 83,
"end": 100,
"text": "(Ng et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 288,
"end": 315,
"text": "(Nagata and Nakatani, 2010)",
"ref_id": "BIBREF12"
},
{
"start": 318,
"end": 344,
"text": "Nagata and Nakatani (2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
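{
"text": "For completeness, the F_0.5 computation from token-level counts; this is the standard F_beta formula, shown here as a small helper:

def f_beta(tp, fp, fn, beta=0.5):
    # F_0.5 weights precision twice as much as recall.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    if p == 0.0 or r == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * p * r / (b2 * p + r)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},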
{
"text": "We set parameters for word embeddings according to the previous study . The dimension of the embedding layer of C&W, GWE, EWE and E&GWE is 300 and the dimension of the hidden layer is 200. We used a publicly released word2vec vectors (Chelba et al., 2013) trained on the News crawl from Google news 4 as pre-trained word embeddings. We set other parameters in our model by running a preliminary experiment in which the window size is 3, the number of negative samples is 600, the linear interpolation \u03b1 is 0.03, and the optimizer is the ADAM algorithm (Kingma and Ba, 2015) with the initial learning rate of 0.001. GWE is initialized randomly and EWE is initialized using pre-trained word2vec.",
"cite_spans": [
{
"start": 234,
"end": 255,
"text": "(Chelba et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 552,
"end": 573,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "4.2"
},
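{
"text": "The pre-trained vectors can be loaded for initialization with gensim; a minimal sketch (the file name assumes the commonly distributed GoogleNews binary from footnote 4):

from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
vec = w2v['entrance']  # 300-dimensional vector used to initialize the embedding layer
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "4.2"
},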
{
"text": "We use EWE, GWE, and E&GWE word embeddings to initialize the Bi-LSTM neural network, and predict the correctness of the target word in the input sentence. We update initialized weights of embedding layer while training classifiers, since it showed better results. The parameters and settings of the network are the same as in a previous study . Specifically, in Bi-LSTM the dimensions of the embedding layer, the first hidden layer, and the second hidden layer are 300, 200, and 50, respectively. The Bi-LSTM model was optimized using the ADAM algorithm (Kingma and Ba, 2015) with an initial learning rate of 0.001, and a batch size of 64 sentences. Table 2a shows experimental results comparing Bi-LSTM models trained on FCE-public dataset initialized with two baselines (FCE+W2V and FCE+C&W) and the proposed word embeddings (FCE+EWE, FCE+GWE and FCE+E&GWE) in the error detection task. We used two models for FCE+W2V: FCE+W2V (R&Y 2016) is the experimental result reported in a previous study , and FCE+W2V (our reimplementation of (R&Y, 2016)) is the experimental result of our reimplementation of . FCE+E&GWE is a model combining FCE+EWE and FCE+GWE. We conducted Wilcoxon signed rank test (p \u2264 0.05) 5 times. Table 2b shows the result of using additional large-scale Lang-8 corpus. Compared to FCE&L8+W2V, FCE+EWE-L8 has better results within the three evaluation metrics. From this result, it can be seen that it is better to extract and use error patterns than simply using Lang-8 corpus as a training data to train a classifier, as it contains noise in the correct sentences. Furthermore, by combining with GWE method, accuracy was improved as in the above experiment.",
"cite_spans": [
{
"start": 554,
"end": 575,
"text": "(Kingma and Ba, 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 650,
"end": 658,
"text": "Table 2a",
"ref_id": "TABREF2"
},
{
"start": 1215,
"end": 1223,
"text": "Table 2b",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Classifier",
"sec_num": "4.3"
},
{
"text": "In terms of precision, recall, and F 0.5 , the methods in our study were ranked as FCE+E&GWE-L8 > FCE+EWE-L8 > FCE+E&GWE > FCE+GWE > FCE+EWE > FCE+W2V > FCE+C&W. Error patterns and grammaticality Table 3 : Numbers of correct instances for typical error types.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "consistently improved the accuracy of grammatical error detection, showing that the proposed methods are effective. Also, our proposed method has a statistically significant difference compared with previous research even without using largescale Lang-8 corpus. It outperformed the preceding state-of-the-art in all evaluation metrics. Table 3 shows the number of correct answers of each model for some typical errors. Error types are taken from the gold label of the FCE-public dataset. First, we analyze verb errors and missing articles, which have the largest differences between the numbers of correct answers of baselines and the proposed methods (see Table 3 (a) and (b)). The proposed methods gave more correct answers for verb errors, whereas FCE+W2V and FCE+C&W gave more correct answers for missing article errors. A possible explanation is that unigram-based error patterns are too powerful for word embeddings to learn other errors that can be learned from the contextual clues.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 343,
"text": "Table 3",
"ref_id": null
},
{
"start": 657,
"end": 664,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Second, we examine the difference made by adding the error patterns extracted from Lang-8 (see Table 3 (b) and (c)): FCE+EWE and FCE+EWE-L8 have the greatest difference in the number of correct answers in noun and noun type errors. FCE+EWE-L8 has more correct answers for noun errors such as suggestion and advice and noun type errors such as time and times. The reason is that Lang-8 includes a wide variety of lexical choice errors of nouns while FCE-public covers only a limited number of error variations. Table 4 demonstrates the examples of error detection of the baseline FCE+W2V and the best proposed method FCE+E&GWE-L8 on the test data. The bus will pick you up right at your hotel entrance. (a) FCE + W2V",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 3",
"ref_id": null
},
{
"start": 510,
"end": 517,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The bus will pick you up right at your hotel entery.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The bus will pick you up right at your hotel entery.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FCE + E&GWE-L8",
"sec_num": null
},
{
"text": "There are shops which sell clothes, food, and books (b) FCE + W2V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold",
"sec_num": null
},
{
"text": "There are shops which sales cloths, foods, and books FCE + E&GWE-L8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold",
"sec_num": null
},
{
"text": "There are shops which sales cloths, foods, and books Gold All the buses and the MTR have air-condition. (c) FCE + W2V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold",
"sec_num": null
},
{
"text": "All the buses and MTR have air-condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold",
"sec_num": null
},
{
"text": "All the buses and MTR have air-condition. and as it can be seen, FCE+E&GWE-L8 detected the error in contrast to FCE+W2V. Noun type errors are presented in Table 4 (b). Here, FCE+W2V did not detect any error, while FCE+E&GWE-L8 could detect the mass noun error, frequently found in a learner corpus. Detection of \"sale\" and \"cloths\" was failed in both models, but they are hard to detect since the former requires syntactic information and the latter involves common knowledge. In Table 4 (c), FCE+W2V succeeded in detection of a missing article error, but FCE+E&GWE-L8 did not. Even though proposed word embeddings learn substitution errors effectively, they cannot properly learn insertion and deletion errors. It is our future work to extend word embeddings to include these types of errors and focus on contextual errors that are difficult to deal with the model, for example, missing articles. Figure 3 visualizes word embeddings (FCE+W2V and FCE+E&GWE-L8) of frequently occurring errors in learning data using t-SNE. We plot prepositions and some typical verbs 5 , where FCE+E&GWE-L8 showed better results compared to FCE+W2V. Proportional to the frequency of errors, the position of the word embeddings of FCE+E&GWE-L8 changes in comparison with that of FCE+W2V. For example, FCE+E&GWE-L8 learned the embeddings of high-frequency words such as was and could differently from FCE+W2V. On the other hand, low-frequency words such as under and walk were learned similarly. Also, almost all words shown in this figure move to the upper right. These visualization can be used to analyze errors made by learners. ",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 480,
"end": 487,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 898,
"end": 906,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "FCE + E&GWE-L8",
"sec_num": null
},
{
"text": "In this study, we proposed word embeddings that can improve grammatical error detection accuracy by considering grammaticality and error patterns. We achieved the state-of-the-art accuracy on the FCE-public dataset using a Bi-LSTM model initialized with the proposed word embeddings. The word embeddings trained on a learner corpus can distinguish between correct and incorrect phrase pairs. In addition, we conducted experiments using a large-scale Lang-8 corpus. As a result, we showed that it is better to extract error patterns from such a corpus to train word embeddings than simply add Lang-8 corpus as a training data to train a classifier. We analyzed the detection results for some typical error types and showed the characteristics of learned word representations. We hope that the learned word embeddings are general enough to be of use to help NLP applications to language learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The similarity of the phrase pairs was calculated based on the similarity of the mean vector of the word vectors.2 http://lang-8.com/ 3 https://github.com/kanekomasahiro/grammatical-errordetection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mmihaltz/word2vec-GoogleNewsvectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This dataset includes modal verbs as verb errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Yangyang Xi of Lang-8, Inc. for kindly allowing us to use the Lang-8 learner corpus. We also thank the anonymous reviewers for their insightful comments. This work was partially supported by JSPS Grant-in-Aid for Young Scientists (B) Grant Number JP16K16117.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic text scoring using neural networks",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Alikaniotis",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "715--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neu- ral networks. In ACL. pages 715-725.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.3005"
]
},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adapting grammatical error correction based on the native language of writers with neural network joint models",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Duc",
"middle": [
"Tam"
],
"last": "Hoang",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1901--1911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt, Duc Tam Hoang, and Hwee Tou Ng. 2016a. Adapting grammatical error correction based on the native language of writers with neu- ral network joint models. In EMNLP. pages 1901- 1911.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural network translation models for grammatical error correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Kaveh",
"middle": [],
"last": "Taghipour",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.00189"
]
},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016b. Neural network translation models for grammatical error correction. arXiv preprint arXiv:1606.00189 .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML. pages 160-167.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493-2537.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional LSTM and other neural network architectures. Neu- ral Networks 18(5):602-610.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Detecting errors in English article usage by non-native speakers. Natural Language Engineering",
"authors": [
{
"first": "Na-Rae",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "115--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineer- ing. pages 115-129.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Detecting learner errors in the choice of content words using compositional distributional semantics",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Kochmar",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2014,
"venue": "COL-ING",
"volume": "",
"issue": "",
"pages": "1740--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekaterina Kochmar and Ted Briscoe. 2014. Detect- ing learner errors in the choice of content words us- ing compositional distributional semantics. In COL- ING. pages 1740-1751.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SRL-based verb selection for ESL",
"authors": [
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Kuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Stephan",
"middle": [
"Hyeonjun"
],
"last": "Stiller",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1068--1076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohua Liu, Bo Han, Kuan Li, Stephan Hyeonjun Stiller, and Ming Zhou. 2010. SRL-based verb se- lection for ESL. In EMNLP. pages 1068-1076.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mining revision log of language learning SNS for automated Japanese error correction of second language learners",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoya Mizumoto, Mamoru Komachi, Masaaki Na- gata, and Yuji Matsumoto. 2011. Mining revi- sion log of language learning SNS for automated Japanese error correction of second language learn- ers. In IJCNLP. pages 147-155.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Evaluating performance of grammatical error detection to maximize learning effect",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Kazuhide",
"middle": [],
"last": "Nakatani",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "894--900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryo Nagata and Kazuhide Nakatani. 2010. Evaluating performance of grammatical error detection to max- imize learning effect. In COLING. pages 894-900.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The CoNLL-2014 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In CoNLL Shared Task. pages 1-14.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Compositional sequence labeling models for error detection in learner writing",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1181--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Composi- tional sequence labeling models for error detection in learner writing. In ACL. pages 1181-1191.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A learner corpus-based approach to verb suggestion for ESL",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sawai",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "708--713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sawai, Mamoru Komachi, and Yuji Matsumoto. 2013. A learner corpus-based approach to verb sug- gestion for ESL. In ACL. pages 708-713.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The ups and downs of preposition error detection in ESL writing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Joel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2008,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "865--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel R Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In COLING. pages 865-872.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural language correction with character-based attention",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Avati",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.09727"
]
},
"num": null,
"urls": [],
"raw_text": "Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Ju- rafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727 .",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL-HLT",
"volume": "",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL-HLT. pages 180-189.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Visualization of word embeddings by FCE+W2V and FCE+E&GWE-L8. The red color represents the word of FCE+W2V and the blue represents FCE+E&GWE-L8.",
"type_str": "figure",
"num": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"6\">: Results of grammatical error detection by Bi-LSTM. Asterisks indicate that there is a significant</td></tr><tr><td colspan=\"6\">difference for the confidence interval 0.95 for the P, R and F 0.5 against FCE + W2V (our reimplementa-</td></tr><tr><td colspan=\"2\">tion of (R&amp;Y, 2016)).</td><td/><td/><td/><td/></tr><tr><td/><td>Error type</td><td colspan=\"4\">Verb Missing-article Noun Noun type</td></tr><tr><td>(a)</td><td>FCE + W2V FCE + C&amp;W</td><td>56 53</td><td>48 46</td><td>26 24</td><td>9 7</td></tr><tr><td/><td>FCE + EWE</td><td>60</td><td>37</td><td>29</td><td>12</td></tr><tr><td colspan=\"2\">(b) FCE + GWE</td><td>62</td><td>43</td><td>29</td><td>11</td></tr><tr><td/><td>FCE + E&amp;GWE</td><td>64</td><td>40</td><td>31</td><td>14</td></tr><tr><td>(c)</td><td>FCE + EWE-L8 FCE + E&amp;GWE-L8</td><td>66 67</td><td>36 40</td><td>37 39</td><td>19 18</td></tr><tr><td/><td>Total number of errors</td><td>131</td><td>112</td><td>77</td><td>32</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "(a) shows an example of a noun error, Bi-LSTM + embeddings Detection result Gold",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Examples of error detection by FCE+W2V and FCE+E&GWE-L8. Gold corrections in italic, and detected errors in bold.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}