{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:33:08.890098Z" }, "title": "LinggleWrite: a Coaching System for Essay Writing", "authors": [ { "first": "Chung-Ting", "middle": [], "last": "Tsai", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jhih-Jie", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Ching-Yu", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents LinggleWrite, a writing coach that provides writing suggestions, assesses writing proficiency levels, detects grammatical errors, and offers corrective feedback in response to user's essay. The method involves extracting grammar patterns, training models for automated essay scoring (AES) and grammatical error detection (GED), and finally retrieving plausible corrections from a n-gram search engine. Experiments on public test sets indicate that both AES and GED models achieve state-of-the-art performance. These results show that LinggleWrite is potentially useful in helping learners improve their writing skills.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents LinggleWrite, a writing coach that provides writing suggestions, assesses writing proficiency levels, detects grammatical errors, and offers corrective feedback in response to user's essay. 
The method involves extracting grammar patterns, training models for automated essay scoring (AES) and grammatical error detection (GED), and finally retrieving plausible corrections from an n-gram search engine. Experiments on public test sets indicate that both the AES and GED models achieve state-of-the-art performance. These results show that LinggleWrite is potentially useful in helping learners improve their writing skills.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Essay writing has been an essential part of language assessments (e.g., TOEFL, IELTS) but a challenging task for most students. Writing a good essay not only requires sustained practice but also demands instructional feedback from teachers. However, pressed by heavy teaching loads, teachers can only provide limited corrective feedback on students' essays. This has encouraged the development of computer-assisted writing systems to meet the growing need for automated feedback as a means of writing coaching. Computer Assisted Language Learning (CALL) has been an active field of computational linguistics and pedagogy. Some existing computer-aided writing systems detect and correct grammatical errors, and give an overall score (e.g., Grammarly (www.grammarly.com) and Pigai (www.pigai.org)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Instead of directly correcting users' essays, Write&Improve (writeandimprove.com) only marks highly likely incorrect words, on the grounds that automated grammatical error correction is still very imprecise. Recently, researchers have begun to apply neural network models to both automated essay scoring (AES) and grammatical error detection (GED), achieving significant improvements (e.g., Dong et al. (2017) ; Rei and S\u00f8gaard (2018) ). 
However, these Web services fall short of providing sufficient \"coaching\" information (e.g., grammar patterns, collocations, examples) to help learners improve their writing skills.", "cite_spans": [ { "start": 387, "end": 405, "text": "Dong et al. (2017)", "ref_id": "BIBREF7" }, { "start": 408, "end": 430, "text": "Rei and S\u00f8gaard (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Providing writing suggestions as a user types or edits is another emerging approach to coaching the learner. For example, WriteAhead (writeahead.nlpweb.org) provides context-sensitive suggestions right in the process of writing or self-editing. Google recently released Smart Compose, which offers users word or phrase completion suggestions while writing an email (Chen et al., 2019) .", "cite_spans": [ { "start": 375, "end": 394, "text": "(Chen et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In line with these systems, we suggest that feedback on learners' writing could be more effective if a system not only acts as an editor providing direct corrections, but also as a coach performing grammatical error detection and offering interactive suggestions (Hearst, 2015) . Moreover, illustrating word usage with bilingual examples can better help non-native English learners. This would enhance learners' self-editing skills and pave the way to lifelong language learning.", "cite_spans": [ { "start": 266, "end": 280, "text": "(Hearst, 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With that in mind, we developed a web-based system, LinggleWrite (f.linggle.com), with many assistive writing functions. 
With LinggleWrite, users can write or paste their essays and get informative feedback, including just-in-time writing suggestions, essay scoring, error detection, and related word usage information retrieved from Linggle (linggle.com).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system consists of four components: (1) Interactive Writing Suggestion, (2) Essay Scoring, (3) Grammatical Error Detection, and (4) Corrective Feedback. The first component, Interactive Writing Suggestion, helps users with word usage information while they write. The other three components are aimed at providing evaluation and constructive feedback after a user finishes writing. The system is available at f.linggle.com. We describe each component below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The LinggleWrite System", "sec_num": "2" }, { "text": "When a user begins to write an essay, the system responds with prompts of related grammar patterns, collocations, and bilingual examples. These continuous writing suggestions are based on the last word or phrase the user has entered. Additionally, the user can get information about a certain word by mousing over it. For example, suggestions for \"finish\" are shown in Section A of Figure 1 (bottom left). Once finished writing, the user can click the Check button to trigger the following components.", "cite_spans": [], "ref_spans": [ { "start": 379, "end": 387, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Interactive Writing Suggestion", "sec_num": "2.1" }, { "text": "After accepting an essay longer than 30 words, LinggleWrite assesses the user's writing proficiency. 
The assessment is provided in the form of CEFR levels (A1-C2), as shown in Section B of Figure 1 (top right).", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Essay Scoring", "sec_num": "2.2" }, { "text": "1 https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Essay Scoring", "sec_num": "2.2" }, { "text": "LinggleWrite tries to detect potential grammatical errors in each sentence. Sentences with potential errors are marked with a yellow (1 possible error) or orange (2 or more possible errors) background, as shown in Section C of Figure 1 (center right). The user can click on an erroneous sentence to see the GED results. LinggleWrite marks suspicious words with orange, red or green, indicating that a word should be inserted, deleted, or replaced, respectively, as shown in Section C of Figure 1 (center right). Subsequently, the user can click on an error to display plausible corrective suggestions returned by an n-gram search engine.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 233, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 485, "end": 493, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Grammatical Error Detection", "sec_num": "2.3" }, { "text": "We present corrective suggestions according to the context and the edit type (i.e., insertion, deletion, replacement), using an existing linguistic search engine, Linggle (Boisson et al., 2013) . An example of corrective suggestions for the sentence \"I finished school on June\" is shown in Section E of Figure 1 (bottom right). LinggleWrite detects that \"on\" probably requires a replacement edit. 
We convert the detected error into a Linggle query to search for more appropriate expressions, and provide the user with the search result \"school in June\" for consideration.", "cite_spans": [ { "start": 171, "end": 193, "text": "(Boisson et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 303, "end": 311, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Corrective Feedback", "sec_num": "2.4" }, { "text": "To develop LinggleWrite, we extract the most common grammar patterns from a corpus in order to provide writing suggestions. Additionally, we develop models for AES and GED based on annotated learner corpora. We retrieve corrective feedback by querying a linguistic search engine according to the predicted edit type of an error. We describe the process in detail in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "We extract grammar patterns, collocations, and bilingual examples for keywords from a given corpus to provide writing suggestions in the interactive writing session. Our extraction process includes four steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Grammar Patterns", "sec_num": "3.1" }, { "text": "In Step 1, we build a dictionary of grammar patterns of verbs, nouns and adjectives based on Francis et al. (1996) . For example, the grammar patterns of the word play are V n, V n in n, etc.", "cite_spans": [ { "start": 90, "end": 111, "text": "Francis et al. (1996)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Extracting Grammar Patterns", "sec_num": "3.1" }, { "text": "In Step 2, we parse sentences from the Corpus of Contemporary American English (COCA) and the Cambridge online dictionary (CAM) using a dependency parser to extract grammar patterns and collocations based on the templates in Step 1. For example, the grammar pattern and collocation extracted from the sentence \"Schools play an important role in society\" are \"V n in n\" and \"society\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Grammar Patterns", "sec_num": "3.1" }, { "text": "In Step 3, for each keyword, we count patterns and collocations and filter them based on mean and standard deviation. Finally, we use the GDEX method (Kilgarriff et al., 2008) to select the best monolingual and bilingual examples from COCA and CAM for each pattern.", "cite_spans": [ { "start": 147, "end": 172, "text": "(Kilgarriff et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Extracting Grammar Patterns", "sec_num": "3.1" }, { "text": "We formulate AES as a regression problem and train a neural model for this task. We investigate two neural network architectures with different input formats: word-based models and sentence-based models, which learn essay representations from word sequences and sentence sequences, respectively. We build our word-based models upon CNN, LSTM and Bi-LSTM (Taghipour and Ng, 2016) , and our sentence-based models upon the LSTM-LSTM and LSTM-CNN frameworks (Dong et al., 2017) . Moreover, we further extend both sentence-based and word-based models by adding an attention mechanism after the neural layer, attempting to select the sentences or words to focus on for effective scoring. 
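The attention pooling mentioned above can be illustrated with a minimal, dependency-free sketch of a softmax-weighted average over step vectors; the function name and toy inputs are ours, and in the actual models the attention scores are learned from the hidden states rather than supplied by hand.

```python
import math

def attention_pool(vectors, scores):
    """Softmax-weighted average of step vectors (words or sentences).

    vectors: list of equal-length float lists (one per step)
    scores:  one raw attention score per step (learned in a real model)
    """
    m = max(scores)                      # subtract max for numerical stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]       # softmax over the steps
    dim = len(vectors[0])
    # weighted sum per dimension -> a single pooled vector
    return [sum(w * v[d] for w, v in zip(weights, vectors)) for d in range(dim)]
```

With equal scores this reduces to a plain average; a step with a much higher score dominates the pooled representation, which is what lets the model "focus on" particular words or sentences when scoring.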
Our models are similar to other sentence-based and word-based neural AES models (e.g., Taghipour and Ng (2016) ; Dong et al. (2017) ), but we use a different training set, EFCAMDAT (Geertzen et al., 2013) , and a different output format, CEFR levels.", "cite_spans": [ { "start": 355, "end": 379, "text": "(Taghipour and Ng, 2016)", "ref_id": "BIBREF18" }, { "start": 452, "end": 471, "text": "(Dong et al., 2017)", "ref_id": "BIBREF7" }, { "start": 774, "end": 797, "text": "Taghipour and Ng (2016)", "ref_id": "BIBREF18" }, { "start": 800, "end": 818, "text": "Dong et al. (2017)", "ref_id": "BIBREF7" }, { "start": 868, "end": 891, "text": "(Geertzen et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Scoring an Essay", "sec_num": "3.2" }, { "text": "We formulate GED as a sequence labeling problem and develop a neural sequence labeling model for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting Grammatical Errors", "sec_num": "3.3" }, { "text": "An existing GED method proposed by Rei and Yannakoudakis (2016) takes tokens as input and predicts whether each token in the sentence is correct. We extend their model by changing the binary error tag schema (Incorrect and Correct) into a more informative DIRC tag schema (Delete, Insert, Replace, and Correct), with the goal of providing learners with more specific suggestions (i.e., the edit type of an error) to revise their essays. We train a GED model based on a Bi-LSTM with a Conditional Random Field (CRF) layer. To improve the GED model, we add Bidirectional Encoder Representations from Transformers (BERT), which significantly outperforms other embedding schemes in many tasks (Devlin et al., 2018) . In addition, we also add a character-based word embedding, Flair, which captures more contextual information (Akbik et al., 2018). 
Our training process is divided into two steps.", "cite_spans": [ { "start": 35, "end": 63, "text": "Rei and Yannakoudakis (2016)", "ref_id": "BIBREF16" }, { "start": 691, "end": 712, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Detecting Grammatical Errors", "sec_num": "3.3" }, { "text": "In Step 1, we convert sentences with error annotations into unedited sentences and DIRC tags (i.e., [-w-] for Delete, tokens preceded by {+w+} for Insert, [-w-]{+w+} for Replace, and tokens with no edit tag for Correct). For example, the sentence \"I believe there are {+a+} lot of [-why-] {+ways+} enjoy [-the-] shopping.\" is converted to \"I believe there are lot of why enjoy the shopping .\" and \"C C C C I C R C D C C\". These two sequences are treated as the input and output of a neural GED model, respectively. Note that the token to be inserted ({+a+}) is not in the unedited sentence, so the token to its right, lot, is labeled I instead.", "cite_spans": [ { "start": 288, "end": 295, "text": "[-why-]", "ref_id": null }, { "start": 311, "end": 318, "text": "[-the-]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Detecting Grammatical Errors", "sec_num": "3.3" }, { "text": "In Step 2, we train a neural GED model using a BiLSTM-CRF architecture. We first combine BERT embeddings (Devlin et al., 2018) with Flair embeddings (Akbik et al., 2018) to form word embeddings, and then encode each token in a given sentence into a fixed-length vector. 
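The Step 1 conversion can be sketched in a few lines, assuming the bracket notation used in the example above; the function name and the regular expressions are ours, not part of the released system.

```python
import re

def dirc_convert(annotated):
    """Turn an edit-annotated sentence into (unedited_tokens, dirc_labels)."""
    tokens, labels = [], []
    pending_insert = False
    parts = annotated.split()
    i = 0
    while i < len(parts):
        part = parts[i]
        m_ins = re.fullmatch(r"\{\+(.+)\+\}", part)
        m_del = re.fullmatch(r"\[-(.+)-\]", part)
        if m_ins:
            # Inserted token: absent from the unedited sentence;
            # the next kept token will be labeled I instead.
            pending_insert = True
        elif m_del:
            if i + 1 < len(parts) and re.fullmatch(r"\{\+.+\+\}", parts[i + 1]):
                # [-old-] {+new+} is a replacement: keep old, label it R,
                # and consume the {+new+} half of the pair.
                tokens.append(m_del.group(1))
                labels.append("R")
                i += 1
            else:
                # Bare [-old-] is a deletion: keep old, label it D.
                tokens.append(m_del.group(1))
                labels.append("D")
        else:
            tokens.append(part)
            labels.append("I" if pending_insert else "C")
            pending_insert = False
        i += 1
    return tokens, labels
```

Running this on the paper's example sentence reproduces the unedited token sequence and the DIRC label sequence shown above.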
Finally, these embeddings are fed into a BiLSTM-CRF network to compute and output a DIRC label sequence.", "cite_spans": [ { "start": 140, "end": 161, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 184, "end": 204, "text": "(Akbik et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Detecting Grammatical Errors", "sec_num": "3.3" }, { "text": "To retrieve writing suggestions for detected errors, we design queries for each edit type to search for more plausible corrections using Linggle, a linguistic search engine over a web-scale dataset of one trillion words (Boisson et al., 2013) .", "cite_spans": [ { "start": 218, "end": 240, "text": "(Boisson et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Retrieving Suggestions for Detected Errors", "sec_num": "3.4" }, { "text": "Linggle provides query functions and operators to search word usage in context, as shown in Figure 1 . These query functions enable the system to query zero, one or multiple words. For example, \"play * role\" searches for n-grams with a span of up to three intervening words. We use three operators (\"?\", \"*\", \"_\") to retrieve corrective suggestions for the three edit types, as described below.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Retrieving Suggestions for Detected Errors", "sec_num": "3.4" }, { "text": "Deletion edit: We use the \"?\" operator before a word tagged with \"D\" to search for n-grams with or without the word in question. For example, receiving the sentence \"We discuss about this issue.\" as input, our GED model outputs the sequence \"C C D C C C\". 
Then, we generate the query \"discuss ?about this issue\" to search Linggle for corrective suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieving Suggestions for Detected Errors", "sec_num": "3.4" }, { "text": "Insertion edit: We use the \"*\" operator before a word tagged with \"I\" to search for n-grams with additional words around this word. For example, an insertion edit on \"this\" is detected in the sentence \"I am good this sport.\" (the GED model outputs \"C C C I C C\"), and thus a Linggle query is formulated as \"good * this\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieving Suggestions for Detected Errors", "sec_num": "3.4" }, { "text": "Replacement edit: A word tagged with \"R\" indicates that a replacement is required. We first check whether the word is misspelled using the enchant library. If misspelled, we replace the word with candidates suggested by enchant (e.g., 'moey' \u2192 'money/mopey/mosey'). If not, we use the \"_\" operator to search for alternative n-grams. For example, the GED output for the sentence \"The driver did not accept me to get on the bus.\" would be \"C C C C R C C C C C C C\". Thus, we use the query \"not _ me to\" to search for replacements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieving Suggestions for Detected Errors", "sec_num": "3.4" }, { "text": "We used the EF-Cambridge Open Language Database (EFCAMDAT) (Geertzen et al., 2013) to train our AES model. This dataset contains about 1.2 million essays with over 83 million words, written by approximately 174,000 learners across a wide range of CEFR levels (A1-C2). We used the student essays as input and the CEFR level assigned by a grader as output to train the AES model. Note that the distribution of levels is imbalanced. To train the GED model, we use the First Certificate in English dataset (FCE). 
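The per-edit-type query construction of Section 3.4 can be sketched as follows; the helper name and the context-window sizes are our assumptions (the paper varies the amount of surrounding context per example), and the operator semantics follow the description above.

```python
def linggle_query(tokens, labels, left=1, right=2):
    """Build a Linggle query for the first non-C DIRC label.

    "D" -> ?word   optional word: match n-grams with or without it
    "I" -> * word  wildcard for the missing word(s) before this token
    "R" -> _       single-word wildcard in place of the word
    """
    for i, label in enumerate(labels):
        if label == "C":
            continue
        if label == "D":
            piece = ["?" + tokens[i]]
        elif label == "I":
            piece = ["*", tokens[i]]
        else:  # "R"
            piece = ["_"]
        # Surround the operator with a small window of context words.
        context_l = tokens[max(0, i - left):i]
        context_r = tokens[i + 1:i + 1 + right]
        return " ".join(context_l + piece + context_r)
    return None  # no error detected
```

With the paper's deletion example this yields "discuss ?about this issue", and with the replacement example "not _ me to".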
This dataset contains 1,224 essays written by English learners who took the First Certificate in English (FCE) exam. These essays have been manually tagged based on 77 error types (Yannakoudakis et al., 2011) . We used 30,953 sentences from FCE for training, 2,720 for testing, and 2,222 for development. We followed the approach of Rei and Yannakoudakis (2016) in our experiment, but converted the dataset into the DIRC format as described in Section 3.3. Table 3 : Evaluation on the FCE-public test set in the DIRC and binary tasks", "cite_spans": [ { "start": 59, "end": 82, "text": "(Geertzen et al., 2013)", "ref_id": "BIBREF10" }, { "start": 720, "end": 748, "text": "(Yannakoudakis et al., 2011)", "ref_id": "BIBREF20" }, { "start": 873, "end": 901, "text": "Rei and Yannakoudakis (2016)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 993, "end": 1000, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments 4.1 Datasets", "sec_num": "4" }, { "text": "For the AES model, we trained the model using the RMSProp (Dauphin et al., 2015) optimizer with a learning rate of 0.001, and the maximum gradient norm was set to 0.9. We used pre-trained 100-dimensional GloVe vectors (Pennington et al., 2014) as input. The hidden layer size of the LSTM and Bi-LSTM was set to 100. For CNN models, we used a window size of 5 and a hidden layer size of 100. We applied dropout on the neural network layer to avoid overfitting, with the dropout probability set to 0.2. The batch size was 32 and each model was trained for 50 epochs. 
For the GED model, we set the hyperparameters differently from previous work (Rei and Yannakoudakis, 2016) .", "cite_spans": [ { "start": 56, "end": 86, "text": "RMSProp (Dauphin et al., 2015)", "ref_id": null }, { "start": 218, "end": 243, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF14" }, { "start": 623, "end": 652, "text": "(Rei and Yannakoudakis, 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters", "sec_num": "4.2" }, { "text": "We use the publicly available pretrained word embeddings GoogleNews word vectors (word2vec), Flair (Akbik et al., 2018) , and BERT (Devlin et al., 2018) to represent words. Flair embeddings were trained on the 1-billion-word corpus used in Chelba et al. (2013) , and the embedding size (both forward and backward) was 2048. As for BERT, we utilized the bert-base-uncased model, which is trained on the English Wikipedia (2.5G words) and BooksCorpus (0.8G words). We employed a 2-layer Bi-LSTM with a CRF layer for the GED model and set the hidden layer size of the Bi-LSTM to 256. We used the SGD optimizer with a learning rate of 0.01 and the maximum gradient norm set to 1. We applied dropout on both the embedding and Bi-LSTM layers with a dropout probability of 0.5. We trained the network for 150 epochs and selected the model with the highest F1 score on the development set.", "cite_spans": [ { "start": 100, "end": 120, "text": "(Akbik et al., 2018)", "ref_id": "BIBREF0" }, { "start": 134, "end": 155, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 243, "end": 263, "text": "Chelba et al. (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Hyperparameters", "sec_num": "4.2" }, { "text": "For the AES task, we adopted quadratic weighted kappa (QWK) as our evaluation metric, which was used in the Automated Student Assessment Prize (ASAP) competition and several AES studies (Taghipour and Ng, 2016; Vaswani et al., 2017; Dong et al., 2017) . 
For the GED task, we follow previous research by Rei and Yannakoudakis (2016) and use precision, recall and F0.5 to evaluate our GED model. Table 3 presents the results of different GED models on the FCE test set in both the binary and DIRC formats, to compare our results with the state-of-the-art method proposed by Rei and S\u00f8gaard (2018) using the binary schema. Table 3 shows that BiLSTM-CRF+BERT+Flair performs substantially better than the other GED models and achieves state-of-the-art performance on the FCE test set. Interestingly, we note that the model with word2vec pre-trained word embeddings achieves the highest precision but the lowest recall. As for the DIRC schema, BiLSTM-CRF+BERT+Flair performs the best among all models. Importantly, the DIRC model performs comparably to the binary model while providing more informative feedback (i.e., the edit type) for learners to self-edit their essays. It is also worth noting that for GED and GEC tasks, multiple answers are acceptable and there is low inter-annotator agreement (Rozovskaya and Roth, 2010) . Bryant and Ng (2015) pointed out that even human annotators achieve at best an F0.5 score of 72.8 against the gold-standard annotations of multiple annotators in GEC tasks. Thus, it is fair to say that the performance of our model against one gold-standard annotation is underestimated and not far from human performance, and thus acceptable for an application. Table 4 (Average QWK scores on EFCAMDAT) shows the results of different network architectures on the AES task. As we can see in Table 4 , LSTM-LSTM-ATT achieves the best performance among all models. In addition, we find that sentence-level models generally perform better than word-level ones. Furthermore, we also observe that models with an attention mechanism perform slightly better than those without. 
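For reference, quadratic weighted kappa can be computed with a short, dependency-free sketch from the confusion matrix of two integer rating sequences; the function name is ours.

```python
def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """QWK between two equal-length integer rating lists."""
    n = max_rating - min_rating + 1
    # Observed rating co-occurrence matrix.
    O = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        O[x - min_rating][y - min_rating] += 1
    total = len(a)
    # Marginal histograms of each rater.
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)   # quadratic disagreement weight
            E = hist_a[i] * hist_b[j] / total      # expected count under independence
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den
```

Perfect agreement gives 1.0; chance-level agreement gives 0.0, and systematic disagreement goes negative.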
Moreover, the result (a QWK score of 0.957) shows that our neural models predict scores on EFCAMDAT effectively, compared with results reported on other datasets such as the Automated Student Assessment Prize (ASAP). Trained on ASAP, the character-based CNN-LSTM model proposed by Taghipour and Ng (2016) scores a QWK of 0.761, and the sentence-based LSTM-CNN-att model proposed by Taghipour and Ng (2016) achieves a QWK score of 0.764.", "cite_spans": [ { "start": 185, "end": 209, "text": "(Taghipour and Ng, 2016;", "ref_id": "BIBREF18" }, { "start": 210, "end": 231, "text": "Vaswani et al., 2017;", "ref_id": "BIBREF19" }, { "start": 232, "end": 250, "text": "Dong et al., 2017)", "ref_id": "BIBREF7" }, { "start": 306, "end": 334, "text": "Rei and Yannakoudakis (2016)", "ref_id": "BIBREF16" }, { "start": 565, "end": 587, "text": "Rei and S\u00f8gaard (2018)", "ref_id": "BIBREF15" }, { "start": 1286, "end": 1313, "text": "(Rozovskaya and Roth, 2010)", "ref_id": "BIBREF17" }, { "start": 1316, "end": 1336, "text": "Bryant and Ng (2015)", "ref_id": "BIBREF2" }, { "start": 2376, "end": 2399, "text": "Taghipour and Ng (2016)", "ref_id": "BIBREF18" }, { "start": 2476, "end": 2499, "text": "Taghipour and Ng (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 398, "end": 405, "text": "Table 3", "ref_id": null }, { "start": 613, "end": 620, "text": "Table 3", "ref_id": null }, { "start": 1677, "end": 1684, "text": "Table 4", "ref_id": null }, { "start": 1768, "end": 1775, "text": "Table 4", "ref_id": null }, { "start": 2009, "end": 2016, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "In summary, we have presented a writing environment that supports interactive writing suggestions, scoring, error detection and corrective feedback. For the interactive writing task, we provide grammar pattern suggestions, collocations, and bilingual examples to guide the user towards writing fluently. For the GED task, we proposed a new label schema, DIRC. 
Experiments show that the proposed label schema achieves comparable performance (on the binary task) while providing more informative feedback. In addition, we leverage an existing linguistic search engine to provide corrective suggestions for each error type. Many avenues exist for future research and improvement of our system. For example, introducing additional training data or generating artificial training data could improve performance. An interesting direction to explore is re-ranking corrective suggestions, so that the suggestions most relevant to the original sentence go to the top. Yet another direction of research would be to detect fine-grained error types. Finally, our system currently provides Chinese translations for English examples; we could easily support other languages by changing the bilingual dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://github.com/AbiWord/enchant", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/google-research/bert#pre-trained-models", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.kaggle.com/c/asap-aes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the Ministry of Science and Technology, Taiwan, under Grant No. 109-2639-M-007-001-, No. 
109-2634 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649. Association for Computational Linguis- tics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Linggle: a webscale linguistic search engine for words in context", "authors": [ { "first": "Joanne", "middle": [], "last": "Boisson", "suffix": "" }, { "first": "Ting-Hui", "middle": [], "last": "Kao", "suffix": "" }, { "first": "Jian-Cheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Tzu-Hsi", "middle": [], "last": "Yen", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "139--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joanne Boisson, Ting-Hui Kao, Jian-Cheng Wu, Tzu- Hsi Yen, and Jason S Chang. 2013. Linggle: a web- scale linguistic search engine for words in context. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 139-144.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "How far are we from fully automatic high quality grammatical error correction?", "authors": [ { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "697--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Bryant and Hwee Tou Ng. 2015. How far are we from fully automatic high quality grammati- cal error correction? In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 697-707.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "One billion word benchmark for measuring progress in statistical language modeling", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Ge", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. 
Technical report, Google.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gmail smart compose: Real-time assisted writing", "authors": [ { "first": "Mia", "middle": [ "Xu" ], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [ "N" ], "last": "Lee", "suffix": "" }, { "first": "Gagan", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Shuyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jackie", "middle": [], "last": "Tsay", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "2287--2295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mia Xu Chen, Benjamin N Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M Dai, Zhifeng Chen, et al. 2019. Gmail smart compose: Real-time assisted writing.
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2287-2295.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Equilibrated adaptive learning rates for nonconvex optimization", "authors": [ { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" }, { "first": "Harm", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", "volume": "1", "issue": "", "pages": "1504--1512", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. 2015. Equilibrated adaptive learning rates for nonconvex optimization. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, pages 1504-1512, Cambridge, MA, USA. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Attention-based recurrent convolutional neural network for automatic essay scoring", "authors": [ { "first": "Fei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/K17-1017" ] }, "num": null, "urls": [], "raw_text": "Fei Dong, Yue Zhang, and Jie Yang. 2017. Attention-based recurrent convolutional neural network for automatic essay scoring. In Proceedings of the 21st", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Conference on Computational Natural Language Learning", "authors": [], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "153--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference on Computational Natural Language Learning (CoNLL 2017), pages 153-162. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Grammar patterns 1: verbs", "authors": [ { "first": "Gill", "middle": [], "last": "Francis", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Hunston", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Manning", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gill Francis, Susan Hunston, and Elizabeth Manning. 1996. Grammar patterns 1: verbs.
NY: HarperCollins Publication.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic linguistic annotation of large scale l2 databases: The ef-cambridge open language database", "authors": [ { "first": "Jeroen", "middle": [], "last": "Geertzen", "suffix": "" }, { "first": "Theodora", "middle": [], "last": "Alexopoulou", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of SLRF", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeroen Geertzen, Theodora Alexopoulou, and Anna Korhonen. 2013. Automatic linguistic annotation of large scale l2 databases: The ef-cambridge open language database. In Proceedings of SLRF 2012.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Can natural language processing become natural language coaching?", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1245--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A Hearst. 2015. Can natural language processing become natural language coaching?
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1245-1252.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Gdex: Automatically finding good dictionary examples in a corpus", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Milos", "middle": [], "last": "Hus\u00e1k", "suffix": "" }, { "first": "Katy", "middle": [], "last": "McAdam", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rundell", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Rychl\u00fd", "suffix": "" } ], "year": 2008, "venue": "Proc. Euralex", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff, Milos Hus\u00e1k, Katy McAdam, Michael Rundell, and Pavel Rychl\u00fd. 2008. Gdex: Automatically finding good dictionary examples in a corpus. In Proc. Euralex.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality.
In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Jointly learning to label sentences and tokens", "authors": [ { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marek Rei and Anders S\u00f8gaard. 2018. Jointly learning to label sentences and tokens. CoRR, abs/1811.05949.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Compositional sequence labeling models for error detection in learner writing", "authors": [ { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1181--1191", "other_ids": { "DOI": [ "10.18653/v1/P16-1112" ] }, "num": null, "urls": [], "raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Compositional sequence labeling models for error detection in learner writing.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1181-1191. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Annotating ESL errors: Challenges and rewards", "authors": [ { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 fifth workshop on innovative use of NLP for building educational applications", "volume": "", "issue": "", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alla Rozovskaya and Dan Roth. 2010. Annotating ESL errors: Challenges and rewards. In Proceedings of the NAACL HLT 2010 fifth workshop on innovative use of NLP for building educational applications, pages 28-36. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A neural approach to automated essay scoring", "authors": [ { "first": "Kaveh", "middle": [], "last": "Taghipour", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1882--1891", "other_ids": { "DOI": [ "10.18653/v1/D16-1193" ] }, "num": null, "urls": [], "raw_text": "Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1882-1891.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A new dataset and method for automatically grading ESOL texts", "authors": [ { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Medlock", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "180--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 180-189.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "A screenshot of the LinggleWrite system", "uris": null }, "TABREF1": { "type_str": "table", "content": "", "text": "Query operator instruction", "num": null, "html": null }, "TABREF2": { "type_str": "table", "content": "
", "text": "", "num": null, "html": null }, "TABREF3": { "type_str": "table", "content": "", "text": "Description of the EFCAMDAT dataset", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
                            | Binary Task       | DIRC Task
Model                       | Incorrect tag     | Insertion tag     | Replacement tag   | Deletion tag
                            | Prec. Rec.  F0.5  | Prec. Rec.  F0.5  | Prec. Rec.  F0.5  | Prec. Rec.  F0.5
Rei and S\u00f8gaard (2018)  | 65.5  28.6  52    | -     -     -     | -     -     -     | -     -     -
BiLSTM-CRF + word2vec       | 89    13.8  42.6  | 57.2  12.1  32.9  | 82.9  22.4  53.9  | 67.6  3.1   13.2
BiLSTM-CRF + Flair          | 68.9  24.6  50.7  | 53.8  20.2  40.4  | 72.8  28.3  55.4  | 59.6  10.1  30.17
BiLSTM-CRF + BERT           | 71.1  35.7  59.4  | 53.2  23.8  42.7  | 73.1  36.1  60.7  | 53.9  24.1  43.2
BiLSTM-CRF + BERT + Flair   | 72.3  36.7  60.6  | 54.6  25.3  44.3  | 73.5  40.6  63.3  | 59    24.9  46.3
", "text": "GED results (precision, recall, and F0.5) on the binary task and on the DIRC task per tag type", "num": null, "html": null } } } }