{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:32:59.251551Z" }, "title": "Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision", "authors": [ { "first": "Wanyu", "middle": [], "last": "Du", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" }, { "first": "Zae", "middle": [ "Myung" ], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Minnesota", "location": { "addrLine": "3 Grammarly" } }, "email": "" }, { "first": "Vipul", "middle": [], "last": "Raheja", "suffix": "", "affiliation": { "laboratory": "Text Revision System Source Doc Source Doc Edit Suggestions", "institution": "", "location": {} }, "email": "vipul.raheja@grammarly.com" }, { "first": "Dhruv", "middle": [], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "Text Revision System Source Doc Source Doc Edit Suggestions", "institution": "", "location": {} }, "email": "dhruv.kumar@grammarly.com" }, { "first": "Dongyeop", "middle": [], "last": "Kang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Minnesota", "location": { "addrLine": "3 Grammarly" } }, "email": "dongyeop@umn.edu" }, { "first": "Source", "middle": [], "last": "Doc", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" }, { "first": "Revised", "middle": [], "last": "Doc", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Revision is an essential part of the human writing process. It tends to be strategic, adaptive, and, more importantly, iterative in nature. Despite the success of large language models on text revision tasks, they are limited to non-iterative, one-shot revisions. Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants. In this work, we present a human-inthe-loop iterative text revision system, Read, Revise, Repeat (R3), which aims at achieving high quality text revisions with minimal human efforts by reading model-generated revisions and user feedbacks, revising documents, and repeating human-machine interactions. In R3, a text revision model provides text editing suggestions for human writers, who can accept or reject the suggested edits. The accepted edits are then incorporated into the model for the next iteration of document revision. Writers can therefore revise documents iteratively by interacting with the system and simply accepting/rejecting its suggested edits until the text revision model stops making further revisions or reaches a predefined maximum number of revisions. Empirical experiments show that R3 can generate revisions with comparable acceptance rate to human writers at early revision depths, and the human-machine interaction can get higher quality revisions with fewer iterations and edits. The collected human-model interaction dataset and system code are available at https://github. com/vipulraheja/IteraTeR. Our system demonstration is available at https:// youtu.be/lK08tIpEoaE.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Revision is an essential part of the human writing process. It tends to be strategic, adaptive, and, more importantly, iterative in nature. 
Despite the success of large language models on text revision tasks, they are limited to non-iterative, one-shot revisions. Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants. In this work, we present a human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3), which aims to achieve high-quality text revisions with minimal human effort by reading model-generated revisions and user feedback, revising documents, and repeating human-machine interactions. In R3, a text revision model provides text editing suggestions for human writers, who can accept or reject the suggested edits. The accepted edits are then incorporated into the model for the next iteration of document revision. Writers can therefore revise documents iteratively by interacting with the system and simply accepting/rejecting its suggested edits until the text revision model stops making further revisions or reaches a predefined maximum number of revisions. Empirical experiments show that R3 can generate revisions with an acceptance rate comparable to that of human writers at early revision depths, and that human-machine interaction can yield higher-quality revisions with fewer iterations and edits. The collected human-model interaction dataset and system code are available at https://github.com/vipulraheja/IteraTeR. Our system demonstration is available at https://youtu.be/lK08tIpEoaE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text revision is a crucial part of writing. Specifically, text revision involves identifying discrepancies between intended and instantiated text, deciding what edits to make, and how to make those desired edits (Flower and Hayes, 1981; Faigley and Witte, 1981; Fitzgerald, 1987). It enables writers to deliberate over and organize their thoughts, find a better line of argument, learn afresh, and discover what was not known before (Sommers, 1980; Scardamalia, 1986). Previous studies (Flower, 1980; Collins and Gentner, 1980; Vaughan and McDonald, 1986) have shown that text revision is an iterative process, since human writers are unable to simultaneously comprehend the many demands and constraints of the task when producing well-written texts, such as covering the content and following the linguistic norms and discourse conventions of written prose.
Therefore, writers resort to performing text revisions on their drafts iteratively, reducing the number of considerations at a time.", "cite_spans": [ { "start": 214, "end": 238, "text": "(Flower and Hayes, 1981;", "ref_id": "BIBREF9" }, { "start": 239, "end": 263, "text": "Faigley and Witte, 1981;", "ref_id": "BIBREF4" }, { "start": 264, "end": 281, "text": "Fitzgerald, 1987)", "ref_id": "BIBREF7" }, { "start": 436, "end": 451, "text": "(Sommers, 1980;", "ref_id": "BIBREF19" }, { "start": 452, "end": 470, "text": "Scardamalia, 1986)", "ref_id": "BIBREF17" }, { "start": 490, "end": 504, "text": "(Flower, 1980;", "ref_id": "BIBREF8" }, { "start": 505, "end": 531, "text": "Collins and Gentner, 1980;", "ref_id": "BIBREF2" }, { "start": 532, "end": 559, "text": "Vaughan and McDonald, 1986)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Computational modeling of the iterative text revision process is essential for building intelligent and interactive writing assistants. Most prior works on the development of neural text revision systems (Botha et al., 2018; Ito et al., 2019; Faltings et al., 2021) do not take the iterative nature of text revision and human feedback on suggested revisions into consideration. The direct application of such revision systems in an iterative way, however, could generate \"noisy\" edits and place a heavy burden on human writers to fix the noise. Therefore, we propose to collect human feedback at each iteration of revision to filter out those harmful noisy edits and produce revised documents of higher quality.", "cite_spans": [ { "start": 204, "end": 223, "text": "Botha et al., 2018;", "ref_id": "BIBREF0" }, { "start": 224, "end": 241, "text": "Ito et al., 2019;", "ref_id": "BIBREF11" }, { "start": 242, "end": 264, "text": "Faltings et al., 2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present a novel human-in-the-loop iterative text revision system, Read, Revise, Repeat (R3), which reads model-generated revisions and user feedback, revises documents, and repeats human-machine interactions in an iterative way, as depicted in Figure 1. First, users write a document as input to the system or choose one to edit from a set of candidate documents. Then, the text revision system provides multiple editing suggestions together with their edits and intents. Users can accept or reject the editing suggestions iteratively, and stop revising when no editing suggestions are provided or the model reaches the maximum revision limit. The overall model performance can be estimated by calculating the acceptance rate over all editing suggestions.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 269, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "R3 provides numerous benefits over existing writing assistants for text revision.
First, R3 improves the overall writing experience for writers by making it more interpretable, controllable, and productive: on the one hand, writers don't have to (re-)read the parts of the text that are already of high quality, and this, in turn, helps them focus on larger writing goals (\u00a74.2); on the other hand, by showing the edit intention for every suggested edit, which users can then decide to accept or reject, R3 provides them with more fine-grained control over the text revision process than other one-shot text revision systems (Lee et al., 2022), which are limited in both interpretability and controllability. Second, R3 improves revision efficiency: human-machine interaction can help the system produce higher-quality revisions with fewer iterations and edits, and the empirical experiments in \u00a74.2 validate this claim. To the best of our knowledge, R3 is the first text revision system in the literature that performs iterative text revision as a collaboration between human writers and revision models.", "cite_spans": [ { "start": 636, "end": 654, "text": "(Lee et al., 2022)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we make three major contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a novel human-in-the-loop text revision system, R3, to make text revision models more accessible, and to make the process of iterative text revision efficient, productive, and cognitively less challenging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 From an HCI perspective, we conduct experiments to measure the effectiveness of the proposed system for the iterative text revision task. Empirical experiments show that R3 can generate edits with an acceptance rate comparable to that of human writers at early revision depths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We analyze the data collected from human-model interactions for text revision and provide insights and future directions for building high-quality and efficient human-in-the-loop text revision systems. We release our code, revision interface, and the collected human-model interaction dataset to promote future research on collaborative text revision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous works on modeling text revision (Botha et al., 2018; Ito et al., 2019; Faltings et al., 2021) have ignored the iterative nature of the task and simplified it into a one-shot \"original-to-final\" sentence-to-sentence generation task. In practice, however, multiple edits happen at the document level at every revision step, and these also play an important role in text revision: for instance, reordering and deleting sentences to improve coherence. More importantly, performing multiple high-quality edits at once is very challenging. Continuing the previous example, document readability can degrade after reordering sentences, and transitional phrases often have to be added afterwards to make the document more coherent and readable.
Therefore, a one-shot sentence-to-sentence formulation is not sufficient to deal with the real-world challenges of text revision tasks.", "cite_spans": [ { "start": 41, "end": 60, "text": "Botha et al., 2018;", "ref_id": "BIBREF0" }, { "start": 61, "end": 78, "text": "Ito et al., 2019;", "ref_id": "BIBREF11" }, { "start": 79, "end": 101, "text": "Faltings et al., 2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "While some prior works on text revision (Coenen et al., 2021; Padmakumar and He, 2021; Gero et al., 2021; Lee et al., 2022) have proposed human-machine collaborative writing interfaces, they mostly focus on collecting human-machine interaction data for training better neural models, rather than on understanding the iterative nature of the text revision process or the model's ability to adjust editing suggestions according to human feedback.", "cite_spans": [ { "start": 40, "end": 61, "text": "(Coenen et al., 2021;", "ref_id": "BIBREF1" }, { "start": 62, "end": 86, "text": "Padmakumar and He, 2021;", "ref_id": null }, { "start": 87, "end": 105, "text": "Gero et al., 2021;", "ref_id": "BIBREF10" }, { "start": 106, "end": 123, "text": "Lee et al., 2022)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another line of work, by Sun et al. (2021) and Singh et al. (2022), designed human-machine interaction interfaces for creative writing that encourage the generation of new content. Text revision, in contrast, focuses on improving the quality of existing writing while preserving the original content as much as possible. In this work, we provide a human-in-the-loop text revision system that makes helpful editing suggestions by interacting with users in an iterative way.", "cite_spans": [ { "start": 24, "end": 41, "text": "Sun et al. (2021)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 System Overview. Figure 1 shows the general pipeline of the R3 human-in-the-loop iterative text revision system. In this section, we describe the development details of the text revision models and demonstrate our user interface.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We first formulate the iterative text revision process: given a source document $D_{t-1}$, at each revision depth $t$, a text revision system applies a set of edits to obtain the revised document $D_t$. The system continues iterating until the revised document $D_t$ satisfies a set of predefined stopping criteria, such as reaching a predefined maximum revision depth $t_{max}$, or making no edits between $D_{t-1}$ and $D_t$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We follow the prior work of Du et al. (2022) to build our text revision system. The system is composed of two edit intention identification models and a text revision generation model. We follow the same data collection procedure as Du et al. (2022) to collect the iterative revision data. Then, we train the three models on the collected revision dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Revision System", "sec_num": "3.1" },
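As a concrete illustration of this iterative formulation, and of the acceptance rate used to estimate model performance, consider the minimal sketch below; `revision_system` and `get_user_decision` are hypothetical placeholders, not the released implementation.

```python
# Minimal sketch of the R3 iterative revision loop; `revision_system` and
# `get_user_decision` are hypothetical placeholders, not the released code.
T_MAX = 3  # predefined maximum revision depth

def iterate_revision(document, revision_system, get_user_decision):
    """Revise `document` until no edits are suggested or T_MAX is reached."""
    n_accepted = n_suggested = 0
    for t in range(1, T_MAX + 1):
        edits = revision_system.suggest_edits(document)  # edits with intents
        if not edits:
            break  # stopping criterion: no edits between D_{t-1} and D_t
        accepted = [e for e in edits if get_user_decision(e)]  # user feedback
        n_accepted += len(accepted)
        n_suggested += len(edits)
        # Only the user-accepted edits are applied to form D_t.
        document = revision_system.apply_edits(document, accepted)
    acceptance_rate = n_accepted / n_suggested if n_suggested else 0.0
    return document, acceptance_rate
```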
{ "text": "Edit Intention Identification Models. Following Du et al. (2022), our edit intentions have four categories: FLUENCY, COHERENCE, CLARITY, and STYLE. We apply our edit intention identification models to each sentence of the source document $D_{t-1}$ to capture more fine-grained edits. Specifically, given a source sentence, the system makes two-step predictions: (1) whether or not to edit, and (2) which edit intention to apply. The decision of whether or not to edit is made by an edit-prediction classifier, which predicts a binary edit/not-edit label for the sentence. The second model, called the edit-intention classifier, predicts which edit intention to apply to the sentence. If the edit-prediction model predicts \"not to edit\" in the first step, the source sentence is kept unchanged at the current revision depth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Revision System", "sec_num": "3.1" }, { "text": "Text Revision Generation Model. We build the text revision generation model by fine-tuning a large pre-trained language model, PEGASUS (Zhang et al., 2020), on our collected revision dataset. Given a source sentence and its predicted edit intention, the model generates a revised sentence conditioned on that intention. Then, we concatenate all unrevised and revised sentences to obtain the model-revised document $D_t$, and extract all of its edits using latexdiff and difflib. In summary, at each revision depth $t$, given a source document $D_{t-1}$, the text revision system first predicts whether each sentence needs revising; for the sentences that do, it predicts the corresponding fine-grained edit intentions, thus generating the revised document $D_t$ based on the source document and the predicted edit decisions and intentions.", "cite_spans": [ { "start": 94, "end": 114, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Text Revision System", "sec_num": "3.1" }, { "text": "In practice, not all model-generated edits are equally impactful in improving document quality (Du et al., 2022). Therefore, we enable user interaction in the iterative text revision process to achieve high-quality text revisions along with a productive writing experience. At each revision depth $t$, our system provides the user with the suggested edits and their corresponding edit intentions. The user can interact with the system by choosing to accept or reject the suggested edits. Figure 2 illustrates the details of R3's user interface. First, a user enters their ID to log in to the web interface, as shown in Figure 2a. Then, the user is shown a few guidelines on how to operate the revision interface, as demonstrated in Figure 2b. After getting familiar with the interface, the user can select a source document from the left dropdown menu in Figure 2c. After clicking on the source document, all the edits predicted by the text revision model, along with their corresponding edit intentions, show up on the main page, as illustrated in Figure 2d (left panel). The user is guided to go through each suggested edit and choose to accept or reject it by clicking the Confirm button in Figure 2d (right panel). After going through all the suggested edits, the user is guided to click the Submit button to save their decisions on the edits. Then, the user is guided to click the Next Iteration!
button to proceed to the next revision depth and check the next round of edits suggested by the system. This interactive process continues until the system does not generate further edits or reaches the maximum revision depth $t_{max}$.", "cite_spans": [ { "start": 104, "end": 121, "text": "(Du et al., 2022)", "ref_id": null } ], "ref_spans": [ { "start": 496, "end": 504, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 625, "end": 634, "text": "Figure 2a", "ref_id": "FIGREF2" }, { "start": 738, "end": 747, "text": "Figure 2b", "ref_id": "FIGREF2" }, { "start": 862, "end": 871, "text": "Figure 2c", "ref_id": "FIGREF2" }, { "start": 1056, "end": 1065, "text": "Figure 2d", "ref_id": "FIGREF2" }, { "start": 1217, "end": 1226, "text": "Figure 2d", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Human-in-the-loop Revision", "sec_num": "3.2" }, { "text": "We conduct experiments to answer the following research questions: RQ1: How likely are users to accept the editing suggestions predicted by our text revision system? This question evaluates whether our text revision system can generate high-quality edits. RQ2: Which types of edit intentions are more likely to be accepted by users? This question identifies which types of edits users favor. RQ3: Does user feedback in R3 help produce higher-quality revised documents? This question validates the effectiveness of the human-in-the-loop component of R3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Iterative Revision Systems. We prepare three types of iterative revision systems to answer the above questions: 1. HUMAN-HUMAN: We ask users to accept or reject text revisions made by human writers, directly sampled from our collected iterative revision dataset. This serves as the baseline to measure the gap between our text revision system and human writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setups", "sec_num": "4.1" }, { "text": "We ask users to accept or reject text revisions made by our system. Then, we incorporate the user-accepted edits into the system to generate the next iteration of revision. This is the standard human-in-the-loop process of R3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM-HUMAN:", "sec_num": "2." }, { "text": "We conduct an ablation study by removing user interaction from the review of model-generated edits. Then, we compare the overall quality of the final revised documents with and without the human-in-the-loop component. In both the HUMAN-HUMAN and SYSTEM-HUMAN setups, where users interacted with the system, they were not informed whether the revisions were sampled from our collected iterative revision dataset or generated by the underlying text revision models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "User Study Design. We hired three linguistic experts (English L1, bachelor's or higher degree in Linguistics) to interact with our text revision system. Each user was presented with a text revision (as shown in Figure 2d) and asked to accept or reject each edit in the current revision (users were informed which revision depth they were looking at). For a fair comparison, users were not informed about the source of the edits (human-written vs. model-generated), and the experiments were conducted separately one after the other.
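To make the SYSTEM-HUMAN setup concrete, the sketch below shows how one revision iteration of the \u00a73.1 pipeline could consume a user's accept/reject decisions; the local checkpoint paths, label names, and the `user_accepts` callback are hypothetical stand-ins, not the released R3 models.

```python
# Hypothetical sketch of one SYSTEM-HUMAN iteration; the checkpoint paths
# and label names are illustrative stand-ins for the fine-tuned models.
from transformers import pipeline

edit_predictor = pipeline("text-classification", model="./edit-prediction")    # edit vs. not-edit
intent_classifier = pipeline("text-classification", model="./edit-intention")  # FLUENCY/COHERENCE/CLARITY/STYLE
reviser = pipeline("text2text-generation", model="./pegasus-revision")         # intent-conditioned PEGASUS

def revise_once(sentences, user_accepts):
    """Suggest one round of sentence edits and keep only those the user accepts."""
    revised = []
    for sent in sentences:
        if edit_predictor(sent)[0]["label"] == "not-edit":
            revised.append(sent)  # keep unchanged at this depth
            continue
        intent = intent_classifier(sent)[0]["label"]
        # Condition generation on the predicted intent via a special token prefix.
        candidate = reviser(f"<{intent}> {sent}")[0]["generated_text"]
        revised.append(candidate if user_accepts(sent, candidate, intent) else sent)
    return revised
```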
Note that the users were only asked to accept or reject edits; they had control over neither the number of iterations nor the stopping criteria. The stopping criteria for the experiment were set by us: (1) no new edits are made at the following revision depth, or (2) the maximum revision depth $t_{max} = 3$ is reached. Data Details. We followed the prior work (Du et al., 2022) to collect the text revision data across three domains: ArXiv, Wikipedia, and Wikinews. This data was then used to train both the edit intention identification models and the text revision generation model. We split the data into training, validation, and test sets according to their document ids with a ratio of 8:1:1. (Table 1: Statistics for our collected revision data, which is used to train the edit intention identification models and the text revision generation model. # Docs is the total number of unique documents, Avg. Depths is the average revision depth per document (for the human-generated training data), and # Edits is the total number of edits (sentence pairs) across the corpus.)", "cite_spans": [ { "start": 919, "end": 936, "text": "(Du et al., 2022)", "ref_id": null } ], "ref_spans": [ { "start": 211, "end": 220, "text": "Figure 2d", "ref_id": "FIGREF2" }, { "start": 1228, "end": 1235, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "The detailed data statistics are included in Table 1. Note that our newly collected revision dataset is larger than the dataset previously proposed in Du et al. (2022), with around 24K more unique documents and 170K more edits (sentence pairs). For the human evaluation data, we randomly sampled 10 documents with a maximum revision depth of 3 from each domain in the test set in Table 1. For the evaluation of text revisions made by human writers (HUMAN-HUMAN), we presented the existing ground-truth references from our collected dataset to users. Since we did not hire additional human writers to perform continuous revisions, we presented the static human revisions from the original test set to users at each revision depth, and collected the user acceptance statistics as a baseline for our system.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "For the evaluation of text revisions made by our system (SYSTEM-HUMAN), we presented only the original source document at the initial revision depth ($D_0$) to our system, and let the system generate edits at the following revision depths while incorporating the users' accept/reject decisions on the model-generated edit suggestions. Note that at each revision depth, the system only incorporates the edits accepted by users and passes them on to the next revision iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "For text revisions made by our system without a human in the loop (SYSTEM-ONLY), we let the system generate edits in an iterative way and accepted all model-generated edits at each revision depth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "Model Details. For both edit intention identification models, we fine-tuned the RoBERTa-large pre-trained checkpoint from HuggingFace (Wolf et al., 2020) for 2 epochs with a learning rate of 1 \u00d7 10^-5 and a batch size of 16.
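The classifier fine-tuning setup just described could look like the sketch below; the tokenized `train_data` and `dev_data` objects and the output path are assumed for illustration, not taken from the released code.

```python
# Sketch of the RoBERTa-large classifier fine-tuning described above
# (2 epochs, learning rate 1e-5, batch size 16). Dataset preparation is
# elided: `train_data` and `dev_data` are assumed to be already tokenized.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-large")  # used to tokenize the datasets
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2)  # 2 labels for edit-prediction; 4 for edit-intention

args = TrainingArguments(
    output_dir="./edit-prediction",  # hypothetical output path
    num_train_epochs=2,
    learning_rate=1e-5,
    per_device_train_batch_size=16,
)
Trainer(model=model, args=args,
        train_dataset=train_data, eval_dataset=dev_data).train()
```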
(Table 2: Human-in-the-loop iterative text revision evaluation results. $t$ stands for the revision depth, # Docs shows the total number of revised documents at the current revision depth, Avg. Edits indicates the average number of applied edits per document, Avg. Accepts means the average number of edits accepted by users per document, and % Accepts is calculated by dividing the total number of accepted edits by the total number of applied edits.)", "cite_spans": [ { "start": 135, "end": 154, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "The edit-prediction classifier is a binary classification model that predicts whether or not to edit a given sentence. It achieves an F1 score of 67.33 for the edit label and 79.67 for the not-edit label. The edit-intention classifier predicts the specific intent for a sentence that requires editing. It achieves F1 scores of 67.14, 70.27, 57.0, and 3.21 for the CLARITY, FLUENCY, COHERENCE, and STYLE intent labels, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYSTEM-ONLY:", "sec_num": "3." }, { "text": "For the text revision generation model, we fine-tuned the PEGASUS-LARGE (Zhang et al., 2020) pre-trained checkpoint from HuggingFace. We set the edit intentions as new special tokens (e.g.,