{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:25.224718Z" }, "title": "A Visualization Approach for Rapid Labeling of Clinical Notes for Smoking Status Extraction", "authors": [ { "first": "Saman", "middle": [], "last": "Enayati", "suffix": "", "affiliation": {}, "email": "saman.enayati@temple.edu" }, { "first": "Ziyu", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Benjamin", "middle": [], "last": "Lu", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Labeling is typically the most human-intensive step during the development of supervised learning models. In this paper, we propose a simple and easy-to-implement visualization approach that reduces cognitive load and increases the speed of text labeling. The approach is fine-tuned for task of extraction of patient smoking status from clinical notes. The proposed approach consists of the ordering of sentences that mention smoking, centering them at smoking tokens, and annotating to enhance informative parts of the text. Our experiments on clinical notes from the MIMIC-III clinical database demonstrate that our visualization approach enables human annotators to label sentences up to 3 times faster than with a baseline approach.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Labeling is typically the most human-intensive step during the development of supervised learning models. In this paper, we propose a simple and easy-to-implement visualization approach that reduces cognitive load and increases the speed of text labeling. The approach is fine-tuned for task of extraction of patient smoking status from clinical notes. The proposed approach consists of the ordering of sentences that mention smoking, centering them at smoking tokens, and annotating to enhance informative parts of the text. Our experiments on clinical notes from the MIMIC-III clinical database demonstrate that our visualization approach enables human annotators to label sentences up to 3 times faster than with a baseline approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Deep learning algorithms achieve state-of-the-art accuracy on a range of natural language processing tasks. However, to achieve high accuracy, deep learning algorithms typically require a lot of labeled data. In extremely error-sensitive applications, such as those in the medical domain, the trade-off between labeling effort and prediction accuracy is strongly skewed towards maximizing the accuracy. In such applications, data labeling arises as the most costly and human-intensive step during the development of deep learning models. In this paper, we focus on a scenario where the requirement is to label all available data because the goal is to maximize the accuracy using the available corpus of documents. In such a scenario, none of the labeling shortcuts developed in the machine learning community such as active learning are of much help on their own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our focus is on presenting textual information to human annotators in a way that minimizes their cognitive load, thus improving their focus, and maximizes their labeling speed, thus reducing the cost of labeling. 
Our proposed visualization approach is fine-tuned to enable text labeling in the specific application where the objective is to extract information about the smoking status of patients from their medical notes. The smoking status of patients is critical information in many practical applications, ranging from recruiting participants for clinical trials to determining medical and life insurance premiums for prospective customers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Smoking status extraction is a specific instance of an information extraction problem. Our visualization approach relies on several key observations about this particular type of problem. We first observed that smoking status can typically be extracted from sentences that contain one of the smoking keywords such as smoke, smoking, tobacco, nicotine. Thus, our first step was to extract from the corpus only the sentences containing one of those keywords. Our second observation was that smoking status can typically be deduced from the several words surrounding the keyword. Thus, it might be possible to prune very long sentences to sub-sentences surrounding the keyword without loss of information. This observation allows reserving only a single line to display each relevant sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our third observation is that the space of possible smoking-related sentences occurring in clinical notes is relatively limited and that for any smoking-related sentence there are likely very similar sentences in the corpus. We hypothesized that displaying similar sentences next to each other would allow human annotators to process the text much faster than if the sentences were shown in random order. Our fourth observation is that some common discriminative keywords, such as denies, quit, former, and packs, reveal the smoking status. We hypothesized that highlighting those keywords in the text could allow a human annotator to work faster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our final observation was that training a predictive model on the currently available labels, even when the number of available labels is relatively small, would likely result in prediction accuracy that is significantly higher than a baseline that assigns labels randomly or based on the majority class. Thus, providing the labels obtained by the current prediction model would allow a human annotator to skip the correctly labeled sentences and only enter labels for the incorrectly labeled ones. As the number of labels grows, the accuracy of the prediction model is expected to increase and the effort to correct the labels to decrease, thus increasing the speed of labeling. Figure 1: An illustration of the proposed sequence visualization approach for rapid labeling. The predicted labels for each sentence are shown inside the yellow boxes, where N refers to Non-Smoker, F to Former Smoker, and S to Smoker. Only the 5th sentence in the bottom panel is misclassified by the current prediction model and has to be overwritten by a human annotator.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The resulting visualization approach, developed by exploiting the stated observations, is illustrated in Figure 1.
A panel at the top shows 7 randomly selected smoking-related sentences from our corpus. A panel at the bottom shows the same sentences displayed using our approach. The main features of our visualization approach are (1) sentence ordering, (2) sentence centering around the smoking keyword, (3) text annotation to emphasize discriminative keywords, and (4) display of the predicted labels. We claim, and our user study (described in Section 4) confirms, that the bottom panel makes it much easier and faster for a human annotator to label a large corpus of smoking-related sentences for the smoking status of a patient.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To produce the bottom panel in Figure 1, we had to decide (1) what the smoking keywords are, (2) which keywords are discriminative of the smoking status, (3) how to order the sentences, (4) how to provide predicted labels, (5) what to do during the cold start when no or very few sentences are labeled, and (6) how to implement the visualization approach. Details about the proposed approach are provided in Section 3. In Section 2 we provide a brief overview of the related work. In Section 4 we describe the experimental design, explain our user study, and provide experimental results that convincingly indicate the usefulness of the proposed approach.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Extracting the smoking status of patients from Electronic Health Records [EHR] is crucial in clinical settings, and especially useful to health care providers in selecting the best care plan for patients at risk of smoking-related diseases. (Rajendran and Topaloglu, 2020) investigate the application of three deep learning models to EHR data to extract the smoking status of patients. The authors compare their approach with traditional machine learning models on both binary (Smoker vs. Non-Smoker) and multi-class (Current Smoker vs. Former Smoker vs. Non-Smoker) classification tasks. (Wang et al., 2016) extract smoking status from three different sources: narrative texts, patient-provided information, and diagnosis codes. They conclude that narrative text is the most useful source for smoking status extraction. (Palmer et al., 2019; Hegde et al., 2018) develop rule-based algorithms to determine tobacco use by patients. (Palmer et al., 2019) further identify the cessation date and smoking intensity of patients. Common to the aforementioned work on smoking status extraction is the need to label sentences and train an appropriate machine learning model. None of those papers discusses issues related to labeling or attempts to reduce labeling costs.", "cite_spans": [ { "start": 69, "end": 74, "text": "[EHR]", "ref_id": null }, { "start": 584, "end": 603, "text": "(Wang et al., 2016)", "ref_id": "BIBREF16" }, { "start": 834, "end": 855, "text": "(Palmer et al., 2019;", "ref_id": "BIBREF11" }, { "start": 856, "end": 875, "text": "Hegde et al., 2018)", "ref_id": "BIBREF3" }, { "start": 944, "end": 965, "text": "(Palmer et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A common approach to annotating a large amount of data is through crowdsourcing (Fang et al., 2014; Good and Su, 2013; Lim et al., 2020).
It has been used in a variety of tasks such as image classification (Fang et al., 2014), bioinformatics (Good and Su, 2013), and text mining. Although crowdsourcing is a cost-effective way to collect labeled data, it can still be costly when the required labeling effort is significant. Moreover, when using imperfect annotators with varying levels of expertise, it is important to develop appropriate label integration approaches (Settles, 2011). Beyond the crowdsourcing issues, one popular approach to reducing labeling costs is to apply Active Learning and label only the most informative examples (Fang et al., 2014).", "cite_spans": [ { "start": 78, "end": 97, "text": "(Fang et al., 2014;", "ref_id": "BIBREF1" }, { "start": 98, "end": 116, "text": "Good and Su, 2013;", "ref_id": "BIBREF2" }, { "start": 117, "end": 134, "text": "Lim et al., 2020)", "ref_id": "BIBREF8" }, { "start": 203, "end": 222, "text": "(Fang et al., 2014)", "ref_id": "BIBREF1" }, { "start": 568, "end": 583, "text": "(Settles, 2011)", "ref_id": "BIBREF15" }, { "start": 738, "end": 757, "text": "(Fang et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "More recently, Human-In-the-Loop [HIL] approaches were proposed to improve the efficiency of annotation (Klie et al., 2020; Kim and Pardo, 2018). (Kim and Pardo, 2018) present a HIL system for sound event detection, which directs the annotator's attention to the most promising regions of an audio clip for labeling. (Klie et al., 2020) apply a similar technique to the Entity Linking [EL] task, in which the machine learning component makes recommendations about the most relevant entries in a knowledge base, and the annotator selects the correct candidate. The recommender improves itself based on the obtained feedback. In addition, (Qian et al., 2020) present an interface for entity normalization annotation in which they measure the number of clicks in the tool to quantify the human effort.", "cite_spans": [ { "start": 104, "end": 123, "text": "(Klie et al., 2020;", "ref_id": "BIBREF6" }, { "start": 124, "end": 144, "text": "Kim and Pardo, 2018)", "ref_id": "BIBREF5" }, { "start": 147, "end": 167, "text": "(Kim and Pardo, 2018", "ref_id": "BIBREF5" }, { "start": 319, "end": 338, "text": "(Klie et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "While many papers attempt to minimize labeling effort, the vast majority of them measure the effort by counting the number of labeled examples. Very few papers (Zhang et al., 2019) measure labeling effort in terms of elapsed time.", "cite_spans": [ { "start": 174, "end": 193, "text": "(Zhang et al., 2019", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The uniqueness of our work is in demonstrating that annotation speed can be significantly impacted by the way data is presented to an annotator.
Furthermore, our work is specific in its focus on an extreme labeling scenario where the task is to label the complete corpus in order to maximize the prediction accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Problem Definition: Given a document corpus D representing clinical notes of patients, from which a set of N unlabeled smoking-related sentences S_1, S_2, ..., S_N is extracted, the goal is to ask human annotators to label all N sentences for smoking status. There are 4 types of labels: Smoker (S), Non-Smoker (N), Former Smoker (F), and Other (O), where Other refers to sentences that do not reveal the smoking status.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "In this section, we describe a visualization approach that improves human annotation speed. The main components of the approach are sequence ordering, label prediction, and text visualization. The details are explained in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Our goal is to order sentences in a computationally efficient manner by combining clustering and alignment algorithms. We use clustering to find groups of similar sequences that will subsequently be ordered with the help of an alignment algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ordering", "sec_num": "3.1" }, { "text": "In order to cluster sentences, we rely on their vector embeddings. In particular, we use sequence embeddings of the pre-trained BERT model (Devlin et al., 2019) . K-Means clustering, whose computational cost is O(N) as implemented by (Pedregosa et al., 2011) , is used to find k clusters, where k is selected such that the average cluster size is limited to a specified size.", "cite_spans": [ { "start": 139, "end": 160, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF0" }, { "start": 235, "end": 259, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Ordering", "sec_num": "3.1" }, { "text": "Sentences in each cluster are then ordered, such that neighboring sentences are perceived by a human annotator to be as similar as possible. Rather than ordering sentences based on BERT embeddings, we instead resort to sequence alignment distance, which we hypothesize is closer to human perception of similarity. In particular, we apply the Needleman-Wunsch algorithm [NWA] (Needleman and Wunsch, 1970), a dynamic programming algorithm that finds a similarity score between a pair of sentences in O(L^2) time, where L is the length of a sentence. For each cluster, we create a pairwise score matrix, Score, of size N_c \u00d7 N_c, where N_c is the number of sequences within the cluster c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ordering", "sec_num": "3.1" }, { "text": "To find the order of the sentences in each cluster, we apply the following greedy algorithm. It starts by selecting the first sentence at random. The next sentence is its nearest neighbor, according to the Score matrix. The process continues by repeatedly appending the nearest unvisited neighbor of the most recently added sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ordering", "sec_num": "3.1" }, { "text": "Once the sentences are sorted, our next objective is to display them in a way that reduces the cognitive load of a human annotator.
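Before describing the display itself, we give a compact sketch of the ordering step above. This is a minimal illustration under stated assumptions, not our exact implementation: sentence embeddings (e.g., the BERT sequence embeddings described above) are precomputed, the alignment uses unit match/mismatch/gap costs over whitespace tokens, and each greedy chain starts from the cluster's first sentence rather than a random one. The function names and the avg_cluster_size parameter are ours for illustration.

```python
# Minimal sketch of the ordering step (Section 3.1). Assumptions: sentence
# embeddings are precomputed; alignment uses unit match/mismatch/gap costs;
# each greedy chain starts from the cluster's first sentence (the paper
# starts from a random one). Names here are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two token lists."""
    m, n = len(a), len(b)
    dp = np.zeros((m + 1, n + 1))
    dp[:, 0] = gap * np.arange(m + 1)
    dp[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i, j] = max(dp[i - 1, j - 1] + sub,  # substitute/match
                           dp[i - 1, j] + gap,      # gap in b
                           dp[i, j - 1] + gap)      # gap in a
    return dp[m, n]

def order_sentences(sentences, embeddings, avg_cluster_size=50):
    """Cluster sentences, then greedily chain nearest neighbors per cluster."""
    k = max(1, len(sentences) // avg_cluster_size)
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    ordered = []
    for c in range(k):
        idx = [i for i, lab in enumerate(clusters) if lab == c]
        if not idx:
            continue
        toks = [sentences[i].split() for i in idx]
        score = np.array([[nw_score(x, y) for y in toks] for x in toks])
        chain = [0]  # greedy chain within the cluster
        while len(chain) < len(idx):
            last = chain[-1]
            rest = [j for j in range(len(idx)) if j not in chain]
            chain.append(max(rest, key=lambda j: score[last, j]))
        ordered.extend(idx[j] for j in chain)
    return ordered  # indices into `sentences`, in display order
```

Because k is chosen so that the average cluster size stays bounded, each pairwise Score matrix remains small, which keeps the quadratic number of alignments, each itself quadratic in sentence length, affordable.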
Our first idea is to center the sequences around smoking-related keywords such as Smoke, Smoking, Tobacco, Nicotine. We find those keywords by applying word2vec (Mikolov et al., 2013) to our document corpus D and by finding neighbors of the word Smoke in the resulting embedding. Then, we manually select the neighbors that are indicative of smoking-related sentences.", "cite_spans": [ { "start": 293, "end": 315, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "Given the maximum screen width, we align the sentences such that the smoking keyword appears in the middle of the screen. In addition, we fill the empty space before the sentence starts with dashes (-) to improve readability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "Our labeling approach proceeds in batches. After selecting the first batch of M unlabeled sentences at random (in our experiments we use M = 200), we display neither predicted labels nor any ordering. After we obtain labels for the first batch, we train a baseline machine learning model such as logistic regression using the bag-of-words representation (in our experiments we used the 500 most frequent non-stop words). Then, we analyze the statistical significance of the logistic regression weights and select the K words associated with the most significant weights as discriminative words. Examples of discriminative keywords are cigarette, denies, quit, former, packs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "We select the second batch of unlabeled sentences at random, order them, and display them centered, with the discriminative words in bold red font to improve readability. In addition, we display the labels predicted by the logistic regression next to the ordered sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "Rather than building a specialized sentence visualization and annotation tool, we use MS Excel. Each sentence occupies one row in the Excel spreadsheet, where the first column is reserved for predicted labels and the second column for the centered annotated sentences. An advantage of Excel is that its built-in cell drag feature makes it quick to change the annotations of neighboring sentences. In addition, we use Courier as the font, since it is monospaced: each character occupies the same amount of horizontal space, which makes the alignment and centering precise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "We continue selecting batches, labeling them, and retraining the prediction models. Once the number of labels becomes sufficiently large (1,000 in our experiments), we replace logistic regression with deep learning.
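To make the display operations concrete, the sketch below shows one possible implementation of the dash-padded centering and of the discriminative-word selection. It is a minimal sketch, not our exact implementation: it assumes a fixed screen width in characters, and it ranks words by the magnitude of their logistic regression weights as a simple stand-in for the significance analysis described above; all names and parameters are illustrative.

```python
# Minimal sketch of the display step (Section 3.2): dash-padded centering
# around the smoking keyword, plus selection of discriminative words from a
# bag-of-words logistic regression. Weight magnitude is used here as a
# stand-in for the significance analysis in the paper; names are illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

SMOKING_KEYWORDS = {"smoke", "smoking", "tobacco", "nicotine"}

def center_on_keyword(sentence, width=120):
    """Pad with dashes so the smoking keyword lands near the screen center."""
    tokens = sentence.split()
    pos = next((i for i, t in enumerate(tokens)
                if t.lower().strip(".,;:") in SMOKING_KEYWORDS), 0)
    prefix_len = len(" ".join(tokens[:pos]))
    pad = max(0, width // 2 - prefix_len - 1)
    return "-" * pad + " " + sentence if pad else sentence

def discriminative_words(labeled_sentences, labels, vocab_size=500, top_k=20):
    """Pick the words with the largest logistic regression weights."""
    vec = CountVectorizer(max_features=vocab_size, stop_words="english")
    X = vec.fit_transform(labeled_sentences)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    weight = np.abs(clf.coef_).max(axis=0)  # strongest weight per word
    top = np.argsort(weight)[::-1][:top_k]
    vocab = vec.get_feature_names_out()
    return [vocab[i] for i in top]
```

In a monospaced font such as Courier, the dash padding places every smoking keyword in the same screen column, so vertically adjacent (and, after ordering, similar) sentences line up exactly.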
We also allow the batches to become larger over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Visualization", "sec_num": "3.2" }, { "text": "We performed our experiments using 52,726 discharge notes from the MIMIC-III dataset (Johnson et al., 2016), which contains de-identified records of intensive care unit patients at the Beth Israel Deaconess Medical Center from 2001 to 2012.", "cite_spans": [ { "start": 85, "end": 107, "text": "(Johnson et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "We defined the smoking-related keywords by selecting the keyword smoke and its manually chosen word2vec nearest neighbors. We collected 26 unique keywords. Using those keywords, we found 34,149 unique matching sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "4" }, { "text": "We evaluated the effectiveness of our proposed approach in three different rounds of labeling. We performed a user study with 2 human annotators (the first two co-authors of this paper) to measure labeling time in each of the 3 rounds of labeling. Each user annotated a total of 3,000 sentences in our experiments. In addition, in Section 4.3, we perform an ablation study to analyze the impact of the different components of the proposed visualization approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "In addition to labeling time, we also report the labeling rate, which is the number of sentences labeled per minute:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "Rate = # of labeled sentences / labeling time (in minutes). In the following subsections, we explain the basics of each baseline method as well as the experimental design for each round of labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "In this round of the experiment, we selected 200 random sentences. We displayed them in the same way as shown in the upper panel of Figure 1 . Once we obtained the labels for the first batch, we trained a logistic regression model. The first row of Table 1 shows the annotation details.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 1", "ref_id": null }, { "start": 248, "end": 255, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Round 1", "sec_num": "4.1.1" }, { "text": "We asked users to annotate 800 sentences in 4 batches. We chose a Latin square design, proceeding with unordered, ordered, ordered, and unordered batches. We also used the logistic regression model to predict the labels for all the batches. Table 1 shows the results of this round.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Round 2", "sec_num": "4.2" }, { "text": "On average, the annotation rate using our method is 1.9\u00d7 that of Round 1. Additionally, it is 1.5\u00d7 faster than the unordered set in Round 2. By repeating the annotation task in batches 3 and 4, the rate improved by 15% with our method (from 33 to 38) and by 14% in the unordered set (from 21 to 24).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Round 2", "sec_num": "4.2" }, { "text": "We annotated 2,000 sentences in 4 batches, each batch containing 500 sentences.
Similar to Round 2, we set up the experiments with the Latin square design (unordered, ordered, ordered, unordered).", "cite_spans": [ { "start": 165, "end": 205, "text": "(unordered, ordered, ordered, unordered)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Round 3", "sec_num": "4.2.1" }, { "text": "Given the annotated data from Rounds 1 and 2, we replaced the classifier with a deep learning algorithm. We used Clinical BERT, which is pretrained on all the discharge summary notes in the MIMIC dataset. We split the data into 800 sentences for training and 200 for testing. The hyperparameters were selected according to (Devlin et al., 2019) . We set the batch size to 16, the learning rate to 2e\u22125, and the maximum sentence length to 200, and fine-tuned for 4 epochs. We also performed experiments with SVM and logistic regression. Table 3 shows the performance of all the classifiers.", "cite_spans": [ { "start": 310, "end": 331, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 516, "end": 523, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Round 3", "sec_num": "4.2.1" }, { "text": "According to Table 2 , the annotation rate increased from Round 2 to Round 3 by 29% (from 35.5 to 46) with our approach, while it increased by 20% (from 22.5 in Round 2 to 27 in Round 3) with the baseline approach.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Round 3", "sec_num": "4.2.1" }, { "text": "Comparing the annotation speed in Round 3, our approach is 1.7\u00d7 faster than the baseline (46 compared to 27). Since the size of the batches increased in Round 3, there was more redundancy among the sentences, and our approach was more helpful to the annotators than in Round 2. In particular, the ordering resulted in smoother transitions between sentences, which contributed to faster human annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Round 3", "sec_num": "4.2.1" }, { "text": "Last but not least, by repeating the labeling task, we expected users to get used to the data, and therefore we expected the annotation rate to increase regardless of the visualization approach. Confirming this assumption, users on average got 19% faster with our method during Round 3 (the rate increased from 42 to 50), while they got only 7% faster with the baseline approach (the rate increased from 26 to 28). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Round 3", "sec_num": "4.2.1" }, { "text": "In this section, we analyze the impact of two components of our system on the final annotation rate. We asked one of the users to annotate an additional 1,000 sentences. We split the set into two groups, each with 500 samples. First, we studied the impact of centering: we aligned all sentences to the left while keeping the ordering and feature visualization. Second, we removed the feature visualization component and kept the ordering and centering. Table 4 shows the results of these two experiments. Table 4 : Ablation study on the impact of centering and feature visualization. In the first row, we do not center the sentences around the smoke keywords. 
In the second row, we do not highlight the important features.", "cite_spans": [], "ref_spans": [ { "start": 465, "end": 472, "text": "Table 4", "ref_id": null }, { "start": 517, "end": 524, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.3" }, { "text": "According to the results for Round 2 in Table 2 , the highest rate for User 2 was 24 sentences per minute. However, when we removed the centering component, the rate decreased by 8%, to 22 per minute. In addition, by removing the coloring component, the rate decreased by 4%, to 23 per minute.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 49, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.3" }, { "text": "The centering component had a stronger impact on the labeling rate than the coloring component. However, both of the removals reduced the rate of labeling. Given the annotated data from the ablation study, and adding all the labeled data from the first and second rounds, we re-trained all the classifiers on 3,400 training sentences and used 600 sentences for testing. We observed 15% improvement in the BERT model accuracy and 3% improvement in the Logistic Regression model accuracy compared to the models trained on Round data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.3" }, { "text": "We presented a visualization approach that enables rapid annotation of sentences for smoking status of patients. Our framework contains three main components: sentence ordering, sentence presentation, and sentence labeling by the prediction model. Our approach does not depend on high-quality ML predictors to provide initial labels. The display has a significant impact on speeding up the annotation process. We evaluated our visualization approach with a user study on sentences from MIMIC-III discharge summaries. We achieved close to 3\u00d7 faster annotation rate compared to the baseline method that displayed sentences randomly in their original shape. As the annotation progressed, as the batches of unlabeled sentences became larger, and as the prediction models improved, the annotation speed kept increasing in our user experiments. The proposed visualization approach is applicable to similar text classification tasks. 
It is a topic of further research to study how to modify the presented approach to make it applicable to a large number of text annotation tasks in natural language processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://emboss.sourceforge.net/docs/ emboss_tutorial/node3.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.microsoft.com/en-us/ microsoft-365/excel", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Active learning for crowdsourcing using knowledge transfer", "authors": [ { "first": "Meng", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Dacheng", "middle": [], "last": "Tao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "28", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Fang, Jie Yin, and Dacheng Tao. 2014. Active learning for crowdsourcing using knowledge trans- fer. Proceedings of the AAAI Conference on Artifi- cial Intelligence, 28(1).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Crowdsourcing for bioinformatics", "authors": [ { "first": "M", "middle": [], "last": "Benjamin", "suffix": "" }, { "first": "Andrew", "middle": [ "I" ], "last": "Good", "suffix": "" }, { "first": "", "middle": [], "last": "Su", "suffix": "" } ], "year": 2013, "venue": "Bioinformatics", "volume": "29", "issue": "16", "pages": "1925--1933", "other_ids": { "DOI": [ "10.1093/bioinformatics/btt333" ] }, "num": null, "urls": [], "raw_text": "Benjamin M. Good and Andrew I. Su. 2013. Crowd- sourcing for bioinformatics. 
Bioinformatics, 29(16):1925-1933.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tobacco use status from clinical notes using natural language processing and rule based algorithm", "authors": [ { "first": "Harshad", "middle": [], "last": "Hegde", "suffix": "" }, { "first": "Neel", "middle": [], "last": "Shimpi", "suffix": "" }, { "first": "Ingrid", "middle": [], "last": "Glurich", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Acharya", "suffix": "" } ], "year": 2018, "venue": "Technology and Health Care", "volume": "26", "issue": "3", "pages": "445--456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harshad Hegde, Neel Shimpi, Ingrid Glurich, and Amit Acharya. 2018. Tobacco use status from clin- ical notes using natural language processing and rule based algorithm. Technology and Health Care, 26(3):445-456.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mimiciii, a freely accessible critical care database", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Alistair", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "J", "middle": [], "last": "Tom", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "H Lehman", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Mengling", "middle": [], "last": "Li-Wei", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Ghassemi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Moody", "suffix": "" }, { "first": "Leo", "middle": [ "Anthony" ], "last": "Szolovits", "suffix": "" }, { "first": "Roger G", "middle": [], "last": "Celi", "suffix": "" }, { "first": "", "middle": [], "last": "Mark", "suffix": "" } ], "year": 2016, "venue": "Scientific data", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic- iii, a freely accessible critical care database. Scien- tific data, 3:160035.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A human-in-theloop system for sound event detection and annotation", "authors": [ { "first": "Bongjun", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Pardo", "suffix": "" } ], "year": 2018, "venue": "ACM Transactions on Interactive Intelligent Systems (TiiS)", "volume": "8", "issue": "2", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bongjun Kim and Bryan Pardo. 2018. A human-in-the- loop system for sound event detection and annota- tion. ACM Transactions on Interactive Intelligent Systems (TiiS), 8(2):1-23.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "From Zero to Hero: Human-In-The-Loop Entity Linking in Low Resource Domains", "authors": [ { "first": "Jan-Christoph", "middle": [], "last": "Klie", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Eckart De Castilho", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6982--6993", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.624" ] }, "num": null, "urls": [], "raw_text": "Jan-Christoph Klie, Richard Eckart de Castilho, and Iryna Gurevych. 2020. 
From Zero to Hero: Human- In-The-Loop Entity Linking in Low Resource Do- mains. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6982-6993, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A neural model for aggregating coreference annotation in crowdsourcing", "authors": [ { "first": "Maolin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5760--5773", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.507" ] }, "num": null, "urls": [], "raw_text": "Maolin Li, Hiroya Takamura, and Sophia Ananiadou. 2020. A neural model for aggregating corefer- ence annotation in crowdsourcing. In Proceedings of the 28th International Conference on Compu- tational Linguistics, pages 5760-5773, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Annotating and analyzing biased sentences in news articles using crowdsourcing", "authors": [ { "first": "Sora", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Jatowt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "F\u00e4rber", "suffix": "" }, { "first": "Masatoshi", "middle": [], "last": "Yoshikawa", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1478--1484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sora Lim, Adam Jatowt, Michael F\u00e4rber, and Masatoshi Yoshikawa. 2020. Annotating and ana- lyzing biased sentences in news articles using crowd- sourcing. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 1478- 1484, Marseille, France. European Language Re- sources Association.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", "authors": [ { "first": "B", "middle": [], "last": "Saul", "suffix": "" }, { "first": "Christian", "middle": [ "D" ], "last": "Needleman", "suffix": "" }, { "first": "", "middle": [], "last": "Wunsch", "suffix": "" } ], "year": 1970, "venue": "Journal of molecular biology", "volume": "48", "issue": "3", "pages": "443--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saul B Needleman and Christian D Wunsch. 1970. A general method applicable to the search for simi- larities in the amino acid sequence of two proteins. 
Journal of molecular biology, 48(3):443-453.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Building a tobacco user registry by extracting multiple smoking behaviors from clinical notes. BMC medical informatics and decision making", "authors": [ { "first": "L", "middle": [], "last": "Ellen", "suffix": "" }, { "first": "Saeed", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "John", "middle": [], "last": "Hassanpour", "suffix": "" }, { "first": "Jennifer", "middle": [ "A" ], "last": "Higgins", "suffix": "" }, { "first": "Tracy", "middle": [], "last": "Doherty", "suffix": "" }, { "first": "", "middle": [], "last": "Onega", "suffix": "" } ], "year": 2019, "venue": "", "volume": "19", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen L Palmer, Saeed Hassanpour, John Higgins, Jen- nifer A Doherty, and Tracy Onega. 2019. Building a tobacco user registry by extracting multiple smok- ing behaviors from clinical notes. BMC medical in- formatics and decision making, 19(1):1-10.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An intuitive user interface for human-in-the-loop entity name parsing and entity variant generation", "authors": [ { "first": "Lucian", "middle": [], "last": "Kun Qian", "suffix": "" }, { "first": "Yunyao", "middle": [], "last": "Popa", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of (DaSH@KDD). Association for Computing Machinery", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Qian, Lucian Popa, and Yunyao Li. 2020. An intuitive user interface for human-in-the-loop entity name parsing and entity variant generation. In Pro- ceedings of (DaSH@KDD). 
Association for Com- puting Machinery.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extracting smoking status from electronic health records using nlp and deep learning", "authors": [ { "first": "Suraj", "middle": [], "last": "Rajendran", "suffix": "" }, { "first": "Umit", "middle": [], "last": "Topaloglu", "suffix": "" } ], "year": 2020, "venue": "AMIA Summits on Translational Science Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suraj Rajendran and Umit Topaloglu. 2020. Extracting smoking status from electronic health records using nlp and deep learning. AMIA Summits on Transla- tional Science Proceedings, 2020:507.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1467--1478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2011. Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In Proceedings of the 2011 Confer- ence on Empirical Methods in Natural Language Processing, pages 1467-1478, Edinburgh, Scotland, UK. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Comparison of three information sources for smoking information in electronic health records", "authors": [ { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoyang", "middle": [], "last": "Ruan", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Cancer informatics", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liwei Wang, Xiaoyang Ruan, Ping Yang, and Hong- fang Liu. 2016. Comparison of three information sources for smoking information in electronic health records. Cancer informatics, 15:CIN-S40604.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "How to invest my time: Lessons from human-in-the-loop entity extraction", "authors": [ { "first": "Shanshan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "He", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "" }, { "first": "Slobodan", "middle": [], "last": "Vucetic", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery amp", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3292500.3330773" ] }, "num": null, "urls": [], "raw_text": "Shanshan Zhang, Lihong He, Eduard Dragut, and Slo- bodan Vucetic. 2019. How to invest my time: Lessons from human-in-the-loop entity extraction. In Proceedings of the 25th ACM SIGKDD Inter- national Conference on Knowledge Discovery amp;", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Data Mining, KDD '19", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "2305--2313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Data Mining, KDD '19, page 2305-2313, New York, NY, USA. 
Association for Computing Machinery.", "links": null } }, "ref_entries": { "TABREF1": { "text": "The annotation results in Round 1 and 2. The experiments are conducted in the same order as the numbers indicate. Each group contains 200 sentences. Unordered refers to the baseline, and Ordered is our visualization approach.", "type_str": "table", "content": "
Groups and Settings | User 1 (mins) | User 2 (mins) | Rate User 1 (Sent/min) | Rate User 2 (Sent/min) | Total rate (Sent/min)
Batch 1 (Unordered) | 40 | 35 | 12 | 14 | 26
Batch 2 (Ordered) | 23 | 23 | 21 | 21 | 42
Batch 3 (Ordered) | 19 | 20 | 26 | 24 | 50
Batch 4 (Unordered) | 34 | 34 | 14 | 14 | 28
", "html": null, "num": null }, "TABREF2": { "text": "The results for Round 3. The experiments are conducted in the same order as the numbers indicate. Each Group contains 500 samples. The labels for these experiments are provided by fine-tuned Clinical BERT model. Unordered refers to the baseline, and Ordered is our visualization approach.", "type_str": "table", "content": "", "html": null, "num": null }, "TABREF4": { "text": "All the classifiers are trained to predict 4 classes: Smoker, NonSmoker, Former, and Other. Baseline accuracy is the fraction of the majority class in the test set. In Round 1, there are 800 training and 200 test sentences. In Round 2, there are 3,400 training and 600 test sentences.", "type_str": "table", "content": "
", "html": null, "num": null } } } }