{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:01.380230Z"
},
"title": "Noise Pollution in Hospital Readmission Prediction: Long Document Classification with Reinforcement Learning",
"authors": [
{
"first": "Liyan",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": "liyan.xu@emory.edu"
},
{
"first": "Julien",
"middle": [],
"last": "Hogan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University",
"location": {
"settlement": "Atlanta",
"region": "US"
}
},
"email": "julien.hogan@emory.edu"
},
{
"first": "Rachel",
"middle": [],
"last": "Patzer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University",
"location": {
"settlement": "Atlanta",
"region": "US"
}
},
"email": "rpatzer@emory.edu"
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": "",
"affiliation": {},
"email": "jinho.choi@emory.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a reinforcement learning approach to extract noise in long clinical documents for the task of readmission prediction after kidney transplant. We face the challenges of developing robust models on a small dataset where each document may consist of over 10K tokens with full of noise including tabular text and task-irrelevant sentences. We first experiment four types of encoders to empirically decide the best document representation, and then apply reinforcement learning to remove noisy text from the long documents, which models the noise extraction process as a sequential decision problem. Our results show that the old bag-of-words encoder outperforms deep learning-based encoders on this task, and reinforcement learning is able to improve upon baseline while pruning out 25% text segments. Our analysis depicts that reinforcement learning is able to identify both typical noisy tokens and task-specific noisy text.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a reinforcement learning approach to extract noise in long clinical documents for the task of readmission prediction after kidney transplant. We face the challenges of developing robust models on a small dataset where each document may consist of over 10K tokens with full of noise including tabular text and task-irrelevant sentences. We first experiment four types of encoders to empirically decide the best document representation, and then apply reinforcement learning to remove noisy text from the long documents, which models the noise extraction process as a sequential decision problem. Our results show that the old bag-of-words encoder outperforms deep learning-based encoders on this task, and reinforcement learning is able to improve upon baseline while pruning out 25% text segments. Our analysis depicts that reinforcement learning is able to identify both typical noisy tokens and task-specific noisy text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Prediction of hospital readmission has always been recognized as an important topic in surgery. Previous studies have shown that the post-discharge readmission takes up tremendous social resources, while at least a half of the cases are preventable (Basu Roy et al., 2015; Jones et al., 2016) . Clinical notes, as part of the patients' Electronic Health Records (EHRs), contain valuable information but are often too time-consuming for medical experts to manually evaluate. Thus, it is of significance to develop prediction models utilizing various sources of unstructured clinical documents.",
"cite_spans": [
{
"start": 249,
"end": 272,
"text": "(Basu Roy et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 273,
"end": 292,
"text": "Jones et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task addressed in this paper is to predict 30day hospital readmission after kidney transplant, which we treat it as a long document classification problem without using specific domain knowledge. The data we use is the unstructured clinical documents of each patient up to the date of discharge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we face three types of challenges in this task. First, the document size can be very long; documents associated with these patients can have tens of thousands of tokens. Second, the dataset is relatively small with fewer than 2,000 patients available, as kidney transplant is a non-trivial medical surgery. Third, the documents are noisy, and there are many target-irrelevant sentences and tabular data in various text forms (Section 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The lengthy documents together with the small dataset impose a great challenge on representation learning. In this work, we experiment four types of encoders: bag-of-words (BoW), averaged word embedding, and two deep learning-based encoders that are ClinicalBERT (Huang et al., 2019) and LSTM with weight-dropped regularization (Merity et al., 2018) . To overcome the long sequence issue, documents are split into multiple segments for both ClinicalBERT and LSTM (Section 4) .",
"cite_spans": [
{
"start": 263,
"end": 283,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 328,
"end": 349,
"text": "(Merity et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 441,
"end": 474,
"text": "ClinicalBERT and LSTM (Section 4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After we observe the best performed encoders, we further propose to combine reinforcement learning (RL) to automatically extract out task-specific noisy text from the long documents, as we observe that many text segments do not contain predictive information such that removing these noise can potentially improve the performance. We model the noise extraction process as a sequential decision problem, which also aligns with the fact that clinical documents are received in time-sequential order. At each step, a policy network with strong entropy regularization (Mnih et al., 2016) decides whether to prune the current segment given the context, and the reward comes from a downstream classifier after all decisions have been made (Section 5).",
"cite_spans": [
{
"start": 564,
"end": 583,
"text": "(Mnih et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Empirical results show that the best performed encoder is BoW, and deep learning approaches suffer from severe overfitting under huge feature space in contrast of the limited training data. RL is experimented on this BoW encoder, and able to improve upon baseline while pruning out around 25% Table 1 : Statistics of our dataset with respect to different types of clinical notes. P: # of patients, T: avg. # of tokens, CO: Consultations, DS: Discharge Summary, EC: Echocardiography, HP: History and Physical, OP: Operative, PG: Progress, SC: Selection Conference, SW: Social Worker. The report for SC is written by the committee that consists of surgeons, nephrologists, transplant coordinators, social workers, etc. at the end of the transplant evaluation. All 8 types follow the approximately 3:7 positive-negative class distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "text segments (Section 6). Further analysis shows that RL is able to identify traditional noisy tokens with few document frequencies (DF), as well as task-irrelevant tokens with high DF but of little information (Section 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is based on the Emory Kidney Transplant Dataset (EKTD) that contains structured chart data as well as unstructured clinical notes associated with 2,060 patients. The structured data comprises 80 features that are lab results before the discharge as well as the binary labels of whether each patient is readmitted within 30 days after kidney transplant or not where 30.7% patients are labeled as positive. The unstructured data includes 8 types of notes such that all patients have zero to many documents for each note type. It is possible to develop a more accurate prediction model by co-training the structured and unstructured data; however, this work focuses on investigating the potentials of unstructured data only, which is more challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "As the clinical notes are collected through various sources of EMRs, many noisy documents exist in EKTD such that 515 documents are HTML pages and 303 of them are duplicates. These documents are removed during preprocessing. Moreover, most documents contain not only written text but also tabular data, because some EMR systems can only export entire documents in the table format. While there are many tabular texts in the documents (e.g., lab results and prescription as in Table 2 ), it is impractical to write rules to filter them out, as the exported formats are not consistent across EMRs. Thus, any tokens containing digits or symbols, except for one-character tokens, are removed during Lab Fishbone (BMP, CBC, CMP, Diff) and critical labs -Last 24 hours 03/08/2013 12:45 142(Na) 104(Cl) 70H(BUN) -10.7L(Hgb) < 92(Glu) 6.5(WBC) 137L(Plt) 3.6(K) 26(CO2) preprocessing. Although numbers may provide useful features, most quantitative measurements are already included in the structured data so that those features can be better extracted from the structured data if necessary. The remaining tabular text contains headers and values that do not provide much helpful information and become another source of noise, which we handle by training a reinforcement learning model to identify them (Section 5). Table 1 gives the statistics of each clinical note type after preprocessing. The average number of tokens is measured by counting tokens in all documents from the same note type of each patient. Given this preprocessed dataset, our task is to take all documents in each note type as a single input and predict whether or not the patient associated with those documents will be readmitted.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1308,
"end": 1315,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
{
"text": "Shin et al. (2019) presented ensemble models utilizing both the structured and the unstructured data in EKTD, where separate logistic regression (LR) models are trained on the structured data and each type of notes respectively, and the final prediction of each patient is obtained by averaging predictions from each models. Since some patients may lack documents from certain note types, prediction on these note types are simply ignored in the averaging process. For the unstructured notes, concatenation of Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Dirichlet Allocation (LDA) representation is fed into LR. However, we have found that the representation from LDA only contributes marginally, while LDA takes significantly more inferring time. Thus, we drop LDA and only use TF-IDF as our BoW encoder (Section 4.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Various deep learning models regarding text classification have been proposed in recent years. Pretrained language models like BERT have shown state-of-the-art performance on many NLP tasks (Devlin et al., 2019) . ClinicalBERT is also introduced on the medical domain (Huang et al., 2019) . However, deep learning approaches have two drawbacks on this particular dataset. First, deep learning requires large dataset to train, whereas most of our unstructured note types only have fewer than 2,000 samples. Second, these approaches are not designed for long documents, and difficult to keep long-term dependencies over thousands of tokens.",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 268,
"end": 288,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Reinforcement learning has been explored to combat data noise by previous work (Zhang et al., 2018; on the short text setting. A policy network makes decision left-to-right over tokens, and is jointly trained with another classifier. However, there is little investigation of using RL on the long text setting, as it still requires an effective encoder to give meaningful representation of long documents. Therefore, in our experiments, the first step is to select the best encoder, and then apply RL on the long document classification.",
"cite_spans": [
{
"start": 79,
"end": 99,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "For the baseline model, the bag-of-words representation with TF-IDF scores, excluding stopwords (Nothman et al., 2018) , is fed into logistic regression (LR). The objective is to minimize the negative log likelihood of the gold label y i :",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "(Nothman et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words",
"sec_num": "4.1"
},
{
"text": "\u2212 1 m m i=1 [y i log p(g i )+(1\u2212y i ) log 1 \u2212 p(g i )] (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words",
"sec_num": "4.1"
},
{
"text": "where g i is the TF-IDF representation of D i . In addition, we experiment two common techniques in the encoder to reduce feature space: token stemming, and document frequency cutoff.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words",
"sec_num": "4.1"
},
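As a concrete illustration of this baseline, the following is a minimal sketch of a TF-IDF bag-of-words encoder feeding logistic regression (which minimizes the objective in Equation 1), written with scikit-learn; the toy documents, labels, and settings are illustrative and are not the authors' code or data.
```python
# Minimal sketch of the BoW baseline (Section 4.1): TF-IDF features fed into
# logistic regression, which minimizes the negative log-likelihood of Eq. (1).
# The documents and labels below are toy placeholders, not EKTD data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "patient tolerated the procedure well no complications noted",
    "lab fishbone bmp cbc cmp diff critical labs last hours",
]
labels = [0, 1]  # 1 = readmitted within 30 days (toy labels)

# stop_words="english" removes stopwords as in the paper's encoder.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# class_weight="balanced" mirrors the inverse-frequency class weighting, and C
# is the inverse regularization strength searched over {0.01, 0.1, 1, 10}.
clf = LogisticRegression(class_weight="balanced", C=1.0, max_iter=1000)
clf.fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # predicted readmission probabilities
```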
{
"text": "Word embeddings generated by fastText are used to establish another baseline, that utilizes subwords to better represent unseen terms (Bojanowski et al., 2017) . It is suitable for this task as unseen terms or misspellings frequently appear in these clinical notes. The averaged word embedding is used to represent the input document consisting of multiple notes, which gets fed into LR with the same training objective.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Averaged Word Embedding",
"sec_num": "4.2"
},
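The sketch below illustrates the averaged word embedding baseline under stated assumptions: for brevity it trains a tiny gensim FastText model on toy text instead of loading the pretrained 300-dimensional Common Crawl vectors used in the paper, then averages the subword-aware token vectors and feeds them to logistic regression.
```python
# Sketch of the averaged word-embedding (AWE) encoder. A tiny FastText model is
# trained on toy text purely for illustration; the paper uses pretrained
# 300-d fastText vectors instead.
import numpy as np
from gensim.models import FastText
from sklearn.linear_model import LogisticRegression

docs = [
    "patient tolerated the procedure well no complications noted".split(),
    "lab fishbone bmp cbc cmp diff critical labs last hours".split(),
]
labels = [0, 1]  # toy labels

ft = FastText(sentences=docs, vector_size=50, min_count=1, epochs=10)

def encode(tokens):
    # Average the subword-aware token vectors; misspelled or unseen tokens
    # still receive vectors through character n-grams.
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.stack([encode(d) for d in docs])
clf = LogisticRegression(class_weight="balanced").fit(X, labels)
print(clf.predict_proba(X)[:, 1])
```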
{
"text": "Following Huang et al. 2019, the pretrained language BERT model (Devlin et al., 2019) is first tuned on the MIMIC-III clinical note corpus (Johnson et al., 2016) , which has shown to provide better related word similarities in medical domains. Then, a dense layer is added on the CLS token of the last BERT layer. The entire parameters are fine-tuned to optimize the binary cross entropy loss, that is the same objective as Equation 1.",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 139,
"end": 161,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ClinicalBERT",
"sec_num": "4.3"
},
{
"text": "Since BERT has a limit on the input length, the input document of each patient is split into multiple subsequences. Each subsequence is within the BERT length limit, and serves as an independent sample with the same label of the patient. The training data is therefore noisily inflated. The final probability of readmission is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ClinicalBERT",
"sec_num": "4.3"
},
{
"text": "p(y i = 1|g i ) = p n i max + p n i mean n i /c 1 + n i /c (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ClinicalBERT",
"sec_num": "4.3"
},
{
"text": "where g i is the BERT representation of patient i, n i is the corresponding number of subsequences, and c is a hyperparameter to control the influence of n i . p n i max and p n i mean are the max and mean probability across the subsequences, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ClinicalBERT",
"sec_num": "4.3"
},
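A small sketch of the aggregation in Equation 2, with illustrative probabilities; the function name and the toy values below are not from the paper.
```python
# Sketch of the subsequence aggregation in Eq. (2): the patient-level score is a
# length-dependent blend of the max and mean subsequence probabilities.
import numpy as np

def aggregate(subseq_probs, c=2.0):
    """Combine per-subsequence probabilities into one patient-level probability."""
    p = np.asarray(subseq_probs)
    n = len(p)
    # (p_max + p_mean * n/c) / (1 + n/c): the more subsequences there are, the
    # more weight the mean gets, since the max becomes noisier with length.
    return (p.max() + p.mean() * n / c) / (1 + n / c)

print(aggregate([0.2, 0.9, 0.4]))        # few subsequences: max dominates
print(aggregate([0.2, 0.9, 0.4] * 10))   # many subsequences: mean dominates
```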
{
"text": "The motivation behind balancing between the max and mean probability is that subsequences do not contain equal information. p n i max represents the best potential, while longer text should give more importance to p n i mean , because p n i max is more easily affected by noise as the text length grows. Although Equation 2 seems intuitive, the use of pseudo labels on subsequences becomes another source of noise, especially when there are thousands of tokens; thus, the performance is uncertain. Section 6.2 provides detailed empirical analysis for this model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ClinicalBERT",
"sec_num": "4.3"
},
{
"text": "We split documents of each patient into multiple short segments, and feed the segment representation to long short-term memory network (LSTM) at each time step:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
{
"text": "h j \u2190 LSTM(s j , h j\u22121 ; \u03b8) (3) s t \u27f6 a t a 1 a 2 \u22ef a t \u22ef a T s 1 \u2192 s 2 \u2192 \u22ef \u2192 s t \u2192 \u22ef \u2192 s T \u2193 a t s t+1 g \u03c4 \u27f6 p(y | g \u03c4 ) g \u03c4 Selected Text Action State Prediction Reward Pruning Reward Policy Network Environment Downstream Classifier \u03c0 \u03b8 \u2193 \u2193 \u2193",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
{
"text": "Figure 1: Overview of our reinforcement learning approach. Rewards are calculated and sent back to the policy network after all actions a 1:T have been sampled for the given episode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
{
"text": "where h j is the hidden state at time step j, s j is the jth segment, and \u03b8 is the set of parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
{
"text": "Although segmentation of documents is still necessary, no pseudo labels are needed. We get the segment representation by averaging its token embedding from the last layer of BERT. The final hidden state at each step j is the concatenated hidden states of a single-layer Bi-directional LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
{
"text": "After we get the hidden state for each segment, a max-pooling operation is performed on h 1:n over the time dimension to obtain a fixed-length vector, similar to Kim (2014) ; Adhikari et al. (2019) . A dense layer is immediately followed. It is particularly important to strengthen regularization on this dataset with small sample size. Dropout (Srivastava et al., 2014) as a way of regularization has been shown effective in deep learning models, and Merity et al. (2018) has successfully applied dropout-like technique in LSTM: the use of DropConnect (Wan et al., 2013) is applied on the four hidden-to-hidden matrices, preventing overfitting from occurring on the recurrent weights.",
"cite_spans": [
{
"start": 162,
"end": 172,
"text": "Kim (2014)",
"ref_id": "BIBREF7"
},
{
"start": 175,
"end": 197,
"text": "Adhikari et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 345,
"end": 370,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 553,
"end": 571,
"text": "(Wan et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weight-dropped LSTM",
"sec_num": "4.4"
},
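Below is a simplified sketch of the segment-level encoder described in this section, assuming PyTorch: segment vectors (averaged BERT token embeddings in the paper) go through a single-layer BiLSTM, are max-pooled over time, and pass through a dense layer. The DropConnect-style weight drop on the hidden-to-hidden matrices is omitted for brevity; ordinary dropout on the pooled vector stands in for it here.
```python
# Simplified sketch of the segment-level encoder in Section 4.4. Segment vectors
# are fed to a single-layer BiLSTM, max-pooled over time, and classified. The
# paper's DropConnect on the hidden-to-hidden matrices (weight_hh_*) is omitted;
# standard dropout on the pooled vector is used here only as a stand-in.
import torch
import torch.nn as nn

class SegmentLSTMClassifier(nn.Module):
    def __init__(self, seg_dim=768, hidden=256, dropout=0.5):
        super().__init__()
        self.lstm = nn.LSTM(seg_dim, hidden, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, segments):              # segments: (batch, T, seg_dim)
        h, _ = self.lstm(segments)             # (batch, T, 2*hidden)
        pooled, _ = h.max(dim=1)               # max-pool over the time dimension
        return torch.sigmoid(self.out(self.dropout(pooled))).squeeze(-1)

model = SegmentLSTMClassifier()
x = torch.randn(4, 12, 768)                    # 4 patients x 12 segments (toy input)
print(model(x).shape)                          # torch.Size([4])
```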
{
"text": "Reinforcement learning is applied to the best performing encoder in Section 4 to prune noisy text, which can lead to comparable or even better performance, as many text segments in these clinical notes are found to be irrelevant to this task. Figure 1 describes the overview of our reinforcement learning approach. The pruning process is modeled as a sequential decision problem, for the fact that these notes are received in time-order. It consists of two separate components: a policy network, and a downstream classifier. To avoid having too many time steps, the policy is performed on the segment level instead of token level. For each patient, documents are split into short segments g 1:T = {g 1 , g 2 , \u2022 \u2022 \u2022 , g T }, and the policy network conducts a sequence of decisions a 1:",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 249,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "T = {a 1 , a 2 , \u2022 \u2022 \u2022 , a T } over segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "The downstream classifier is re-sponsible for the reward, and the REINFORCE algorithm is used to train the policy (Williams, 1992) .",
"cite_spans": [
{
"start": 114,
"end": 130,
"text": "(Williams, 1992)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "State At each time step, the state s t is the concatenation of two parts: the representation of previously selected text, and the current segment representation g i . The previously selected text serves as the context and provides a prior importance. Both parts are represented by an effective encoder, e.g. the best performing encoder from Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "Action The action space at each step is binary: {Keep, Prune}. If the action is Keep, the current segment is added to the selected text; otherwise, it is discarded. The final selected text for a patient is the concatenated segments selected by the policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
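To make the state and action definitions concrete, here is a minimal sketch assuming the TF-IDF encoder selected in Section 4; the segments, the placeholder policy, and the variable names are illustrative only.
```python
# Sketch of the state/action construction in Section 5, assuming the TF-IDF
# (BoW) encoder chosen in Section 4. At step t the state concatenates the TF-IDF
# vector of the text selected so far (the context) with that of the current
# segment; the action space is {Keep, Prune}. The policy here is a placeholder.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

segments = [
    "lab fishbone bmp cbc cmp diff critical labs",
    "the social worker met with this pleasant male for kidney transplant evaluation",
]
vectorizer = TfidfVectorizer().fit(segments)

KEEP, PRUNE = 1, 0
selected = []                                   # segments kept so far
for t, seg in enumerate(segments):
    context_vec = vectorizer.transform([" ".join(selected)]).toarray()[0]
    segment_vec = vectorizer.transform([seg]).toarray()[0]
    state = np.concatenate([context_vec, segment_vec])    # s_t
    action = KEEP if t % 2 == 1 else PRUNE                # placeholder policy
    if action == KEEP:
        selected.append(seg)
print(state.shape[0], "state dimensions;", len(selected), "segments kept")
```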
{
"text": "Reward The reward comes at the end when all actions are sampled for the entire sequence. The final selected text is fed to the downstream classifier, and negative log-likelihood of the gold label is used as the reward R. In addition, we also include a reward term R p to encourage pruning, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R p = c \u2022 \u03b1 \u2022 [2\u03c3( l \u03b2 ) \u2212 1]",
"eq_num": "(4)"
}
],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "where c and \u03b2 are hyperparameters to control the scale of R p , l is the number of segments, \u03b1 is the ratio of pruned segments |{a k = Prune}| /l, \u03c3 is the sigmoid function. The value of the term 2\u03c3( l \u03b2 ) \u2212 1 falls into range (0, 1). When l is small, it downgrades the encouragement of pruning; when l is large, it also gives an upper bound of R p . Additionally, we apply exponential decay on the reward. The final reward is d l R + R p . d is the discount rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
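A small sketch of the pruning reward in Equation 4 and the decayed final reward d^l R + R_p; the classifier reward value is a placeholder, and the hyperparameter values mirror those reported in Section 6.3.
```python
# Sketch of the pruning reward in Eq. (4) and the decayed final reward
# d^l * R + R_p. The classifier reward R is a placeholder value; c, beta, and d
# mirror the hyperparameter names used in Sections 5 and 6.3.
import math

def pruning_reward(l, num_pruned, c=0.1, beta=8.0):
    alpha = num_pruned / l                         # ratio of pruned segments
    # 2*sigma(l/beta) - 1 lies in (0, 1): short sequences get little pruning
    # encouragement, long sequences get a bounded pruning bonus.
    return c * alpha * (2.0 / (1.0 + math.exp(-l / beta)) - 1.0)

def final_reward(classifier_reward, l, num_pruned, d=0.95):
    # classifier_reward is the likelihood-based reward R from the downstream
    # classifier on the selected text (placeholder value in the example below).
    return (d ** l) * classifier_reward + pruning_reward(l, num_pruned)

print(final_reward(classifier_reward=-0.4, l=20, num_pruned=5))
```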
{
"text": "Policy Network The policy network maintains a stochastic policy \u03c0(a t |s t ; \u03b8):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0(a t |s t ; \u03b8) = \u03c3(W s t + b)",
"eq_num": "(5)"
}
],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "where \u03b8 is the set of policy parameters W and b, a t and s t are the action and state at the time step t respectively. During training, an action is sampled at each step with the probability from the policy. After the sampling is performed over the entire sequence, the delayed reward is computed. During evaluation, the action is picked by argmax a \u03c0(a|s t ; \u03b8). The training is guided by the REINFORCE algorithm (Williams, 1992) , which optimizes the policy to maximize the expected reward:",
"cite_spans": [
{
"start": 414,
"end": 430,
"text": "(Williams, 1992)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
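The following sketch shows the one-layer policy of Equation 5 in PyTorch, with sampling at training time and greedy selection at evaluation time; the state dimension is an arbitrary placeholder.
```python
# Sketch of the policy network of Eq. (5): a single linear layer with a sigmoid
# gives the probability of one of the two actions (say Prune). Actions are
# sampled during training and chosen greedily during evaluation.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, state_dim):
        super().__init__()
        self.linear = nn.Linear(state_dim, 1)         # W s_t + b

    def forward(self, state):                         # state: (state_dim,)
        return torch.sigmoid(self.linear(state)).squeeze(-1)

policy = Policy(state_dim=2000)
s_t = torch.randn(2000)
p_prune = policy(s_t)                                 # pi(Prune | s_t)

sampled_action = torch.bernoulli(p_prune)             # training: sample from pi
greedy_action = (p_prune > 0.5).float()               # evaluation: argmax action
print(float(p_prune), float(sampled_action), float(greedy_action))
```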
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(\u03b8) = E a 1:T \u223c\u03c0 R a 1:T",
"eq_num": "(6)"
}
],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "and the gradient has the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2207 \u03b8 J(\u03b8) = E \u03c4 T t=1 \u2207 \u03b8 log \u03c0(a t |s t ; \u03b8)R \u03c4 (7) \u2248 1 N N i=1 T t=1 \u2207 \u03b8 log \u03c0(a it |s it ; \u03b8)R \u03c4 i",
"eq_num": "(8)"
}
],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "where \u03c4 represents the sampled trajectory {a 1 , a 2 , \u2022 \u2022 \u2022 , a T }, N is the number of sampled trajectories. R \u03c4 i here equals the delayed reward from the downstream classifier at the last step. To encourage exploration and avoid local optima, we add the entropy regularization (Mnih et al., 2016) on the policy loss:",
"cite_spans": [
{
"start": 280,
"end": 299,
"text": "(Mnih et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "J reg (\u03b8) = \u03bb N N i=1 1 T i \u2207 \u03b8 H(\u03c0(s it ; \u03b8)) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
{
"text": "where H is the entropy, and \u03bb is the regularization strength, T i is the trajectory length. Finally, the downstream classifier and policy network are warm-started by separate training, and then jointly trained together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "5"
},
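A compact sketch of one REINFORCE update with entropy regularization (Equations 6-9), assuming PyTorch; the states and the delayed reward are random placeholders standing in for the environment and the downstream classifier.
```python
# Sketch of one REINFORCE update (Eqs. 6-9): sample N trajectories, accumulate
# grad log pi * R, and add entropy regularization to encourage exploration.
import torch
import torch.nn as nn

torch.manual_seed(0)
state_dim, N, T = 16, 10, 6
policy = nn.Linear(state_dim, 1)                  # pi(Prune|s) = sigmoid(Ws + b)
optimizer = torch.optim.Adam(policy.parameters(), lr=2e-4)
lam = 6.0                                         # entropy coefficient lambda

loss = 0.0
for _ in range(N):                                # N sampled trajectories
    states = torch.randn(T, state_dim)            # placeholder segment states
    p = torch.sigmoid(policy(states)).squeeze(-1)
    actions = torch.bernoulli(p).detach()
    log_prob = actions * torch.log(p) + (1 - actions) * torch.log(1 - p)
    entropy = -(p * torch.log(p) + (1 - p) * torch.log(1 - p))
    R = torch.randn(()).detach()                  # placeholder delayed reward
    # Policy gradient: maximize E[sum log pi * R] plus (lambda / T) * entropy.
    loss = loss - (log_prob.sum() * R + lam * entropy.mean()) / N

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```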
{
"text": "Before experiments, we perform the preprocessing described in Section 2.1, and then randomly split patients in every note type by 5 folds to perform cross-validation as suggested by Shin et al. (2019) . To evaluate each fold F i , 12.5% of the training set, that is the combined data of the other 4 folds, are held out as the development set and the best configuration from this development set is used to decode F i . The same split is used across all experiments for fair comparison. Following Shin et al. (2019) , the averaged Area Under the Curve (AUC) across these 5 folds is used as the evaluation metric.",
"cite_spans": [
{
"start": 182,
"end": 200,
"text": "Shin et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 496,
"end": 514,
"text": "Shin et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
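The evaluation protocol can be sketched as below with scikit-learn on synthetic data: 5-fold cross-validation, a 12.5% development split inside each training fold for configuration selection (here, the C grid from Section 6.1), and the fold AUCs averaged; none of this is the authors' code.
```python
# Sketch of the evaluation protocol in Section 6: 5-fold cross-validation where,
# for each fold, 12.5% of the remaining training data is held out as a
# development set, and fold-level AUC scores are averaged. Data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold, train_test_split

X, y = make_classification(n_samples=400, n_features=50, weights=[0.7, 0.3],
                           random_state=0)
aucs = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    X_tr, X_dev, y_tr, y_dev = train_test_split(
        X[train_idx], y[train_idx], test_size=0.125, random_state=0)
    best_auc, best_clf = -1.0, None
    for C in [0.01, 0.1, 1, 10]:                  # pick C on the development set
        clf = LogisticRegression(C=C, class_weight="balanced", max_iter=1000)
        clf.fit(X_tr, y_tr)
        dev_auc = roc_auc_score(y_dev, clf.predict_proba(X_dev)[:, 1])
        if dev_auc > best_auc:
            best_auc, best_clf = dev_auc, clf
    aucs.append(roc_auc_score(y[test_idx], best_clf.predict_proba(X[test_idx])[:, 1]))
print("averaged AUC:", np.mean(aucs))
```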
{
"text": "Bag-of-Words We first conduct experiments using the bag-of-words encoder (BoW; Section 4.1) to establish the baseline. Many experiments are performed on all note types using the vanilla TF-IDF, document frequency (DF) cutoff at 2 (removing all tokens whose DF \u2264 2), and token stemming. For every experiment, the class weight is assigned inversely proportional to class frequencies, and the inverse of regularization strength C is searched from {0.01, 0.1, 1, 10}, where the best results are achieved with C = 1 on the development set. Table 3 describes the cross-validation results on every note type. The top AUC is 62.3%, which is within expectation given the difficulty of this task. Some note types are not as predictive as the others, such as Operative (OP) and Social Worker (SW), with the AUC under 52%. Most note types have the standard deviations in range 0.02 to 0.03.",
"cite_spans": [],
"ref_spans": [
{
"start": 535,
"end": 542,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
{
"text": "In comparison to the previous work (Shin et al., 2019) , we achieve 0.671 AUC combining both structured and unstructured data, despite without the use of LDA in our encoder.",
"cite_spans": [
{
"start": 35,
"end": 54,
"text": "(Shin et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
{
"text": "Noise Observation The DF cutoff coupled with token stemming significantly reduce feature space for the BoW model. As shown in Table 4 , the DF cutoff itself can achieve about 50% reduction of the feature space. Furthermore, applying the DF cutoff leads to slightly higher AUCs on most of the note types, despite almost a half of the tokens are removed from the vocabulary. This implies that there exists a large amount of noisy text that appears only in few documents, causing the models to be overfitted more easily. These results further verify our previous observation and strengthen the necessity to extract noise from these long documents using reinforcement learning (Section 6.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
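The DF cutoff itself is a one-line change in a TF-IDF encoder; the toy corpus below is only for illustration (min_df=3 keeps tokens whose DF is at least 3, i.e., removes those with DF ≤ 2).
```python
# Sketch of the document-frequency cutoff from Sections 4.1/6.1: min_df=3 drops
# every token whose document frequency is <= 2, which in the paper roughly
# halves the BoW feature space. Toy corpus for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["creatinine level stable"] * 3 + ["xanthogranulomatous finding noted"]
full = TfidfVectorizer().fit(corpus)
cut = TfidfVectorizer(min_df=3).fit(corpus)
print(len(full.vocabulary_), "->", len(cut.vocabulary_))  # rare tokens dropped
```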
{
"text": "Averaged Word Embedding For the averaged word embedding encoder (AWE; Section 4.2), embeddings generated by FastText trained on the Common Crawl and the English Wikipedia with the 300 dimension is used. 1 AWE is outperformed by BoW on every note type except Operative (OP; Table 3 ). This empirical result implies that AWE over thousands of tokens is not so effective in generating the document representation so that the averaged embeddings are less discriminative than the sparse vectors generated by BoW for such long documents. ",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
{
"text": "For deep learning encoders, the four note types with good baseline performance (\u2248 60% AUC) and reasonable sequence length (< 5000) are selected to use in the following experiments, which are Consultations (CO), Discharge Summary (DS), History and Physical (HP), and Selection Conference (SC) (see Tables 1 and 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 312,
"text": "Tables 1 and 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Deep Learning-based Encoders",
"sec_num": "6.2"
},
{
"text": "Segmentation For both ClinicalBERT and the LSTM models, the input document is split into segments as described in Section 4.3. For LSTM, we set the maximum segment length to be 128 for CO and HP, 64 for DS and SC, to balance between segment length and sequence length. The segment length for ClinicalBERT is set to 318 (approaching 500 after BERT tokenization) to avoid noise brought by too many pseudo labels. More statistics about segmentation are summarized in Table 5. 1 https://fasttext.cc/docs/en/crawl-vectors.html",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Table 5.",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Deep Learning-based Encoders",
"sec_num": "6.2"
},
{
"text": "For the ClinicalBERT, we use the PyTorch BERT implementation with the base configuration: 2 768 embedding dimensions and 12 transformer layers, and we load the weights provided by Huang et al. (2019) whose language model has been finetuned on large-scale clinical notes. 3 We finetune the entire ClinicalBERT with batch size 4, learning rate 2 \u00d7 10 \u22125 , and weight decay rate 0.01.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "Huang et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 271,
"end": 272,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Learning-based Encoders",
"sec_num": "6.2"
},
{
"text": "For the weight-dropped LSTM, we set the batch size to 64, the learning rate to 10 \u22123 , the weightdrop rate to 0.5, and search the hidden state dimension from {128, 256, 512} on the development set. Early stop is used for both approaches. Table 3 shows the final results achieved by the ClinicalBERT and LSTM models. The AUCs of both models experience a non-trivial drop from the baseline. After further investigation, the issue is that both models suffer from severe overfitting under the huge feature spaces, and struggle to learn generalized decision boundaries from this data. Figure 2 shows an example of the weak correlation between the training loss and the AUC scores on the development set.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 580,
"end": 588,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Deep Learning-based Encoders",
"sec_num": "6.2"
},
{
"text": "As more steps are processed, the training loss gradually decreases to 0. However, the model has high variance and it does not necessarily give better performance on the development set as the training loss drops. This issue is more apparent with Clini-calBERT on CO because there are too many pseudo labels acting as noise, which makes it harder for the model to distinguish useful patterns from noise. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": null
},
{
"text": "According to Table 3 , the BoW model achieves the best performance. Therefore, we decide to use TF-IDF to represent the long text of each patient, along with logistic regression as the classifier for reinforcement learning. Document segmentation is the same as LSTM (Table 5 ). During training, segments within each note are shuffled to reduce overfitting risks, and sequences with more than 36 segments are truncated. The downstream classifier is warm-started by loading weights from the logistic regression model in the previous experiment. The policy network is then trained for 400 episodes while freezing the downstream classifier. After the warm start, both models are jointly trained. We set the number of sampling N as 10 episodes, learning rate 2 \u00d7 10 \u22124 , and fix the scaling factor \u03b2 in Equations 4 as 8, and discount rate as 0.95. Moreover, we search the reward coefficient c in {0.02, 0.1, 0.4}, and entropy coefficient \u03bb in {2, 4, 6, 8}. Table 6 : The AUC scores and the pruning ratios of reinforcement learning (RL). Best: AUC scores from the best performing models in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 266,
"end": 274,
"text": "(Table 5",
"ref_id": "TABREF7"
},
{
"start": 952,
"end": 959,
"text": "Table 6",
"ref_id": null
},
{
"start": 1084,
"end": 1091,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Reinforcement Learning",
"sec_num": "6.3"
},
{
"text": "The AUC scores and the pruning ratios (the number of pruned segments divided by the sequence length) are shown in Table 6 . Our reinforcement learning approach outperforms the best performing models in Table 3 , achieving around 1% higher AUC scores on three note types, CO, HP, and SC, while pruning out up to 26% of the input documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 6",
"ref_id": null
},
{
"start": 202,
"end": 209,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "Tuning Analysis We find that two hyperparameters are essential to the final success of reinforcement learning (RL). The first is the reward discount rate d. The scale of the policy gradient \u2207 \u03b8 J(\u03b8) depends on the sequence length T , while the delayed reward R \u03c4 is always on the same scale regardless of T . Therefore, different sequence length across episodes causes turbulence on the policy gradient, leading to unstable training. It is important to apply reward decay to stabilize the scale of \u2207 \u03b8 J(\u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "The second is the entropy regularization coefficient \u03bb, which forces the model to add bias towards uncertainty. Without strong entropy regularization, the training is easy to fall into local optima in early stage, which is to keep all segments, as shown by Figure 3(a) . \u03bb = 6 gives the model descent incentive to explore aggressively, as shown by Figure 3 (b), and finally leads to higher AUC. ",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 268,
"text": "Figure 3(a)",
"ref_id": "FIGREF2"
},
{
"start": 348,
"end": 357,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "CO",
"sec_num": null
},
{
"text": "To investigate the noise extracted by RL, we analyze the pruned segments on the validation sets of the Consultations type (CO), and compare the results with other basic noise removal techniques. Table 7 demonstrates the potential of the learned policy to automatically identify noisy text from the long documents. The original notes of shown examples are tabular text with headers and values, mostly lab results and medical prescription. After the data cleaning step, the text becomes broken and does not make much sense for humans to evaluate. The learned policy can identify noisy segments by looking at the presence of headers such as \"lab fishbone\", \"lab report\", and certain medical terms that frequently appear in tabular reports such as \"chloride\", \"creatinine\", \"hemoglobin\", \"methylprednisolone\", etc. We find that many pruned segments have strong indicators lab fishbone ( bmp , cbc , cmp , diff ) and critical labs -last hours ( not an official lab report . please see flowsheet ( or printed official lab reports ) for official lab results . ) ( na ) ( cl ) h ( bun ) -( hgb ) ( glu ) ( wbc ) ( plt ) ( ) h ( cr ) ( hct ) na = not applicable a = abnormal ( ftn ) = footnote . laboratory studies : sodium , potassium , chloride , . , bun , creatinine , glucose . total bilirubin 1 , phos of , calcium , ast 9 , alt , alk phos . parathyroid hormone level . white blood cell count , hemoglobin , hematocrit , platelets . inr , ptt , and pt . methylprednisolone ivpb : mg , ivpb , give in surgery , routine , / , infuse over : minute . mycophenolate mofetil : mg = 4 cap , po , capsule , once , now , / , stop date / , ml . documented medications documented accupril : mg , po , qday , 0 refill , substitution allowed . the social worker met with this pleasant year old caucasian male on this date for kidney transplant evaluation . the patient was alert , oriented and easily engaged in conversation with the social worker today . he resides in atlanta with his spouse of years , who he describes as very supportive . he reports occasional alcohol drinks per month but denies any illicit drug use . he has a grade education . he has been married for years . he is working full -time while on peritoneal dialysis as a business asset manager . he has medicare and an aarp prescriptions supplement . family history : mother deceased at age with complications of obesity , high blood pressure and heart disease . of headers and specific medical terms, which appear mostly in tabular text rather than written notes. Table 8 shows examples that are kept by the policy. Tokens that contribute towards Keep action are words related with human and social life, such as \"social worker\", \"engaged\", \"drinks\", \"married\", \"medicare\", and terms related with health conditions, such as \"obesity\", \"heart\", \"high blood pressure\". These terms indeed appear mostly in written text rather than tabular data.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 2519,
"end": 2526,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Noise Analysis",
"sec_num": "7"
},
{
"text": "In addition, we also notice that the policy is able to remove certain duplicate segments. Medical professionals sometimes repeat certain description from previous notes to a new document, causing duplicate content. The policy learns to make use of the already selected context, and assigns negative coefficients to certain tokens. Duplicate segments are only selected once if the segment contains many tokens that have opposite feature importance in the context and segment vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": null
},
{
"text": "Quantitative Analysis We examine tokens that are pruned by RL and compare with document frequency (DF) cutoff. We select 3000 unique tokens in the vocabulary that have the top negative feature importance (towards Prune action) in the segment vector of CO. Figure 4 shows the DF distribution of these tokens. We observe that the majority of those tokens have small DF values. It shows that the learned policy is able to identify certain tokens with small DF values as noise, which aligns with DF cutoff. Moreover, the distribution also shows a non-trivial amount of tokens with large DF values, demonstrating that RL can also identify task-specific noisy tokens that commonly appear in documents, which in this case are certain tokens in noisy tabular text.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": null
},
{
"text": "Either RL or DF cutoff achieves higher AUC while reducing input features, proving that given the small sample size, the extracted text is more likely to cause overfit than being generalizable pattern, which also verifies our initial hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": null
},
{
"text": "In this paper, we address the task of 30-day readmission prediction after kidney transplant, and propose to improve the performance by applying reinforcement learning with noise extraction capability. To overcome the challenge of long document representation with a small dataset, four different encoders are experimented. Empirical results show that bagof-words is the most suitable encoder, surpassing overfitted deep learning models, and reinforcement learning is able to improve the performance, while being able to identify both traditional noisy tokens that appear in few documents, and task-specific noisy text that commonly appear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://github.com/huggingface/transformers 3 https://github.com/kexinhuang12345/clinicalBERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the support of the National Institutes of Health grant R01MD011682, Reducing Disparities among Kidney Transplant Recipients. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Institutes of Health.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rethinking complex neural network architectures for document classification",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Adhikari",
"suffix": ""
},
{
"first": "Achyudh",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4046--4051",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1408"
]
},
"num": null,
"urls": [],
"raw_text": "Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking complex neural net- work architectures for document classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4046-4051, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dynamic hierarchical classification for patient risk-of-readmission",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Senjuti Basu Roy",
"suffix": ""
},
{
"first": "Kiyana",
"middle": [],
"last": "Teredesai",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zolfaghar",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Stacey",
"middle": [],
"last": "Hazel",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marinez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15",
"volume": "",
"issue": "",
"pages": "1691--1700",
"other_ids": {
"DOI": [
"10.1145/2783258.2788585"
]
},
"num": null,
"urls": [],
"raw_text": "Senjuti Basu Roy, Ankur Teredesai, Kiyana Zolfaghar, Rui Liu, David Hazel, Stacey Newman, and Albert Marinez. 2015. Dynamic hierarchical classification for patient risk-of-readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, page 1691-1700, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jaan",
"middle": [],
"last": "Altosaar",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ran- ganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. CoRR, abs/1904.05342.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mimiciii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "H Lehman",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific data",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic- iii, a freely accessible critical care database. Scien- tific data, 3:160035.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Transitional care interventions and hospital readmissions in surgical populations: A systematic review",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Hollis",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Wahl",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Oriel",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Itani",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Hawn",
"suffix": ""
}
],
"year": 2016,
"venue": "The American Journal of Surgery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.amjsurg.2016.04.004"
]
},
"num": null,
"urls": [],
"raw_text": "Caroline Jones, Robert Hollis, Tyler Wahl, Brad Oriel, Kamal Itani, Melanie Morris, and Mary Hawn. 2016. Transitional care interventions and hospital readmis- sions in surgical populations: A systematic review. The American Journal of Surgery, 212.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Regularizing and optimizing LSTM language models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In International Conference on Learning Representations.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Asynchronous methods for deep reinforcement learning",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Adria",
"middle": [
"Puigdomenech"
],
"last": "Badia",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Harley",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 33rd International Conference on Machine Learning",
"volume": "48",
"issue": "",
"pages": "1928--1937",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asyn- chronous methods for deep reinforcement learning. In Proceedings of The 33rd International Confer- ence on Machine Learning, volume 48 of Proceed- ings of Machine Learning Research, pages 1928- 1937, New York, New York, USA. PMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stop word lists in free open-source software packages",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Hanmin",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yurchak",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2502"
]
},
"num": null,
"urls": [],
"raw_text": "Joel Nothman, Hanmin Qin, and Roman Yurchak. 2018. Stop word lists in free open-source software packages. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 7-12, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Robust distant supervision relation extraction via deep reinforcement learning",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2137--2147",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1199"
]
},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extrac- tion via deep reinforcement learning. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2137-2147, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multimodal ensemble approach to incorporate various types of clinical notes for predicting readmission",
"authors": [
{
"first": "Bonggun",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"B"
],
"last": "Adams",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Lynch",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonggun Shin, Julien Hogan, Andrew B. Adams, Ray- mond J. Lynch, and Jinho D. Choi. 2019. Mul- timodal ensemble approach to incorporate various types of clinical notes for predicting readmission. In 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "56",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(56):1929-1958.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Regularization of neural networks using dropconnect",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Zeiler",
"suffix": ""
},
{
"first": "Sixin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 30th International Conference on International Conference on Machine Learning",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In Proceedings of the 30th International Conference on International Con- ference on Machine Learning -Volume 28, ICML'13, page III-1058-III-1066. JMLR.org.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Mach. Learn",
"volume": "8",
"issue": "3-4",
"pages": "229--256",
"other_ids": {
"DOI": [
"10.1007/BF00992696"
]
},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Mach. Learn., 8(3-4):229-256.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning structured representation for text classification via reinforcement learning",
"authors": [
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyang Zhang, Minlie Huang, and Li Zhao. 2018. Learning structured representation for text classifica- tion via reinforcement learning. In AAAI Conference on Artificial Intelligence.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training loss and AUC scores on the development set during the LSTM training on the CO type. The AUC scores depict high variance while showing weak correlation to the training loss.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Retaining ratios on the development set of SC while training the reinforcement learning model. Entropy regularization encourages more exploration.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Log scale distribution on document frequency of tokens with top negative feature importance.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "An example of tabular text in EKTD.",
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Encoder</td><td>CO</td><td>DS</td><td>EC</td><td>HP</td><td>OP</td><td>PG</td><td>SC</td><td>SW</td></tr><tr><td>Bag-of-ClinicalBERT ( \u00a74.3)</td><td colspan=\"2\">51.9 53.3</td><td>-</td><td>52.7</td><td>-</td><td>-</td><td>52.3</td><td>-</td></tr><tr><td colspan=\"3\">Weight-dropped LSTM ( \u00a74.4) 53.7 55.8</td><td>-</td><td>54.2</td><td>-</td><td>-</td><td>54.5</td><td>-</td></tr></table>",
"num": null,
"text": "Words ( \u00a74.1) 58.6 62.1 52.0 58.9 51.8 61.2 59.3 51.6 + Cutoff 58.6 62.3 52.8 59.0 51.9 61.3 59.3 51.9 + Stemming 58.9 61.8 53.4 59.4 51.9 61.5 59.3 51.6 Averaged Embedding ( \u00a74.2) 56.3 53.7 52.4 54.0 53.4 54.7 54.2 46.6",
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"text": "The Area Under the Curve (AUC) scores achieved by different encoders on the 5-fold cross-validation. See the caption inTable 1for the descriptions of CO, DS, EC, HP, OP, PG, SC, and SW. For deep learning encoders, only four types are selected in experiments (Section 6.2).",
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table/>",
"num": null,
"text": "The dimensions of the feature spaces used by each BoW model with respect to the four note types. The numbers in the parentheses indicate the percentage reduction from the vanilla model, respectively.",
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>: SEN: maximum segment length (number of</td></tr><tr><td>tokens) allowed by the corresponding model, SEQ: av-</td></tr><tr><td>erage sequence length (number of segments), INST: av-</td></tr><tr><td>erage number of samples in the training set.</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF9": {
"content": "<table/>",
"num": null,
"text": "Examples of pruned segments by the learned policy. Tokens that have feature importance lower than \u22120.001 (towards Prune action) are marked bold.",
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table/>",
"num": null,
"text": "Examples of kept segments by the learned policy. Tokens that have feature importance greater than 0.0005 (towards Keep action) are marked bold.",
"type_str": "table",
"html": null
}
}
}
}