ACL-OCL / Base_JSON /prefixA /json /acl /2020.acl-demos.33.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:50:25.355527Z"
},
"title": "Clinical-Coder: Assigning Interpretable ICD-10 Codes to Chinese Clinical Notes",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "pengfei.cao@nlpr.ia.ac.cn"
},
{
"first": "Chenwei",
"middle": [],
"last": "Yan",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Trustworthy Distributed Computing and Service(BUPT)",
"institution": "Beijing University of Posts and Telecommunications",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Xiangling",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {
"laboratory": "Key Laboratory of Trustworthy Distributed Computing and Service(BUPT)",
"institution": "Beijing University of Posts and Telecommunications",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "fuxiangling@bupt.edu.cn"
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "yubo.chen@nlpr.ia.ac.cn"
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "kliu@nlpr.ia.ac.cn"
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jzhao@nlpr.ia.ac.cn"
},
{
"first": "Shengping",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Unisound Information Technology Co",
"location": {
"settlement": "Ltd, Beijing",
"country": "China"
}
},
"email": "liushengping@unisound.com"
},
{
"first": "Weifeng",
"middle": [],
"last": "Chong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing Unisound Information Technology Co",
"location": {
"settlement": "Ltd, Beijing",
"country": "China"
}
},
"email": "chongweifeng@unisound.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we introduce Clinical-Coder, an online system aiming to assign ICD codes to Chinese clinical notes. ICD coding has been a research hotspot of clinical medicine, but the interpretability of prediction hinders its practical application. We exploit a Dilated Convolutional Attention network with N-gram Matching Mechanism (DCANM) to capture semantic features for non-continuous words and continuous n-gram words, concentrating on explaining the reason why each ICD code to be predicted. The experiments demonstrate that our approach is effective and that our system is able to provide supporting information in clinical decision making.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we introduce Clinical-Coder, an online system aiming to assign ICD codes to Chinese clinical notes. ICD coding has been a research hotspot of clinical medicine, but the interpretability of prediction hinders its practical application. We exploit a Dilated Convolutional Attention network with N-gram Matching Mechanism (DCANM) to capture semantic features for non-continuous words and continuous n-gram words, concentrating on explaining the reason why each ICD code to be predicted. The experiments demonstrate that our approach is effective and that our system is able to provide supporting information in clinical decision making.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "International Classification of Disease (ICD) is the diagnostic classification standard in the field of clinical medicine, which assigns unique code to each disease. The popularization of ICD codes immensely promotes the information sharing and clinical research of disease worldwide and has a positive influence on health condition research, insurance claims, morbidity and mortality statistics (Shi et al., 2017) . Therefore, ICD coding -which assigns proper ICD codes to a clinical note -has drawn much attention.",
"cite_spans": [
{
"start": 396,
"end": 414,
"text": "(Shi et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is always that ICD coding relies on the manual work of professional staff. The manual coding is very error-prone and time-consuming since the continuous updating version of ICD codes results in a substantial increase in code numbers. The number of ICD-10 codes is up to 72,184, more than five times the previous version (i.e., . It allows for more detailed classifications of patients' conditions, injuries, and diseases. However, there * co-first authors, they contributed equally to this work is no doubt that the increased granularity increases the difficulty of manual coding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing studies came up with several approaches of automatic coding prediction to replace the repetitive manual work, from the traditional machine learning methods (Perotte et al., 2013; Koopman et al., 2015) , to neural network methods (Shi et al., 2017; Yu et al., 2019) . Although these methods achieve great success, they are still confronted with a critical challenge, which is the interpretability of predicted codes. Explainable model and results are essential for clinical medicine decision making (Mullenbach et al., 2018) . Thus, the practical approach is supposed to predict correct codes and simultaneously give the reason why each code is predicted. In this paper, we try to provide the interpretability of predictions from a semantic perspective. It is a phenomenon that the exact disease names or similar expressions of disease names often appear in the discharge summary. For example, as shown in Figure 1, the exact matching with disease name such as \"fatty liver\" is a direct evidence of inference. We call the continuous consistent words as explicit semantic features. Moreover, the inexact matching such as \"rheumatoid multisite arthritis\" is also very useful to predict the codes and should be taken into consideration. We refer to the non-continuous Figure 2 : The screenshot of Clinical-Coder system, the English version can be found in the appendix A. (a) gives the predicted diseases after users enter the clinical notes which contains four parts, admission situation, admission diagnosis, discharge situation and discharge diagnosis. (b1) and (b2) are the visualization of supporting information for predictions. words as implicit semantic features. The two kinds of semantic features are both clues to explain the reason why to assign each code, which is also the basis of experts in manual coding process. To capture the two semantic phenomena, we exploit dilated convolution and n-gram matching mechanism to extract implicit semantic features and explicit semantic features, respectively. Furthermore, we develop a system to assist the professional coders in assigning the correct codes. In summary, the main contributions are as follows:",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Perotte et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 188,
"end": 209,
"text": "Koopman et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 238,
"end": 256,
"text": "(Shi et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 257,
"end": 273,
"text": "Yu et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 507,
"end": 532,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 914,
"end": 920,
"text": "Figure",
"ref_id": null
},
{
"start": 1273,
"end": 1281,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We collect a large-scale Chinese Clinical notes dataset, making up for the lack of Chinese ICD coding corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel method to simultaneously capture implicit and explicit semantic features, which enables to give interpretability for each predicted code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We develop an open-access online system, called clinical-coder, that automatically assigns codes to the free-text clinical notes with an indication of the supporting information for each code to be predicted. It uses vivid visualization to provide interpretability of prediction for each ICD code. The site can be accessed by http://159.226. 21.226/disease-prediction, and instructions video is provided at https://youtu. be/U4TImTwEysE. Automatic ICD coding has recently been a research hotspot in the field of clinical medicine, where neural network architecture methods show promising results than traditional machine learning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most studies treat automatic ICD coding as a multi-label classification problem and use only the free-text in summaries to predict codes (Subotin and Davis, 2015; Kavuluru et al., 2015; Yu et al., 2019) , while many methods benefit from extra information. Shi et al. (2017) encode label description with character-level and word-level long shortterm memory network. Rios and Kavuluru (2018) encode label description with averaging words embedding. Furthermore, adversarial learning is employed to unify writing styles of diagnosis descriptions and ICD code descriptions (Xie et al., 2018) . Besides code descriptions, Wikipedia comes to be regarded as an external knowledge source (Prakash et al., 2017; Bai and Vucetic, 2019) .",
"cite_spans": [
{
"start": 137,
"end": 162,
"text": "(Subotin and Davis, 2015;",
"ref_id": "BIBREF15"
},
{
"start": 163,
"end": 185,
"text": "Kavuluru et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 186,
"end": 202,
"text": "Yu et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 256,
"end": 273,
"text": "Shi et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 366,
"end": 390,
"text": "Rios and Kavuluru (2018)",
"ref_id": "BIBREF13"
},
{
"start": 570,
"end": 588,
"text": "(Xie et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 681,
"end": 703,
"text": "(Prakash et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 704,
"end": 726,
"text": "Bai and Vucetic, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Additionally, inferring interpretability is a crucial challenge and obstacle for practical automatic coding, since professionals are willing to be con- vinced by the model insights of vital supporting information or decision-making process (Vani et al., 2017; Mullenbach et al., 2018) . Baumel et al. (2018) employ bidirectional Gated Recurrent Unit with sentence-level attention to obtain relevant sentences for each code. Mullenbach et al. (2018) use attention at the word level, which is more finegrained. Our work is inspired by (Mullenbach et al., 2018) , assigning the importance value for each label to the discharge summaries to assists in explaining the model prediction process.",
"cite_spans": [
{
"start": 240,
"end": 259,
"text": "(Vani et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 260,
"end": 284,
"text": "Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 287,
"end": 307,
"text": "Baumel et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 424,
"end": 448,
"text": "Mullenbach et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 533,
"end": 558,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dilated convolution is designed for image classification to aggregate multi-scale contextual information without losing resolution in computer vision (Yu and Koltun, 2016) . It inserts \"holes\" in the standard convolution map to increase the reception field. The hole-structure brings a breakthrough improvement to the semantic segmentation task.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Yu and Koltun, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dilated Convolution",
"sec_num": "2.2"
},
{
"text": "Similarly, several hole-structured convolution neural networks (CNNs) (Lei et al., 2015; Guo et al., 2017) are designed to handle natural language processing tasks. In the text, there exists noncontinuous semantic where useless information may be interspersed among the sentences. Holes in the dilated convolution can ignore the extra word between the non-continuous words and well adapt to match non-continuous semantic. Since the semantic infomation is crutial when understanding natural language (Zuo et al., 2019) , we apply the dilated convolution to encode the text, capturing the non-continuous semantic information.",
"cite_spans": [
{
"start": 70,
"end": 88,
"text": "(Lei et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 89,
"end": 106,
"text": "Guo et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 499,
"end": 517,
"text": "(Zuo et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dilated Convolution",
"sec_num": "2.2"
},
{
"text": "We propose a Dilated Convolutional Attention network with N-gram Matching Mechanism (DCANM) for ICD coding task. Figure 3 describes the architecture of the model. The input of model is all sentences in clinical notes, which are spliced together. The input sentences interact with ICD code names to capture explicit semantic features and generate an n-gram matrix. At the same time, the input sentences are transformed into vector and processed by dilated CNN to capture implicit semantic features. Attention mechanism is used to improve the performance. Then all features are concatenated to form the final features. Finally, we use a sigmoid classifier to predict the probability of each code. Next, we give the detailed descriptions.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Word Embedding. Word embedding is a lowdimensional vector representation of a word. We use the pre-trained embedded matrix W wrd \u2208 R d w \u00d7|V | , where d w is the dimension of word embedding and |V | is the size of vocabulary. Given a sentence,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "S = [w 1 , w 2 , ..., w N ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where N is the number of words in the sentence, we can get the word embedding by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w e = W wrd v i ,",
"eq_num": "(1)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where v i is the one-hot representation of the current word in the corresponding column of W wrd .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Explicit Semantic Features. N-gram matching mechanism is applied to capture explicit semantic features. We use disease names (D) to sampling on the text (T). First, move the sliding window on the disease name d l \u2208 D to get a n-gram substring. Then, calculate the frequency of each n-gram substring in the free-text. The sum of frequencies of gram with same length n (denoted as gram n ) has reflected the emergence of disease names in the text, nevertheless some grams have their unique particularity. For example, given a 2-gram string, \"\u7cd6\u5c3f\" (Diabetes) is more representative than \"\u6162 \u6027\"(Chronic) though they have the same length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "To represent the degree of importance of different n-gram, each n-gram is given a term frequencyinverse document frequency (tf-idf) weight. Finally, for each free-text clinical note, we calculate an explicit semantic n-gram matrix (M ) with size of L \u00d7 W , where L is numbers of labels and W is the numbers of sliding windows. For example, we have four sliding windows which lengths are 2, 3, 4, 5, so W is 4. For the l-th row the w-th column item in the feature map, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m l,w = Lgram ln i=1 count gram lni * tf idf gram lni (2) tf idf gram i = n L n l * L f req gram lni ,",
"eq_num": "(3)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where w is the index of n-length sliding window, gram ln is all n-length substrings of the l-th disease name, gram lni is the i-th gram ln , L gram ln is the number of gram ln , count gram lni is the frequencies of gram lni in the text, L n l is the length of the l-th disease name, f req gram lni is the frequencies of gram lni in all disease names. In this calculation, we can distinguish the importance degree of n-gram substring. It also works on English clinical notes, for instance, in a specific case from MIMIC-III (Johnson et al., 2016) , the tf-idf value of \"history of\" is 1.79 while \"atrial fibrillation\" is 9.32 because \"history of\" appears 249 times in all ICD disease names and \"atrial fibrillation\" only appears two times. The higher the value is, the more representative the word is. Therefore \"atrial fibrillation\" is more likely to indicate a disease than \"history of\". Implicit Semantic Features. Dilated convolution is applied to capture implicit semantic features. For a long clinical text, dilated convolution extends the reception field in the situation of not using pooling operation so that every kernel has a wider range of information. More importantly, it has \"holes\" in convolution map, which means it can be adapted to match the non-continuous semantic information. For example, \"\u7c7b\u98ce\u6e7f\u6027\u591a\u90e8\u4f4d\u5173\u8282 \u708e\"(Rheumatoid multisite arthritis) in the clinical notes refers to \"\u7c7b\u98ce\u6e7f\u6027\u5173\u8282\u708e\"(Rheumatoid arthritis) in ICD, the convolution map with holes can tolerate the redundant parts, as shown in Figure 4 . It is a distinct advantage of dilated convolution for processing texts. Formally, the actual filter width of dilated convolutional neural network is computed as,",
"cite_spans": [
{
"start": 523,
"end": 545,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1505,
"end": 1514,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k d = r(k \u2212 1) + 1,",
"eq_num": "(4)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where r \u2208 [1, 2, 3, ...] is the dilated rate, k is the origin filter width. For each step n, the typical convolution is computed as formula 5 and dilated convolution is computed as formula 6. The dilated CNN is same as typical CNN when the dilated rate is 1, since k d equals to k when r = 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h n = tanh(W c * x n:n+k\u22121 + b c )",
"eq_num": "(5)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h n = tanh(W c * x n:n+k d \u22121 + b c ),",
"eq_num": "(6)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where W c \u2208 R k d \u00d7de\u00d7dc is the convolutional filter map, k d is the actual filter width, d e is the size of the word embedding, an d c the size of the filter output and b c \u2208 R dc is the bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Attention. After convolution, the sentence is represented as H \u2208 R dc\u00d7N . We employ the per-label attention mechanism (Mullenbach et al., 2018) to find the most contributed characters for each label. For each label l, the distributed attention weight is computed as:",
"cite_spans": [
{
"start": 118,
"end": 143,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 l = Sof tM ax(H T u l ),",
"eq_num": "(7)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where u l \u2208 R dc is the vector representation of label l. Finally, the sentence is represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "m l = H\u03b1 l (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "We employ attention both for typical CNN and dilated CNN, for convenience of distinction, we denote them as m l and m l , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "Classification. m l and m l is concatenated with the linear transformed n-gram matrix horizontally. The aim of this step is to combining all the features together. Then we exploit sigmoid classifier and the prediction of label i is computed as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = \u03c3(W T [m l ; m l ; m l ] + b),",
"eq_num": "(9)"
}
],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "i \u2208 [1, 2, ..., L], W \u2208 R 3dc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": ", b is the bias, m l is the linear projection of n-gram matrix(M ). The loss function is the multi-label binary crossentropy (Nam et al., 2013) .",
"cite_spans": [
{
"start": 125,
"end": 143,
"text": "(Nam et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "L = L i=1 [\u2212y i log(\u0177 i )\u2212(1\u2212y i )log(1\u2212\u0177 i )], (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "where y i \u2208 {0, 1} is the ground truth for the i-th label and\u0177 i is the sigmoid score for the i-th label. Figure 2 illustrates the user interface of our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3.1"
},
{
"text": "User Input. The left of Figure 2(a) displays the user input. The user enters the whole free clinical note, which includes at least one from admission situation, admission diagnosis, discharge situation, and discharge diagnosis into the input box.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 35,
"text": "Figure 2(a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "User Interface",
"sec_num": "3.2"
},
{
"text": "Predicted Labels. The predicted labels are presented in the list of Figure 2(a) , including disease name and homologous ICD codes. The number of predicted codes are not always the same as the diseases in discharge diagnosis, because clinicians may leave out certain diseases and several diagnoses should be combined into one ICD code (Shi et al., 2017) . Our model can list all these diseases, and give the reason why they should be predicted.",
"cite_spans": [
{
"start": 334,
"end": 352,
"text": "(Shi et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 68,
"end": 79,
"text": "Figure 2(a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "User Interface",
"sec_num": "3.2"
},
{
"text": "Interpretability. Interpretability is a critical aspect of the decision-making system, especially in the clinical medicine domain. In our system, we give two ways, n-gram matching mechanism and attention, to assist users in understanding why each code is predicted. A user can know why the model predicted the labels, and what the key information in its decision was:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Interface",
"sec_num": "3.2"
},
{
"text": "(1) N-gram Matching Mechanism. When a patient suffering from a disease, the corresponding text span related to disease names often appear in the discharge summary. As shown in Figure 2 (b1) , the gram in disease name is highlighted to give a hint to users if it appears in the clinical text. Highlighting not only tells users why we predict each code but also prompts the place of the important information.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 189,
"text": "Figure 2 (b1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "User Interface",
"sec_num": "3.2"
},
{
"text": "(2) Attention. As shown in Figure 2 (b2) , the red background is attention distribution, and the darker the color is, the more useful the word is to predict the current label. The darker color is also helpful and attractive for human-being to doublecheck the correction of labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 40,
"text": "Figure 2 (b2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "User Interface",
"sec_num": "3.2"
},
{
"text": "We evaluate our model on both Chinese and English datasets. The Chinese dataset, collected by us, contains 50,678 Chinese clinical notes and 6,200 unique ICD-10 codes. For each clinical note, it contains five parts: admission situation, admission diagnosis, discharge situation, discharge diagnosis and annotated ICD-10 codes. Admission situation involves chief complaints, past medical history, etc. Discharge situation involves the results of general examination. Admission diagnosis and discharge diagnosis involve disease names, which may not be totally consistent with standard names in ICD-10. The manually annotated codes are based on ICD-10, which are tagged by professional coders after reading through the whole clinical note. The dataset (CN-Full) is formed with full labels mentioned above, and it is divided into train set and test set with the radio of 9:1. In addition, due to the phenomenon that massive codes are infrequent, and a small amount of codes are high-frequent, we reconstructed a sub-dataset (CN-50) with the most frequent 50 codes from the original dataset. The specific process is that filtering the origin train set and test set, and maintain the data which has at least one of the top 50 most frequent codes. To better compare with the previous works, we also evaluate our method on the MIMIC-III dataset (Johnson et al., 2016) , which is the most authoritative English dataset for evaluating the performance of automatic ICD coding approaches. The detailed description for these datasets is listed in Table 1 .",
"cite_spans": [
{
"start": 1337,
"end": 1359,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1534,
"end": 1541,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Dataset",
"sec_num": "4"
},
{
"text": "We splice the admission situation, admission diagnosis, discharge situation and discharge diagnosis together, which is the input of the model. The max length of the input is 1000. The word embedding is pre-trained using Word2Vec (Mikolov et al., 2013) with the dimensions of 100. The text is from all clinical notes. The batch size is 16. The dropout rate is 0.5. The optimizer is Adam (Kingma and Ba, 2015) with a learning rate of 0.0001.",
"cite_spans": [
{
"start": 229,
"end": 251,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocess and Parameters",
"sec_num": "4.2"
},
{
"text": "We use Micro-F1, Macro-F1, area under the ROC (Receiver Operating Characteristic) curve (AUC) and P@k as the metrics. P@k (Precision at k) is the fraction of the k highest-scored labels that are present in the ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocess and Parameters",
"sec_num": "4.2"
},
{
"text": "First, for the Chinese dataset (CN-Full and CN-50), CAML (Mullenbach et al., 2018) is set as our baseline, which use traditional convolutional attention network. Moreover, we test the dilated CNN and ngram matching mechanism separately. The results in Table 2 indicate that dilated CNN and n-gram matching mechanism both have a positive effect on improving performance from baseline, and the best results are obtained when they combined.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We also evaluate our method on English dataset (MIMIC-III-50). The results are shown in Table 3 . The CNN and Bi-GRU are the classic methods and the results are the same as (Mullenbach et al., 2018 ). Our proposed model achieves the Micro-F1 score of 0.641, which outperforms all previous works, more importantly providing interpretability.",
"cite_spans": [
{
"start": 174,
"end": 198,
"text": "(Mullenbach et al., 2018",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Besides, we notice that macro-F1 measure is always lower than micro-F1, especially in the full labels datasets. It means the smaller classes have (Shi et al., 2017) -0.532 -0.900 -HA-GRU (Baumel et al., 2018) -0.366 ---CAML (Mullenbach et al., 2018) 0.532 0.614 0.875 0.909 0.609 DR-CAML (Mullenbach et al., 2018) Table 3 : Evaluation on MIMIC-III-50 dataset poorer performance than larger classes, which is consistent with the facts. Either MIMIC-III or the Chinese dataset, the sample distributions are extremely imbalanced. Minority of codes are highly frequent, while most codes are infrequent. N-gram matching mechanism helps improve macro-F1 on CN-Full dataset obviously, reaching two times than baseline. It can be inferred that utilizing grams in disease names is useful for the smaller class.",
"cite_spans": [
{
"start": 146,
"end": 164,
"text": "(Shi et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 187,
"end": 208,
"text": "(Baumel et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 224,
"end": 249,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 288,
"end": 313,
"text": "(Mullenbach et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this paper, we propose a Dilated Convolutional Attention network with N-gram Matching Mechanism (DCANM) for automatic ICD coding. The dilated CNN, which is first applied to the ICD coding task, aims to capture semantic information for non-continuous words, and the n-gram matching mechanism aims to capture the continuous semantic. They both provide a pretty good interpretability for prediction. Moreover, we develop an openaccess system to help users assign ICD codes. We will try to utilize external resources to solve the few-shot and zero-shot problem in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work is supported by the National Natural Science Foundation of China (No.61922085, No.61533018, No.61976211) and the Key Research Program of the Chinese Academy of Sciences (Grant NO.ZDBS-SSW-JSC006). This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and a grant from Ant Financial Services Group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The middle-aged man was admitted to the hospital because of a \"blood glucose increase of 5 years and poor glycemic control in June\". The patient's routine physical examination five years ago revealed an increase in blood glucose, about 9 mmol / L on an empty stomach, and the local diagnosis was: \"type 2 diabetes\". No obvious dry mouth, frequent drinking, polyuria and weight loss, no dizziness, no increased urine foam, no numbness in hands, feet, and occasionally blurred vision. Diet and exercise were used to control blood glucose. Normally, blood glucose was monitored irregularly, and fasting blood glucose fluctuated between 8-9mmol / L. During the course of the disease, the general condition of the patient is OK, the diet and sleep are OK, and there is no obvious abnormality in the stool. The weight loss in the last 2 months is about 3Kg. Auxiliary examination B: Ultralow hypoechoic nodules in the right lobe of the thyroid gland, considering glial cysts; bilateral carotid atherosclerosis with right plaque formation; fatty liver; right renal cyst with calcium milk; enlarged prostate; Atherosclerosis.Admission diagnosis:1. Type 2 diabetes 2. Kidney stones 3. Thyroid nodules 4. Atherosclerosis 5. Fatty liver",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Admissions situation:",
"sec_num": null
},
{
"text": "The patient had no dry mouth and frequent drinking, no polyuria, diet and sleep were OK, physical examination: clear mind, good spirits, slightly thicker breathing sounds in both lungs, and no wet and dry rales. Heart rhythm is uniform, and no noise is heard. The abdomen is flat, the whole abdomen is soft, no tenderness, no tenderness, no edema in both lower limbs. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discharge situation:",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improving medical code prediction from clinical text via incorporating online knowledge sources",
"authors": [
{
"first": "Tian",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Slobodan",
"middle": [],
"last": "Vucetic",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference, WWW'19",
"volume": "",
"issue": "",
"pages": "72--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian Bai and Slobodan Vucetic. 2019. Improving med- ical code prediction from clinical text via incorporat- ing online knowledge sources. In The World Wide Web Conference, WWW'19, pages 72-82.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-label classification of patient notes: Case study on ICD code assignment",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Baumel",
"suffix": ""
},
{
"first": "Jumana",
"middle": [],
"last": "Nassour-Kassis",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshops of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "409--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Baumel, Jumana Nassour-Kassis, Raphael Co- hen, Michael Elhadad, and No\u00e9mie Elhadad. 2018. Multi-label classification of patient notes: Case study on ICD code assignment. In Proceedings of the Workshops of the Thirty-Second AAAI Confer- ence on Artificial Intelligence, pages 409-416.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An enhanced convolutional neural network model for answer selection",
"authors": [
{
"first": "Jiahui",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhenglu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jin-Mao",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion, WWW'17",
"volume": "",
"issue": "",
"pages": "789--790",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiahui Guo, Bin Yue, Guandong Xu, Zhenglu Yang, and Jin-Mao Wei. 2017. An enhanced convolutional neural network model for answer selection. In Pro- ceedings of the 26th International Conference on World Wide Web Companion, WWW'17, pages 789- 790.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MIMIC-III, a freely accessible critical care database",
"authors": [
{
"first": "Alistair",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Li-Wei",
"middle": [],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific Data",
"volume": "3",
"issue": "",
"pages": "16--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair Johnson, Tom Pollard, Lu Shen, Li-wei Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Celi, and Roger Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data, 3:16-35.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records",
"authors": [
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2015,
"venue": "Artificial Intelligence in Medicine",
"volume": "65",
"issue": "2",
"pages": "155--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakanth Kavuluru, Anthony Rios, and Yuan Lu. 2015. An empirical evaluation of supervised learn- ing approaches in assigning diagnosis codes to elec- tronic medical records. Artificial Intelligence in Medicine, 65(2):155-166.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.07122"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. arXiv preprint arXiv:1511.07122.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic icd-10 classification of cancers from free-text death certificates",
"authors": [
{
"first": "Bevan",
"middle": [],
"last": "Koopman",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Zuccon",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Medical Informatics",
"volume": "84",
"issue": "11",
"pages": "956--965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bevan Koopman, Guido Zuccon, Anthony Nguyen, An- ton Bergheim, and Narelle Grayson. 2015. Auto- matic icd-10 classification of cancers from free-text death certificates. International Journal of Medical Informatics, 84(11):956-965.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Molding CNNs for text: Non-linear, nonconsecutive convolutions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1565--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: Non-linear, non- consecutive convolutions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1565-1575.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshop at International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, G.s Corrado, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In Proceedings of Workshop at International Conference on Learning Represen- tations, pages 1-12.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explainable prediction of medical codes from clinical text",
"authors": [
{
"first": "James",
"middle": [],
"last": "Mullenbach",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Duke",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1101--1111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable pre- diction of medical codes from clinical text. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 1101-1111.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Large-scale multi-label text classification -revisiting neural networks",
"authors": [
{
"first": "Jinseok",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Jungi",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "F\u00fcrnkranz",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2014 European Conference on Machine Learning and Knowledge Discovery in Databases -Volume Part II",
"volume": "",
"issue": "",
"pages": "437--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinseok Nam, Jungi Kim, Iryna Gurevych, and Jo- hannes F\u00fcrnkranz. 2013. Large-scale multi-label text classification -revisiting neural networks. In Proceedings of the 2014 European Conference on Machine Learning and Knowledge Discovery in Databases -Volume Part II, pages 437-452.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Diagnosis code assignment: models and evaluation metrics",
"authors": [
{
"first": "Rimma",
"middle": [],
"last": "Adler Perotte",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Pivovarov",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Natarajan",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Weiskopf",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Wood",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "21",
"issue": "2",
"pages": "231--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adler Perotte, Rimma Pivovarov, Karthik Natarajan, Nicole Weiskopf, Frank Wood, and No\u00e9mie El- hadad. 2013. Diagnosis code assignment: models and evaluation metrics. Journal of the American Medical Informatics Association, 21(2):231-237.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Condensed memory networks for clinical diagnostic inferencing",
"authors": [
{
"first": "Aaditya",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "Siyuan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Sadid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Datla",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3274--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaditya Prakash, Siyuan Zhao, Sadid Hasan, Vivek Datla, Kathy Lee, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2017. Condensed memory net- works for clinical diagnostic inferencing. In Pro- ceedings of the Thirty-First AAAI Conference on Ar- tificial Intelligence, pages 3274-3280.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fewshot and zero-shot multi-label learning for structured label spaces",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "2018",
"issue": "",
"pages": "3132--3142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Rios and Ramakanth Kavuluru. 2018. Few- shot and zero-shot multi-label learning for structured label spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 2018, pages 3132-3142.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards automated ICD coding using deep learning",
"authors": [
{
"first": "Haoran",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Pengtao",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.04075"
]
},
"num": null,
"urls": [],
"raw_text": "Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, and Eric Xing. 2017. Towards automated ICD coding using deep learning. arXiv preprint arXiv:1711.04075.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A method for modeling co-occurrence propensity of clinical codes with application to icd-10-pcs auto-coding",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Subotin",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of the American Medical Informatics Association",
"volume": "23",
"issue": "5",
"pages": "866--871",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Subotin and Anthony Davis. 2015. A method for modeling co-occurrence propensity of clinical codes with application to icd-10-pcs auto-coding. Journal of the American Medical Informatics Asso- ciation, 23(5):866-871.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Grounded recurrent neural networks",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Vani",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.08557"
]
},
"num": null,
"urls": [],
"raw_text": "Ankit Vani, Yacine Jernite, and David Sontag. 2017. Grounded recurrent neural networks. arXiv preprint arXiv:1705.08557.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A neural architecture for automated ICD coding",
"authors": [
{
"first": "Pengtao",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1066--1076",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengtao Xie, Haoran Shi, Ming Zhang, and Eric P. Xing. 2018. A neural architecture for automated ICD coding. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics(Volume 1:Long Papers), pages 1066-1076.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-scale context aggregation by dilated convolutions",
"authors": [
{
"first": "Fisher",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Vladlen",
"middle": [],
"last": "Koltun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.07122"
]
},
"num": null,
"urls": [],
"raw_text": "Fisher Yu and Vladlen Koltun. 2016. Multi-scale con- text aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic ICD code assignment of chinese clinical notes based on multilayer attention birnn",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liangliang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhihui",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "Fang-Xiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Biomedical Informatics",
"volume": "91",
"issue": "",
"pages": "103--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Yu, Min Li, Liangliang Liu, Zhihui Fei, Fang- Xiang Wu, and Jianxin Wang. 2019. Automatic ICD code assignment of chinese clinical notes based on multilayer attention birnn. Journal of Biomedical In- formatics, 91:103-114.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Event co-reference resolution via a multi-loss neural network without using argument information. Science China Information Sciences",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Zuo",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "62",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyu Zuo, Yubo Chen, Kang Liu, and Jun Zhao. 2019. Event co-reference resolution via a multi-loss neural network without using argument information. Sci- ence China Information Sciences, 62. A Appendix: English Version of Figure 2(a)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Two kind of semantic phenomenon: explicit semantic features and implicit semantic features.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "illustrates an example of the automatic coding for a Chinese Clinical note in our system (For the convenience of readers, the English version is included in the appendix A). The left ofFigure 2(a) is the free-text notes user entered, and the right ofFigure 2(a) is predicted codes and corresponding disease names. Figure 2 (b1) and Figure 2 (b2) are the visualization of supporting information for predictions. The detailed description is presented in the section 3.2.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "The whole architecture of the model. The input is the clinical text, and output is the ICD codes. The yellow dotted box indicates how to use attention-based dilated convolution to capture the implicit semantic of noncontinuous words. The green dotted box indicates how to use n-gram matching mechanism to capture the explicit semantic of continuous n-gram words.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "An example of the dilated convolution in processing text.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"text": "Detailed information for three datasets.",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"text": "Mullenbach et al., 2018) 0.0600 0.6755 0.8832 0.9808 0.6099 0.7651 0.8305 0.8458 0.9846 0.9902 0.8796 0.9579 Dilated CNN 0.1017 0.6997 0.8637 0.9772 0.6268 0.7864 0.8399 0.8523 0.9849 0.9904 0.8807 0.9550 N-gram Matching 0.1200 0.7050 0.9574 0.9915 0.6393 0.8036 0.8385 0.8543 0.9867 0.9922 0.8900 0.9640 DACNM 0.1116 0.7127 0.9520 0.9909 0.6430 0.8043 0.8452 0.8602 0.9878 0.9932 0.8895 0.9657",
"content": "<table><tr><td>Dataset</td><td>CN-Full</td><td/><td/><td>CN-50</td><td/><td/></tr><tr><td>Model</td><td>F1 Macro Micro Macro Micro AUC</td><td>k=5</td><td>R@k k=10</td><td>F1 Macro Micro Macro Micro AUC</td><td>R@k k=5</td><td>k=8</td></tr><tr><td>CAML(</td><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "Evaluation on Chinese dataset CN-Full and CN-50.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}