Datasets:
Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
{"forum": "BJl2cMBHlN", "submission_url": "https://openreview.net/forum?id=BJl2cMBHlN", "submission_content": {"title": "CARE: Class Attention to Regions of Lesion for Classification on Imbalanced Data", "authors": ["Jiaxin Zhuang", "Jiabin Cai", "Ruixuan Wang", "Jianguo Zhang", "Weishi Zheng"], "authorids": ["zhuangjx5@mail2.sysu.edu.cn", "caijb5@mail2.sysu.edu.cn", "wangruix5@mail.sysu.edu.cn", "j.n.zhang@dundee.ac.uk", "wszheng@ieee.org"], "keywords": ["Attention Mechanism", "Imbalanced Data", "Small Samples", "Skin Lesion", "Pneumonia Chest X-ray"], "abstract": "To date, it is still an open and challenging problem for intelligent diagnosis systems to effectively learn from imbalanced data, especially with large samples of common diseases and much smaller samples of rare ones. Inspired by the process of human learning, this paper proposes a novel and effective way to embed attention into the machine learning process, particularly for learning characteristics of rare diseases. This approach does not change architectures of the original CNN classifiers and therefore can directly plug and play for any existing CNN architecture. Comprehensive experiments on a skin lesion dataset and a pneumonia chest X-ray dataset showed that paying attention to lesion regions of rare diseases during learning not only improved the classification performance on rare diseases, but also on the mean class accuracy. ", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "pdf": "/pdf/c628f4ac68531cd0a121f899b026019b00b3ec60.pdf", "paperhash": "zhuang|care_class_attention_to_regions_of_lesion_for_classification_on_imbalanced_data", "_bibtex": "@inproceedings{zhuang:MIDLFull2019a,\ntitle={{\\{}CARE{\\}}: Class Attention to Regions of Lesion for Classification on Imbalanced Data},\nauthor={Zhuang, Jiaxin and Cai, Jiabin and Wang, Ruixuan and Zhang, Jianguo and Zheng, Weishi},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=BJl2cMBHlN},\nabstract={To date, it is still an open and challenging problem for intelligent diagnosis systems to effectively learn from imbalanced data, especially with large samples of common diseases and much smaller samples of rare ones. Inspired by the process of human learning, this paper proposes a novel and effective way to embed attention into the machine learning process, particularly for learning characteristics of rare diseases. This approach does not change architectures of the original CNN classifiers and therefore can directly plug and play for any existing CNN architecture. Comprehensive experiments on a skin lesion dataset and a pneumonia chest X-ray dataset showed that paying attention to lesion regions of rare diseases during learning not only improved the classification performance on rare diseases, but also on the mean class accuracy. 
},\n}"}, "submission_cdate": 1545061011626, "submission_tcdate": 1545061011626, "submission_tmdate": 1561397213465, "submission_ddate": null, "review_id": ["ryx4RNc37V", "HkxEeD5l7V", "HyxOP5mJ4N"], "review_url": ["https://openreview.net/forum?id=BJl2cMBHlN¬eId=ryx4RNc37V", "https://openreview.net/forum?id=BJl2cMBHlN¬eId=HkxEeD5l7V", "https://openreview.net/forum?id=BJl2cMBHlN¬eId=HyxOP5mJ4N"], "review_cdate": [1548686540203, 1547900652341, 1548855903733], "review_tcdate": [1548686540203, 1547900652341, 1548855903733], "review_tmdate": [1548856757197, 1548856711557, 1548856678139], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper153/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper153/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper153/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJl2cMBHlN", "BJl2cMBHlN", "BJl2cMBHlN"], "review_content": [{"pros": "1. The paper attempts to address the classification problem in skin lesions and pneumonia in chest x-rays with a focus on 'paying attention' to the the ROI. It claims that focussing on the ROI of the minor class improves performance in cases of high data imbalance.\n\n2. The idea of forcing the Grad-CAM output to be inline with the bounding boxes is interesting. So is the idea of the 'inner' and 'outer' losses. \n\n3. The variety of experiments performed is extensive, and the improvement in results make a favourable case for the proposed method. ", "cons": "1. Authors' claim of improved performance in imbalanced data when attending to ROIs is backed solely by empirical evidence. At the outset, the improved performance can be attributed to higher loss values for the minority class, induced by the additional supervision in the form of bounding boxes. (since L_a = 0 for majority class which doesn't have bounding boxes, eq. 1?). A more rigorous backing in this regard would be of interest. Otherwise, the novelty of the work is limited.\n\n2. Attention is fully supervised in this case, and hence it should be made explicit that the term 'attention' here is not equivalent to its traditional counterparts in literature [1] [2]. \n\n3. The text seems to under-estimate the effort of requiring an additional annotations, even for the minority class. A dataset with 1 million examples with 10000 minority examples is still an imbalanced dataset. Also, it would be interesting to see if this ratio of this imbalance is crucial.\n\n4. Minor: L_g in text above eq. 1 is undefined. Few errors in text. \n\n\n[1] Oktay et al. 'Attention U-Net: Learning Where to Look for the Pancreas'. In: MIDL 2018.\n[2] Jetley et al. 'Learn to pay attention'. In: ICLR 2018", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "A well written paper with a clear motivation, interesting evaluations and good amount of detail.\n\nThe core idea of the paper is to optimize saliency maps during training in order to guide classification networks to attend to the expected image regions. According to the narrative of the paper, this aims at improving learning from few examples per class (although this should not be limited to class imablance scenarios). Explicitly optimizing to attend to salient regions appears to be the main novelty of the paper. The proposed approach is independent of choice of (deep) classification architecture, which of course is a nice property to have. 
The authors present a compelling analysis of how inter- and intra-rater variations, as simulated by bounding box tightness variations, affect the approach.", "cons": "The approach essentially solves the problem of having too few examples per class by using denser labels. Here the training of classification models (which rely on image labels) is improved by incorporating bounding boxes. This limits the method's applicability to datasets with bounding box labels, a domain on which object detectors are known to perform very well. One advantage of the method here, though, is that it relaxes this requirement, in the sense that it can still be trained on classification alone for images/classes which are not labelled with bounding boxes. (As a meta comment: similar performance gains can be observed in recent object detectors, which are typically trained using bounding boxes but can be improved by training on even denser labels, i.e. pixel-wise segmentation maps.)\n\nThere are a few technical details that remain unmotivated or un-evaluated: Why is ImageNet pretraining required here? Why was the CARE-loss only used to finetune and not during training from the start? What data-augmentation (method 'DA') was employed? The appropriateness of the chosen metrics, recall and mean class accuracy, is not discussed either.\n\nThe impact of the learned attention/localization on the classification performance was evaluated. It would however also be interesting to evaluate the attention/localization itself in terms of appropriate metrics such as average precision instead of only showing a few test set examples in Fig. 2.\n\nThere exists a body of literature that employs saliency techniques (in part also building on Grad-CAM) in order to perform localization in the image space. Although these works appear not to explicitly optimize the obtained saliency maps, they could be discussed in the related work section.\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- The authors show a new method to deal with class imbalance by adding a new loss that forces the network activation into a previously labelled ROI.\n\n- They make clever use of a visualization technique (Grad-CAM) in the learning process.\n\n- They provide a pretty good validation of their technique and a comparison with other approaches to dealing with class imbalance.\n\n- It is a well written and well presented paper.", "cons": "- I am not sure the technique should be called attention since it is fully supervised.\n\n- The authors claim that their selection of bbox or alpha parameters does not change the final result. This is clearly not the case for the Recall on the pneumonia dataset. I think this is likely because in the skin cancer dataset their CARE method does not make a huge improvement with respect to the other augmentation methods (table 1), since most of the images are centred around the area of interest anyway (as shown in Fig 2). However, in the pneumonia dataset selecting a relevant ROI makes a lot of difference because the lesions are multiple and interspersed around the image. 
Therefore they will be more affected by the selected ROI or by the weight of the attention loss.", "rating": "3: accept", "confidence": "1: The reviewer's evaluation is an educated guess"}], "comment_id": ["BkxyNihXNN", "rkle8DZm44", "Hyl5vIb7VE", "Syg3ss8WrN"], "comment_cdate": [1549155111021, 1549109063765, 1549108833631, 1550048163536], "comment_tcdate": [1549155111021, 1549109063765, 1549108833631, 1550048163536], "comment_tmdate": [1555946039097, 1555946038832, 1555946038573, 1555945959378], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper153/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper153/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper153/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper153/Area_Chair1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "No Title", "comment": "We thank the reviewer for their helpful comments, and we are really pleased to hear that they enjoyed reading our manuscript. Please find below our responses to the comments, based on which we will update our final paper.\n\n\nC1: \u201c... This limits the method's applicability to datasets with bounding box labels, a domain on which object detectors are known to perform very well. One advantage of the method here, though, is that it relaxes this requirement, in the sense that it can still be trained on classification alone for images/classes which are not labelled with bounding boxes.\u201d\n\nR1: Yes, we agree that our method can indeed leverage the bounding box information (if available during training) from the minority class to improve the classification performance, in particular the accuracy on the minority class during testing. Considering that the minority class consists of just a few hundred (or fewer) training images, it should be a relatively easy task for clinicians to provide bounding box annotations for those images in the minority class. As indicated by this reviewer, the bounding box annotations could be generated automatically by an object detector, but only if the detector performs well enough that the detected bounding boxes correspond to the lesion regions in each training image of the minority class. In addition, as pointed out by the reviewer, one advantage of our method is its flexibility in training the classifier without bounding boxes.\n\n\nC2: \u201cWhy is ImageNet pretraining required here?\u201d\n\nR2. ImageNet pre-training is widely adopted as a default choice when training data is limited in the medical domain. In our experiments, it sped up the convergence of training and thus helped to improve the classification performance within a limited training time; however, it does not alter the findings of the experimental evaluation. We will add a discussion about this in the revised version.\n\n\nC3: Why was the CARE-loss only used to finetune and not during training from the start?\n\nR3: We experimentally found that fine-tuning the pre-trained model with the CARE loss performs more stably and better than training with the whole loss from scratch. This is probably because pre-training the model with the cross-entropy loss can help to find a better starting point for the fine-tuning phase. 
In contrast, training the model from scratch could distract the model from learning to extract discriminative features between classes, by focusing too much on the \u2018attention loss\u2019 term.\n\nC4. What data-augmentation (method 'DA') was employed?\n\nR4. We used common data augmentation techniques, including rotation, flipping, and color jitter.\n\n\nC5. The appropriateness of the chosen metrics, recall and mean class accuracy, is not discussed either.\n\nR5: Our dataset is unbalanced. The use of the OVERALL classification accuracy would have led to a biased estimate of the performance, i.e., the performance could be dominated by the majority class, thus overlooking the minority class. Therefore, we chose to use the mean class accuracy as an unbiased estimate of the performance on all the classes, which is a sensible metric when facing data imbalance. The mean class accuracy provides a global picture of the performance on the test dataset, while the use of recall gives a focused evaluation of the performance on the minority class.\n\n\nC6: \u201c... It would however also be interesting to evaluate the attention/localization itself in terms of appropriate metrics such as average precision instead of only showing a few test set examples in Fig. 2.\u201d\n\nR6: Thank you for the good suggestion! We will perform and add the relevant quantitative evaluation as suggested, for example, using the average AUC (area under the curve) as the metric, although it is not straightforward to choose a specific localization threshold for each activation map.\n\n\nC7: \u201cThere exists a body of literature that employs saliency techniques ... they could be discussed in the related work section.\u201d\n\nR7: We will revise the related work section of the paper as suggested."}, {"title": "No Title", "comment": "We thank the reviewer for their helpful comments! We hope the clarifications below help address the concerns. \n\u00a0\nC1: \u201cAuthors' claim of improved performance in imbalanced data when attending to ROIs is backed solely by empirical evidence. At the outset, the improved performance can be attributed to higher loss values for the minority class, induced by the additional supervision in the form of bounding boxes. (since L_a = 0 for majority class which doesn't have bounding boxes, eq. 1?). A more rigorous backing in this regard would be of interest. Otherwise, the novelty of the work is limited.\u201d\n\u00a0\nR1: We proposed a new approach to improving the classification performance on minority classes (e.g., rare diseases), \n by explicitly embedding \u2018attention\u2019 into the learning process of neural networks. Extensive experiments support that the proposed approach is effective. In this study, we did not aim to provide a theoretical proof of the proposed approach. But in order to further back up our idea, i.e., that it is the attention loss for the minority class, rather than just adding more weight to the minority class, that improved the classification performance, we ran one more experiment comparing the effect of the attention loss with that of putting more weight on the minority class. The new experiment showed that simply adding more weight to the minority class did not always improve the classification performance on the minority class and actually severely degraded the performance on the majority class(es). In comparison, our approach (with the attention loss) always performed better, which is consistent with the extensive experimental results in the current version of the paper. 
We will add the new results in the final version.\n\u00a0\n\u00a0\nC2: \u201cAttention is fully supervised in this case, and hence it should be made explicit that the term 'attention' here is not equivalent to its traditional counterparts in the literature [1] [2].\u201d\n\u00a0\nR2: Thank you for the good suggestion! We will make it more explicit that the \u2018attention\u2019 here is not equivalent to that in the traditional literature, by clarifying that the \u2018attention\u2019 is provided in the form of bounding boxes in advance and is only used during training for the minority class. \n\u00a0\n\n\u00a0C3: \u201cThe text seems to under-estimate the effort of requiring additional annotations, even for the minority class. A dataset with 1 million examples with 10000 minority examples is still an imbalanced dataset. Also, it would be interesting to see whether the ratio of this imbalance is crucial.\u201d\n\u00a0\nR3: We would like to kindly point out that the purpose of the study is to improve the classification performance for the minority (rare disease) classes WHEN the minority class contains few (hundreds or even fewer) training samples. Such data are very difficult to acquire, particularly for rare diseases in the medical domain, and may under-represent the whole category. Therefore, a minority class containing 10000 samples would arguably not be considered \u2018minor\u2019 in our scenario, in which case training the model with general data imbalance techniques (e.g., by adding class weights) may be enough, requiring fewer images to be annotated. Regarding the effect of the imbalance ratio between classes, we will perform an additional experiment and include the results and discussion in the final paper.\n\n\u00a0\nC4. \u201cMinor: L_g in the text above eq. 1 is undefined. A few errors in the text.\u201d \n\u00a0\nR4. Thank you for pointing out the typo. We will correct it. "}, {"title": "No Title", "comment": "Thank you for the positive and constructive comments! \n\u00a0\nC1: \u201cI am not sure the technique should be called attention since it is fully supervised.\u201d\n\u00a0\nR1: It is true that the term *attention* used in our paper does not exactly match what is meant in some of the literature. The reason we still call it attention is that the inspiration for the proposed approach comes from the learning/training process of humans (e.g., clinician trainees) who learn to recognize diseases from few examples, i.e., by focusing their attention on lesion regions during learning. We will further clarify the difference between the attention here and that in the relevant literature in the final version.\n\u00a0\u00a0\n\nC2: \u201cThe authors claim that their selection of bbox or alpha parameters does not change the final result. This is clearly not the case for the Recall on the pneumonia dataset. I think this is likely because \u2026\u201d\n\u00a0\nR2: Thank you for the detailed inspection and the very reasonable explanation! We will add such a discussion to the final paper.\n"}, {"title": "Convincing rebuttal", "comment": "This paper presents a new approach to learning from imbalanced data and validates it using two medical applications: skin image classification (with multiple disease categories) and pneumonia detection in chest X-ray images. The approach is based on an additional labeling step for the rare cases (bounding box generation) and explicitly embeds \u201cattention\u201d into the learning process of a neural network. 
The latter is achieved by adding a new loss to the classification network that forces the network to attend to the labelled regions of interest. Comprehensive experiments suggest that the approach is effective.\n\nThe judgement of the paper was mixed but mainly positive (2 accepts, 1 reject). The reviewers agree that the paper addresses a relevant problem with an interesting approach. They also highlight the relevant property that the method is independent of the choice of (deep) classification architecture and thus widely applicable. Finally, they agree that the manuscript is well-written and includes a convincing experimental analysis, including a comparison to state-of-the-art competing approaches to addressing class imbalance.\n\nThe points of criticism raised were:\n\n1.\tThe absence of more examples of one class is compensated for by more resource-intensive labeling of that class (R2,R3). While this is an inherent property of the approach, it can be argued that the method has specifically been designed for cases when only up to a few hundred samples are available for the rare class, and hence the human labeling effort is acceptable.\n\n2.\tThe usage of the term \u201cattention\u201d should be reconsidered due to the supervised learning approach (R1,R2). The authors agree and aim to address this point in the revised manuscript.\n\n3.\tA more thorough discussion of the contribution in the context of the state of the art is required (R2,R3). This will be addressed in the revised version of the manuscript according to the authors\u2019 rebuttal.\n\n4.\tSeveral aspects related to methodology (R2,R3) and experiments (R1-3) require further analysis/clarification, as detailed in the individual reviews. The authors provided point-by-point responses corresponding to these comments. Moreover, they performed an additional experiment in order to show that simply adding more weight to the minority class does not generally increase performance on the rare class and typically downgrades performance on the majority class(es). \n\nDue to the general support of the reviewers and the convincing rebuttal, I suggest acceptance of the paper.\n\n\n"}], "comment_replyto": ["HkxEeD5l7V", "ryx4RNc37V", "HyxOP5mJ4N", "BJl2cMBHlN"], "comment_url": ["https://openreview.net/forum?id=BJl2cMBHlN&noteId=BkxyNihXNN", "https://openreview.net/forum?id=BJl2cMBHlN&noteId=rkle8DZm44", "https://openreview.net/forum?id=BJl2cMBHlN&noteId=Hyl5vIb7VE", "https://openreview.net/forum?id=BJl2cMBHlN&noteId=Syg3ss8WrN"], "meta_review_cdate": 1551356596874, "meta_review_tcdate": 1551356596874, "meta_review_tmdate": 1551881982840, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper presents a new approach to learning from imbalanced data and validates it using two medical applications: skin image classification (with multiple disease categories) and pneumonia detection in chest X-ray images. The approach is based on an additional labeling step for the rare cases (bounding box generation) and explicitly embeds \u201cattention\u201d into the learning process of a neural network. The latter is achieved by adding a new loss to the classification network that forces the network to attend to the labelled regions of interest. Comprehensive experiments suggest that the approach is effective.\n\nThe judgement of the paper was mixed but mainly positive (2 accepts, 1 reject). The reviewers agree that the paper addresses a relevant problem with an interesting approach. 
They also highlight the relevant property that the method is independent of the choice of (deep) classification architecture and thus widely applicable. Finally, they agree that the manuscript is well-written and includes a convincing experimental analysis, including a comparison to state-of-the-art competing approaches to addressing class imbalance.\n\nThe points of criticism raised were:\n\n1.\tThe absence of more examples of one class is compensated for by more resource-intensive labeling of that class (R2,R3). While this is an inherent property of the approach, it can be argued that the method has specifically been designed for cases when only up to a few hundred samples are available for the rare class, and hence the human labeling effort is acceptable.\n\n2.\tThe usage of the term \u201cattention\u201d should be reconsidered due to the supervised learning approach (R1,R2). The authors agree and aim to address this point in the revised manuscript.\n\n3.\tA more thorough discussion of the contribution in the context of the state of the art is required (R2,R3). This will be addressed in the revised version of the manuscript according to the authors\u2019 rebuttal.\n\n4.\tSeveral aspects related to methodology (R2,R3) and experiments (R1-3) require further analysis/clarification, as detailed in the individual reviews. The authors provided point-by-point responses corresponding to these comments. Moreover, they performed an additional experiment in order to show that simply adding more weight to the minority class does not generally increase performance on the rare class and typically downgrades performance on the majority class(es). \n\nDue to the general support of the reviewers and the convincing rebuttal, I suggest acceptance of the paper.\n\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJl2cMBHlN&noteId=Bklp2f8B8V"], "decision": "Accept"}
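The reviews and rebuttal above revolve around one mechanism: an attention loss that pushes Grad-CAM activation maps into annotated lesion bounding boxes, with "inner" and "outer" terms, where the attention term L_a is zero for majority-class images that carry no boxes (Eq. 1 as paraphrased by Reviewer 1). The PyTorch sketch below illustrates that idea; it is a minimal reconstruction, not the authors' implementation, and the function names (grad_cam_map, care_attention_loss), the exact form of the inner/outer terms, and the weight alpha are all assumptions.

```python
# Minimal sketch of a CARE-style attention loss (assumptions: names, the
# exact inner/outer formulation, and the normalization are illustrative).
import torch
import torch.nn.functional as F

def grad_cam_map(features, logits, target_class):
    """Grad-CAM map for one image.

    features: (C, H, W) activations of the last conv layer; must be part of
              the autograd graph (e.g., captured via a forward hook).
    logits:   (num_classes,) classifier outputs for the same image.
    """
    grads = torch.autograd.grad(logits[target_class], features,
                                create_graph=True)[0]  # keep graph so the
    weights = grads.mean(dim=(1, 2))                   # loss stays differentiable
    cam = F.relu((weights[:, None, None] * features).sum(dim=0))
    return cam / (cam.max() + 1e-8)                    # normalize to [0, 1]

def care_attention_loss(cam, box_mask):
    """Penalize activation outside the box ('outer') and weak
    activation inside it ('inner'); box_mask is 1 inside the bbox."""
    outer = (cam * (1.0 - box_mask)).mean()
    inner = ((1.0 - cam) * box_mask).mean()
    return inner + outer

def total_loss(logits, label, features, box_mask=None, alpha=1.0):
    """Cross entropy plus a weighted attention term; the attention term is
    zero for images without bounding boxes (i.e., the majority class)."""
    loss = F.cross_entropy(logits.unsqueeze(0), label.view(1))  # label: long
    if box_mask is not None:
        cam = grad_cam_map(features, logits, int(label))
        cam = F.interpolate(cam[None, None], size=box_mask.shape,
                            mode="bilinear", align_corners=False)[0, 0]
        loss = loss + alpha * care_attention_loss(cam, box_mask)
    return loss
```

This also matches the reviewer's reading that CARE is plug-and-play: nothing in the sketch depends on the classifier's architecture, only on access to its last convolutional feature maps.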
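R5 defends the chosen metrics: overall accuracy can be dominated by the majority class, so the paper reports mean class accuracy (a global, class-balanced picture) plus recall on the minority class. A small NumPy sketch of both, with a toy case showing why overall accuracy misleads under imbalance (helper names are illustrative):

```python
# Sketch of the metrics discussed in R5; helper names are hypothetical.
import numpy as np

def recall(y_true, y_pred, cls):
    """Fraction of true class-`cls` samples predicted as `cls`."""
    mask = y_true == cls
    return float((y_pred[mask] == cls).mean()) if mask.any() else float("nan")

def mean_class_accuracy(y_true, y_pred):
    """Average of per-class recalls: every class counts equally, so the
    majority class cannot dominate the score as with overall accuracy."""
    classes = np.unique(y_true)
    return float(np.mean([recall(y_true, y_pred, c) for c in classes]))

# Toy case: 90 majority + 10 minority samples, minority entirely missed.
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)
print((y_pred == y_true).mean())             # overall accuracy: 0.90
print(mean_class_accuracy(y_true, y_pred))   # mean class accuracy: 0.50
print(recall(y_true, y_pred, 1))             # minority recall: 0.00
```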
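The rebuttal's additional experiment compares the attention loss against simply up-weighting the minority class in the loss. That baseline is standard class-weighted cross entropy; a possible form is sketched below, where the class counts and the inverse-frequency weighting scheme are hypothetical, not taken from the paper:

```python
# Hypothetical class-weighted cross-entropy baseline (the comparison the
# rebuttal says under-performs the attention loss).
import torch

counts = torch.tensor([900.0, 100.0])            # assumed per-class counts
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
criterion = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                       # dummy batch of predictions
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)                 # rare-class errors cost more
```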
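Finally, R4 names the data augmentation ("DA") used: rotation, flip, and color jitter. A torchvision pipeline along those lines could look as follows; the specific parameter values are assumptions, not the paper's settings:

```python
# Possible augmentation pipeline matching R4's "rotation, flip, color jitter";
# degrees and jitter strengths are assumed, not taken from the paper.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=20),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])
```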