Datasets:
Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
File size: 29,583 Bytes
{"forum": "Byl6W7WeeN", "submission_url": "https://openreview.net/forum?id=Byl6W7WeeN", "submission_content": {"title": "Physical Attacks in Dermoscopy: An Evaluation of Robustness for clinical Deep-Learning", "authors": ["David K\u00fcgler", "Andreas Bucher", "Johannes Kleemann", "Alexander Distergoft", "Ali Jabhe", "Marc Uecker", "Salome Kazeminia", "Johannes Fauser", "Daniel Alte", "Angeelina Rajkarnikar", "Arjan Kuijper", "Tobias Weberschock", "Markus Meissner", "Thomas Vogl", "Anirban Mukhopadhyay"], "authorids": ["david.kuegler@gris.tu-darmstadt.de", "andreasmichael.bucher@kgu.de", "johannes.kleemann@kgu.de", "alexander.distergoft@gris.tu-darmstadt.de", "ali.jabhe@gris.tu-darmstadt.de", "marc.uecker@gris.tu-darmstadt.de", "salome.kazeminia@gris.tu-darmstadt.de", "johannes.fauser@gris.tu-darmstadt.de", "daniel.alte@gris.tu-darmstadt.de", "angeelina.rajkarnikar@gris.tu-darmstadt.de", "arjan.kuijper@igd.fraunhofer.de", "tobias.weberschock@kgu.de", "markus.meissner@kgu.de", "t.vogl@em.uni-frankfurt.de", "anirban.mukhopadhyay@gris.tu-darmstadt.de"], "keywords": ["Dermoscopy", "Vulnerabilities of Deep Learning", "Adversarial Examples", "Physical World Attacks", "Real Clinical Attacks", "Skin Cancer"], "TL;DR": "We successfully attack Deep Learning for skin lesion diagnosis with simple physical world attacks showing its susceptibility.", "abstract": "Deep Learning (DL)-based diagnostic systems are getting approved for usage as fully automatic or secondary opinion products. This development derives from the achievement of expert-level performance by DL across several applications (e.g. dermoscopy and diabetic retinopathy). While recent literature shows their vulnerability to imperceptible digital manipulation of the image data (e.g. through cyberattacks), the performance of medical DL systems under physical world attacks is not yet explored. This problem demands attention if we want to safely translate medical DL research into clinical practice. In this paper, we design the first small-scale prospective evaluation addressing the vulnerability of DL-dermoscopy systems under physical world attacks in absentia of knowledge about the underlying DL-architecture. We publish the entire dataset of collected images as Physical Attacks on Dermoscopy (PADv1) for public use. The evaluation of susceptibility and robustness reveals that such attacks lead to on average 31% accuracy loss across popular DL-architectures. 
The DL diagnosis is changed by the attack in one of two cases even without any knowledge of the DL method.", "pdf": "/pdf/aa20506b7658a644dba02b353fb8245223a8dc74.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "k\u00fcgler|physical_attacks_in_dermoscopy_an_evaluation_of_robustness_for_clinical_deeplearning"}, "submission_cdate": 1544717061397, "submission_tcdate": 1544717061397, "submission_tmdate": 1545069832281, "submission_ddate": null, "review_id": ["rylM4pptfN", "H1eK8cSnmN", "HJx9vuDUGV"], "review_url": ["https://openreview.net/forum?id=Byl6W7WeeN&noteId=rylM4pptfN", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=H1eK8cSnmN", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=HJx9vuDUGV"], "review_cdate": [1547455786315, 1548667473135, 1547233378470], "review_tcdate": [1547455786315, 1548667473135, 1547233378470], "review_tmdate": [1549778551250, 1548856751847, 1548856703439], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper60/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper60/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper60/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Byl6W7WeeN", "Byl6W7WeeN", "Byl6W7WeeN"], "review_content": [{"pros": "- The authors investigate how the predictions of several standard convolutional neural network architectures trained for dermoscopic image classification change when artificial elements are added to the skin area before taking the image. This is certainly an interesting topic on which there is not much prior work in the medical domain.\n\n- Several network architectures and several types of attacks are compared.", "cons": "- The authors only investigated whether the confidence of the network is affected or whether the predicted lesion category is changed. However, it seems more logical that actual attacks would aim at changing the output in a specific way, for instance to a specific output category. These kinds of attacks are not attempted and there are also no details on how the network output changes (do the networks all favor a certain category, i.e., if the category is changed due to an attack does the output always change to that category?).\n\n- Initially, the question is posed: \u201cCan physical world attacks from the clinical setting severely affect the performance of popular DL architectures?\u201d - I believe it would make the paper stronger if this would be toned down a bit. The answer is obviously yes since basically out-of-distribution examples are presented to the networks so that a lower/different performance is the expected result. I think it would improve the paper if the authors would instead just write that they are interested in evaluating how such examples affect the performance.\n\n\nMinor comments:\n\n- In section 3.1, it is not clear what \u201cThe fine-tuned architecture consists of \u2026\u201d refers to. Is this the architecture of MobileNet, or are these some additional layers attached to each network?\n\n- Class weights are mentioned, but please explicitly state how different classes were weighted in the loss function.\n\n- It is confusing that the datasets are described relatively late in the manuscript. I would suggest moving section 3.3 before section 3.1.\n\n- In the caption of Table 1, it could be explicitly mentioned why no experiments with red lines were conducted.\n\n- In the PADv1 dataset, how was the ground truth verified? 
\n\n- In the results section, I found it confusing that first the results of the attack experiments and thereafter the baseline results of the clean images are presented.\n\n- The caption of Table 2 could mention (preferably in words, not as formula) what the robustness score expresses.\n\n- When referring to Tables and Figures, the words \u201cTable\u201d and \u201cFigure\u201d should be capitalized everywhere.\n\n- It is not really clear why calculating a weighted accuracy is not possible for the PADv1 dataset.\n\n- In the discussion, the authors write \u201cWe show small artifacts captured from the real world can significantly reduce the accuracy of DL diagnosis where dermatologists would not be impacted.\u201d - this should be toned down as well (e.g., \u201cwould LIKELY not be impacted\u201d) since it was not actually shown in this work that dermatologists are not impacted in their diagnosis.", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This is an interesting paper presenting the discovery about physical attacks in dermoscopy.\nRobustness is very important in deep learning based methods. This paper studied the robustness and susceptibility of various deep learning architectures under physical attack. \n", "cons": "The experimental dataset is relatively small, which may be subject to vulnerability;\nAlthough the discovery is interesting, I would suggest that the authors propose some methods for increasing the robustness of deep learning methods; this would be more insightful.\nIn table 1, the authors listed several physical attack types for dermoscopy applications. Is it comprehensive, and how closely is it related to the clinical setting?\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The work looks into the interesting problem of evaluating robustness of DL models in clinical settings, with a focus on scenarios where adversarial examples are used.\n\nThe research is novel as it explores the use of robustness to \u2018physical world attacks\u2019 as an approach to model evaluation, which has not previously been investigated in the medical imaging literature.\n\nOverall, the paper is written clearly and the methodology is well designed. Long term, high impact application of the work is feasible. The work also makes publicly available a new dataset (PADv1), which would make it easy to reproduce elements of this work by others. \n", "cons": "-Title:\n--------\nIf the reader is not familiar with the ML adversarial attacks literature, terms such as 'physical attacks in dermoscopy' may be confusing at the first instance. Perhaps the title can be rephrased to help convey the message of the paper. \n\nIntroduction:\n------------------\n- \u201cWhile medical systems empowered by Deep Learning (DL) are getting approved for clinical procedures ...\u201d\nPlease support by referring to examples of such systems that have received approvals. \n\n- \u201cphysical world attacks are constrained to changing the appearance of the region under consideration in the real world \u2026 \u201d\nTo ensure a robust argument is made for motivating the paper, please comment on how realistic such \u2018attacks\u2019 are in clinical settings. If they are not performed by the clinicians themselves, the attacker would need to go through a great deal of, perhaps unrealistic, effort to draw on a patient\u2019s skin, taking dermatology as an example. 
\n\n- \u201cWherever there is money to be made, some people will exploit the opportunity and abuse ambiguities, which is shown by cyber threats ...\u201d \nPlease rephrase. It doesn\u2019t read well.\n\nMethods:\n-------------\n- It is unclear what the deep models were trained to classify. Were they initially trained to classify each image into one of the seven classes that were pathologically verified? If so, does the classification problem remain the same when using applying the models on the images from the new dataset? Please clarify.\n\n- \u201cAll lesions are non suspicious for melanoma ...\u201d\nWhat is the significance of this? And also note that a large number of readers would not necessarily be familiar with terms common in dermatology.\n\nResults & Discussion:\n------------------------------\n- It appears that susceptibility is measured on a negative scale, i.e. the lower the number the more susceptible the system is. Please confirm and clarify in the text (not only figure caption) if this is true.\n\n-Accuracy on its own is generally not sufficient as an evaluation metric. It would be interesting to see how susceptibility and robustness metrics derived from, say, sensitivity and specificity of the models, compare to the currently reported observations. \n\n- Please elaborate on the limitations of this work.", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["Skgu5vEoEV", "BJls6_Vj4V", "rJgfOa4iV4", "Bkx_0Gri44", "ryxZnHHjV4"], "comment_cdate": [1549645711961, 1549646019451, 1549647209961, 1549648592298, 1549649321469], "comment_tcdate": [1549645711961, 1549646019451, 1549647209961, 1549648592298, 1549649321469], "comment_tmdate": [1555946006824, 1555946006110, 1555946005851, 1555946005150, 1555946004885], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper60/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper60/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper60/AnonReviewer1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper60/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper60/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "We thank for the evaluation and answer questions and comments.", "comment": "We thank Reviewer3 for the appreciation of the relevance of physical attacks in medical imaging and the significance of robustness in clinically applied Deep Learning.\n\nReviewer3 points out several weaknesses of our work, which we address in the order presented.\n\nReviewer3 criticizes the size of the dataset. Looking at a cost-benefit analysis w.r.t. dataset size, we believe more than 100 samples are sufficient to identify and describe the problem as well as to estimate performance in an evaluation. In our opinion, the benefit of acquiring more samples does not justify the required effort. In fact, there cannot be a representative dataset covering all possibilities, because the major challenge in cyber security issues is that attackers can choose any weakness they can identify and perfect an attack strategy, while defenders need to defend against all possible attack vectors. 
In light of this asymmetry of cyber security, the challenge to finding \u201ca solution\u201d is to be independent to this very large attack variability and robust to unknown attack patterns. While we believe additional attack patterns justify additional attention in future work (specifically less obvious attack patterns and excluded pathologies), for the general identification and establishment of the presented vulnerability, a significantly larger dataset by itself does not justify the required time of professionals and test subjects.\n\nReviewer3 identifies the lack of a solution for the investigated problem. We agree, that finding a solution to this problem is very interesting. However, with no datasets published and no previous definition of robustness considering \u201cphysical attacks\u201d, the novelty of this paper is to bring clinically relevant \u201cphysical attacks\u201d using \u201cout-of-distribution examples\u201d to the attention of the community. While it might be obvious, that \u201cout-of-distribution examples\u201d can lead to a drop in deep learning performance, this work is the first to show proof, curate a dataset to benchmark for robustness against \u201cout-of-distribution examples\u201d and present metrics for evaluation. As deep learning solutions are brought to clinical settings, there is no general agreement on robustness evaluations yet. Like others before it (Paschali et al., 2019), this paper presents and argues for the evaluation of robustness to be considered for deep learning for clinical applications. To this effect we use images that \u2013 to a degree \u2013 infringe on the acquisition protocol.\n\nReviewer3 asks about the attack patterns and their relation to the clinical setting. Table 1 lists all the patterns we generated, but due the large degree of variability more can be imagined. We specifically designed these attacks in an interdisciplinary team of computer scientists and dermatologists to mimic variation that might be encountered in a real-world scenario such as bruising, hair, etc. Nevertheless the attack designs also consider differences between common colors and common geometric patterns to exploit possibilities for confusion between classes by the network.\n"}, {"title": "We thank the Reviewer for many interesting ideas and comments. We improved our work based on the suggestions, including a new evaluation.", "comment": "We thank Reviewer1 for recognizing the relevance, novelty and rigor present in our evaluation. We appreciate the advice and interesting ideas as well as the chance to address them here. We do so in the order of the comments of the Review.\n\nReviewer1 asks for additional information on targeted attacks: Whether our method can change the decision of the network in a specific way, i.e. control the diagnosis by the attack. We find this idea interesting and provided an additional analysis (Table 4) to increase the impact of this work. While our attack strategies were not designed with targeted attacks in mind, our analysis revealed such attacks are possible.\n\nNew Table with caption (not readable if ASCII-encoded): \nhttps://www.gris.tu-darmstadt.de/short/PADTable4\nanonymous: https://pasteboard.co/I0eTCWI.png\n\nNew Text to discuss this Table: \u201cAlthough our experiments are not designed for attacks tailored to a specific class, we evaluate the prediction outcome dependent on the attack pattern. For successful attacks, Table 4 shows the likelihood of the other classes being predicted instead. 
While Melanoma is diagnosed with high chance across different patterns, \u201cvascular lesion\u201d is a very likely diagnosis for both flavors of \u2018Red Acrylic Dot\u2019-Attacks given the attack succeeded. This observation indicates that designing physical world attacks targeting specific outcomes is also possible.\u201d\n\nReviewer1 suggests to tone down the wording to make the paper stronger in introduction and discussion. In the Introduction, we changed to \u201cHow much is the performance of popular DL architectures in the clinical setting affected by out-of-distribution examples, if used as physical world attacks?\u201d from \u201cCan physical world attacks from the clinical setting severely affect the performance of popular DL architectures?\u201d \n\nMinor Comments:\n\nWe appreciate the detailed feedback and the potential to improve the paper. In most cases these comments led to minor changes, which improved the overall quality of the paper. Here we document some changes by putting new and old version next to each other.\n\nWe changed to \u201cWe modify these five architectures to a unified design of the fully-connected part (Global Average Pooling, Dropout (75%), a fully-connected layer (1024 units, ReLU activation), Dropout (75%), and a fully-connected layer with 7 outputs and softmax activation).\u201d Instead of \u201cThe fine-tuned architecture consists of Dropout (75%), a fully-connected layer (1024 units, ReLU activation), Dropout (75%), and a fully-connected layer with 7 outputs and softmax activation behind an average-pooling layer.\u201d\n\n\u201c[\u2026] we use class-weights (reciprocal frequency) to [\u2026]\u201d instead of \u201c[\u2026] we use class-weights to [\u2026]\u201d\n\nWe moved the Dataset description to Section 3.1.\n\nWith regards to red lines, we thought to imitate hair with black lines and did not think of imitating line-shaped bruising. We recognize the opportunity to extend the dataset by this type of attack, however there is not enough time to do so in the rebuttal phase. We plan to do so for a future update of PADv1 to PADv1.1 .\n\nThe ground truth is verified by clinical evaluation of the skin lesions by a dermatologist. We point Reviewer1 to subsection 3.1 Datasets (previously 3.3): \u201cPADv1 images are acquired and diagnosed by a dermatologist using a combination of a DermLite 2 Pro HR dermoscope and an iPhone 7. [\u2026] not histologically verified\u201d.\n\nWe moved the baseline performance to 4.1.\n\nTable 2 caption: \u201cRobustness of different architectures to seven Physical World Attacks: ratio of DL-predictions changed by the attack; Acronyms of attacks are [\u2026]\u201d instead of \u201cRobustness of different architectures to seven Physical World Attacks; Acronyms of attacks are [\u2026]\u201d\n\nWe capitalized Tables and Figures.\n\nAs Reviewer1 states, in general, weighted accuracy is a better indicator of performance than unweighted accuracy. However, since the PADv1-dataset is relatively small and \u201cdominated by melanocytic nevi\u201d (Subsection Datasets), weighted accuracy can lead to misleading conclusions. In particular, for we have a maximum of only 2 images of vascular lesions and no images for some other lesions in the clean dataset making the evaluation unrepresentative.\n\nIn the Discussion, we adopted Reviewer1\u2019s phrasing \u201cwould likely not be impacted\u201d, because we really did not have a blinded group of dermatologists, who rated the lesions vs. 
\u201cdeep learning\u201d."}, {"title": "Response", "comment": "Thank you for taking the time to reply to each of my comments. Do you have any intuition why \"Melanoma\" has a high probability across all attacks while other classes have usually low probability for most attacks and seem triggered only by specific patterns (maybe more what I would expect, this is pattern recognition after all)?"}, {"title": "We thank the Reviewer for many interesting ideas and comments and improved our work based on the suggestions, which includes extra evaluation published on our website.", "comment": "We thank Reviewer2 for the feedback and comments. Reviewer2 appreciates the relevance, novelty and soundness of the work and even predicts a chance for high impact of the work. Like Reviewer2, we are also very interested on how other models perform on PADv1. We address the comments in the order of the Review.\n\nTitle:\n------\nReviewer2 suggests to adjust the title \u201cto help convey the message of the paper\u201d. We kept the title as it is to bridge the gap between medical imaging and computer vision. Keeping the words \u201cphysical\u201d and \u201cattacks\u201d in increases the visibility of the work to the computer vision community. \n\nIntroduction:\n------------------\nWe provided a citation for the approval of IDx, a DL-based fully automatic diagnosis system for referral to ophthalmologists. [Bill Berkrot: U.S. FDA approves AI device to detect diabetic eye disease. Reuters]\n\nAs an interdisciplinary team of authors from computer science and dermatology, we believe that dermatologists might too easily resolve to surgery, if an incorrect positive diagnosis was given by the deep learning system. This can happen in scenarios like the following: 1) the ink of a tattoo, 2) markings made by an assistant to highlight the lesions of interest for diagnosis or surgery \u2013 this is widespread practice in the clinical setting and 3) the patient is \u201cself-diagnosing\u201d by using a smart-phone apps such as those reported in the literature, where a doctor is not present between image acquisition and analysis. In general, for attacks we assume that either the patient or the healthcare provider are \u201cattacking\u201d the deep learning decision system. \n\n\u201cGiven the deployment of an exploitable system, history has shown that its exploitation frequently happens, which is shown by [\u2026]\u201d instead of \u201cWherever there is money to be made, some people will exploit the opportunity and abuse ambiguities, which is shown by [\u2026]\u201d\n\nMethods:\n------------\nRegarding the setup of our classification problem, we confirm the understanding presented by Reviewer2. The classification problem stays the same independent of HAM10000, PADv1-clean and PADv1-attacked. We clarify the text by \u201cModels are trained for 85 epochs [\u2026].\u201d instead of \u201cModels are trained to the multi-class classification problem (7 classes of the dataset) for 85 epochs [\u2026].\u201d\n\nSince \u201cMalignant Melanoma\u201d is a potentially lethal cancer with rising incidences worldwide, the early detection and the reliable differentiation from benign skin lesions (e.g. ordinary moles) is essential for the patient outcome. As the application field of AI in dermoscopy would be pre-diagnosis or whether to refer the patient to a dermatologist (e.g. melanoma screening), the evaluation of \u201cAll lesions are non-suspicious for melanoma\u201d implies no such referral/ further testing is required. 
In the clinical context, incorrect classification (diagnosis) could lead to either undetected melanoma or unnecessary consultation of physicians or even worse to unnecessary surgery. \n\nResults & Discussion:\n----------------------------\nAs described by Reviewer2, susceptibility is measured on a negative scale. We wanted to keep the association of negative effect to negative number, positive effect to positive number. We added the following sentences to the text explaining this: \u201cNegative susceptibility-values imply reduced performance. Susceptibility is lower-bound by -1, which is the worst case of zero accuracy under attack. Ideal robust systems have a susceptibility score of zero and no difference in accuracy between clean and attacked images.\u201d\n\nAdditionally, we change the caption of the Figure to \u201cSusceptibility to physical world attacks (negative scale), worst-case: -1 (no attacked image correctly predicted); larger than 0: accuracy on PADv1-attacked larger than on PADv1-clean; [\u2026]\u201d from \u201cSusceptibility to physical world attacks, minimum: -1 (worst case, no attacked image correctly predicted); larger than 0: accuracy on PADv1-attacked larger than on PADv1-clean; [\u2026]\u201d\n\nReviewer2 is interested in the relation of susceptibility and robustness to specificity and sensitivity. While we are happy to provide extra information on those relationships, we did not find any interesting relationships worthy of introduction into the paper itself. We published these extra figures on our webpage: https://www.gris.tu-darmstadt.de/short/PADSensitivitySpecificity\n\nThe limitations in the context of the clinical scenario and realism are: \n- Attack properties: some markings close to the lesion, more variations possible, arbitrary patterns, not based on real world definition or defined by previous testing\n- Data quality: no pathological verification \n- Clinical relevance: the detection by the operator is relatively easy, attack patterns were motivated from analyzing possible confusion by the DL system rather than from the clinical scenario and future work should focus purely on a clinical scenario. "}, {"title": "\"Melanoma\" intuition", "comment": "My intuition is that Melanoma - on average - has the highest contrast for healthy to pathological skin. As such our - relatively - high contrast markings might have a higher similarity. In fact, this observation is consistent with the lowest contrast (BLOP) showing the lowest score.\nIn addition, some of our attacks were designed with the assumption in mind that a dark spot might be confused with a dark Melanoma."}], "comment_replyto": ["H1eK8cSnmN", "rylM4pptfN", "BJls6_Vj4V", "HJx9vuDUGV", "rJgfOa4iV4"], "comment_url": ["https://openreview.net/forum?id=Byl6W7WeeN&noteId=Skgu5vEoEV", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=BJls6_Vj4V", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=rJgfOa4iV4", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=Bkx_0Gri44", "https://openreview.net/forum?id=Byl6W7WeeN&noteId=ryxZnHHjV4"], "meta_review_cdate": 1551356604241, "meta_review_tcdate": 1551356604241, "meta_review_tmdate": 1551703144841, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "While two reviewers have suggested to accept the paper, one other has a stronger negative vote.\n\nAnonReviewer1 and AnonReviewer2 have voted positively, but their arguments don't point out specific technical details of the method that are novel. 
They have more points in their \"cons\" than the \"pros\". AnonReviewer2 has raised a serious concern on the motivation of the paper.\n\nIn my view, I concur with several of the issues raised by all reviewers and I present my views below.\n\nThe paper presents an analysis of the robustness of deep neural network-based classifiers in clinical scenarios. The context of the perturbation is the so-called \"physical attacks\".\n\nWhile I consider the analysis of robustness of the network to perturbations in the form of \"correlated noise\" (paintings, in the paper) interesting, the three enumerated motivations provided on Page 1 (2nd paragraph) aren't clear at all. For instance, if a rogue element wanted to break the diagnostic ability, why create the artifacts in this way only? Simply introducing any atypical object in the scene (a piece of paper, cloth, finger, pen, etc.) will cause the system to fail. Why worry about painting black dots that look like lesions or painted red dots that look like blood? Second, it is very easy for the clinical expert to check if the image is tampered in the ways presented in the paper. So, there is a good way to make the CAD system robust to attacks. Why study the robustness of the networks in the specific setting of physical attacks, and not in the generic context of the quality (generalizability, regularization) of the mapping? This is my main concern with the paper. This is echoed by Reviewer 1 as well.\n\nThe theme of the Methods section \"We place small artifacts such as dots and lines in a region around the skin lesion and evaluate, whether the DL diagnosis changes\" doesn't seem comprehensive because it restricts to very specific kinds of perturbations that don't seem to be useful to study in practice; certainly no more useful than studying the general problem of reliability of networks.\n\nThe size of the proposed dataset PADv1 is too small for evaluation to be useful. A good dataset will have at least a few thousand images. This issue has also been raised by a reviewer.\n\nThe paper only throws light on a problem (which isn't new in the general context of robustness of neural nets), but offers no solution.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Byl6W7WeeN&noteId=rklVpfUBLN"], "decision": "Reject"}
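The author response quoted in the record above spells out the unified classification head used to fine-tune the five backbone networks (Global Average Pooling, Dropout at 75%, a 1024-unit ReLU layer, Dropout at 75%, and a 7-way softmax layer) and the reciprocal-frequency class weights. Below is a minimal sketch of how such a head could be assembled, assuming a Keras/TensorFlow setup and using MobileNet (one of the architectures named in the reviews) as the backbone; the input size, weight initialization, and helper names are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch of the fine-tuning head described in the rebuttal:
# GlobalAveragePooling -> Dropout(0.75) -> Dense(1024, relu) -> Dropout(0.75) -> Dense(7, softmax).
# Backbone choice, input size, and helper names are illustrative assumptions.
from collections import Counter

import tensorflow as tf


def build_classifier(num_classes: int = 7) -> tf.keras.Model:
    backbone = tf.keras.applications.MobileNet(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dropout(0.75)(x)
    x = tf.keras.layers.Dense(1024, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.75)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(backbone.input, outputs)


def reciprocal_frequency_weights(labels) -> dict:
    """Class weights as the reciprocal of each class's frequency, as stated in the response."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / count for cls, count in counts.items()}
```

Swapping in the other four backbones would only change the first line of `build_classifier`, which is consistent with the "unified design of the fully-connected part" described in the rebuttal.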
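The rebuttal to Reviewer1 adds a Table 4 that, for successful attacks, reports how likely each other class is to be predicted instead. Neither the table nor the evaluation script is reproduced in this record, so the sketch below only illustrates the kind of tally that analysis implies; the assumed record layout (attack pattern plus clean and attacked predictions per image) is hypothetical.

```python
# Hedged sketch of a Table-4-style analysis: for attacks that changed the prediction,
# tally which class was predicted instead, per attack pattern. Field layout is assumed.
from collections import Counter


def predicted_class_given_successful_attack(records):
    """records: iterable of (attack_pattern, clean_prediction, attacked_prediction) tuples."""
    per_pattern = {}
    for pattern, clean_pred, attacked_pred in records:
        if attacked_pred == clean_pred:
            continue  # prediction unchanged, so the attack did not "succeed"
        per_pattern.setdefault(pattern, Counter())[attacked_pred] += 1
    # Normalise counts to per-pattern likelihoods.
    return {
        pattern: {cls: n / sum(counts.values()) for cls, n in counts.items()}
        for pattern, counts in per_pattern.items()
    }
```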
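The response to Reviewer2 describes susceptibility as a negative-scale score (zero when clean and attacked accuracy match, lower-bounded by -1 when no attacked image is classified correctly), and the revised Table 2 caption defines the robustness table via the ratio of DL-predictions changed by the attack. The exact formulas are not given in this record, so the sketch below encodes one plausible reading (relative accuracy change for susceptibility, fraction of flipped predictions for the attack's effect) and should be read as an assumption rather than the paper's definition.

```python
# Hedged sketch of the evaluation scores described in the reviews and responses.
# Assumed reading (not stated verbatim in this record):
#   susceptibility = (acc_attacked - acc_clean) / acc_clean   -> 0 if unaffected, -1 if acc_attacked == 0
#   prediction-change ratio = fraction of paired images whose predicted class differs clean vs. attacked
import numpy as np


def susceptibility(acc_clean: float, acc_attacked: float) -> float:
    """Negative scale: 0 for no accuracy change, -1 when attacked accuracy drops to zero."""
    return (acc_attacked - acc_clean) / acc_clean


def prediction_change_ratio(pred_clean: np.ndarray, pred_attacked: np.ndarray) -> float:
    """Ratio of DL-predictions changed by the attack, assuming paired clean/attacked images."""
    return float(np.mean(pred_clean != pred_attacked))


# Example with made-up numbers: a 31% relative accuracy loss, and half the diagnoses flipped
# (matching the abstract's "changed by the attack in one of two cases").
print(susceptibility(acc_clean=0.80, acc_attacked=0.552))                        # ~ -0.31
print(prediction_change_ratio(np.array([0, 1, 2, 3]), np.array([0, 4, 2, 5])))   # 0.5
```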