Datasets:
Tasks:
Text Classification
Modalities:
Text
Formats:
text
Languages:
English
Size:
10K - 100K
License:
{"forum": "HJl6fpzgg4", "submission_url": "https://openreview.net/forum?id=HJl6fpzgg4", "submission_content": {"title": "Collaborative slide screening for the diagnosis of breast cancer metastases in lymph nodes", "authors": ["Gianluca Gerard", "Marco Piastra"], "authorids": ["gianluca.gerard01@universitadipavia.it", "marco.piastra@unipv.it"], "keywords": ["fully convolutional network", "few-shot learning", "meta-learning", "sparse annotation", "lymph nodes", "camelyon16", "histopathological images"], "TL;DR": "Using collaborative fully convolutional networks for screening Whole Slide Images in histopathological diagnosis", "abstract": "In this paper we assess the viability of applying a few-shot algorithm to the segmentation of Whole Slide Images (WSI) for human histopathology. Our ultimate goal is to design a deep network that could screen large sets of WSIs of sentinel lymph-nodes by segmenting out areas with possible lesions. Such network should also be able to modify its behavior from a limited set of examples, so that a pathologist could tune its output to specific diagnostic pipelines and clinical practices.\nIn contrast, 'classical' supervised techniques have found limited applicability in this respect, since their output cannot be adapted unless through extensive retraining.\nThe novel approach to the task of segmenting biological images presented here is based on guided networks, which can segment a query image by integrating a support set of sparsely annotated images which can also be extended at run time.\nIn this work, we compare the segmentation performances obtained with guided networks to those obtained with a Fully Convolutional Network, based on fully supervised training. Comparative experiments were conducted on the public Camelyon16 dataset; our preliminary results are encouraging and show that the network architecture proposed is competitive for the task described.", "pdf": "/pdf/f3ab9ab404a7a661098337feaf798baa346c4e05.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "gerard|collaborative_slide_screening_for_the_diagnosis_of_breast_cancer_metastases_in_lymph_nodes"}, "submission_cdate": 1544723733446, "submission_tcdate": 1544723733446, "submission_tmdate": 1545069824249, "submission_ddate": null, "review_id": ["BJeXcBiu7V", "Syxjffvo74", "BklLd1bqmV"], "review_url": ["https://openreview.net/forum?id=HJl6fpzgg4¬eId=BJeXcBiu7V", "https://openreview.net/forum?id=HJl6fpzgg4¬eId=Syxjffvo74", "https://openreview.net/forum?id=HJl6fpzgg4¬eId=BklLd1bqmV"], "review_cdate": [1548428682699, 1548608019396, 1548517229816], "review_tcdate": [1548428682699, 1548608019396, 1548517229816], "review_tmdate": [1549305507432, 1548856742587, 1548856733950], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper81/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper81/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper81/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJl6fpzgg4", "HJl6fpzgg4", "HJl6fpzgg4"], "review_content": [{"pros": "The paper presents an application of a recent few-shot learning algorithm (Guided Network) to the problem of lymph node segmentation in histopathological images. \n", "cons": "\n- no methodological novelty; the presented method is an application of an existing work (\u201cGuided Network\u201d) to a lymph node segmentation dataset.\n\n- no error bars are provided. For example, the model trained with (5 shots, 10 points) annotations performs better than the model trained with (1 shot, dense) annotations, which seems strange given that dot annotations are generated from dense annotations.\n\n- the details of optimisation are incomplete e.g. optimiser, learning rate, etc. \n\n- two of the requirements stipulated in the introduction are not empirically validated; 1) \u201cbe collaborative and easy to use\u201d; 2) \u201crequiring minimal maintenance.\u201d \n\nOverall, despite the well-communicated relevance of the topic, the paper lacks both methodological novelty and empirical validation of its utility in the considered application. \n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper addresses the problem of scarcely available dense manual annotations for supervised learning in histopathology image segmentation, and propose using parse annotations in a framework based on few-shot learning.\nThe problem tackled by this paper is very relevant, because manual annotations are time consuming and very expensive, specially when pathologists have to be involved, whereas sparse annotations are easier to make.\n\nThe title suggests that a framework for collaborative annotations, possibly by involving multiple users, is presented, which is a novel approach in the context of the Camelyon challenge and to metastases detection in lymph-nodes in general, to the best of my knowledge.", "cons": "The paper lacks clarity in the order and in the details in which components are introduced and applied, and several parts of the paper are difficult to understand.\nFurthermore, it is not clear where the \"collaborative\" part of the whole methodology takes place.\nThe only point where this is mentioned is in the section about \"late fusion\", but I do not understand how new annotations added during inference can make the model collaborative. What would be a good use case scenario? This should be explained in the paper.\n\nAdditional comments:\n\n* In section 3.1, a training set of s samples and a test set of t samples are introduced. What is the size of s and t, and what should be their order of magnitude to make this method effective? I guess t << s, otherwise one could just rely on dense annotations from the test set and use them for training. Experiments with different ratios t:s should be performed\n\n* What is the method actually tested on? What is called test set seems to be used during training, so there should be another set used for the actual validation of the method, but it's not introduced.\n\n* Figure 2 is not clear. I think the direction of the arrow between g and m is wrong. Furthermore, components are used here that are described later in the paper, which makes it very difficult to understand. Those components are actually introduced in the section about experiments, instead of in the method section.\n\n* Patches are labeled as lesion \"if at least one pixel in the center window of size 224x224 was annotated as lesion\". Is this the central pixel or any pixel in the patch? The way it's written it seems that it is any pixel in the patch, which I don't think is a good choice.\n\n* \"Bilinear interpolation for downsampling\" sounds a bit odd.\n\n* Table 1 and Table 2 show results with sparse and dense annotations, but it's not clear if dense refers to FCN-32. If it does, why are the numbers in the text different from the ones in the table?\n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper presents a deep network that could screen large set of WSIs of sentinel lymph nodes by segmenting out the areas with possible lesions. It is hypothesized that such network can even help to correct and adapt its behavior from a limited set of examples, which is an important limitation in medical AI applications today. \n\nThe idea of guidance is promising (however prior in DL is not a new idea), and combination of the guidance within the episode learning can be strong once its dynamics is shown in relatively more difficult problem where its additive value is proved. ", "cons": "-- Almost entire paper is dedicated to describe the late-fusion technique and unfortunately there is very little description of early fusion. It\u2019s not clear how early fusion variation model is trained. More information will address the ambiguity.\n\n-- The function \"f\" (which integrates the representation into the second network) is not clearly defined. \nIt would be necessary to clarify how this representation is integrated into the second network?\n\n--Although authors think the results are encoring, the overall results do not seem promising, and there is a lack of comparison with other methods. Hard to understand where this method stands compare to other available ones.\n\n--There is a lack of valid explanations/ justification on why the results for dense labels worse than that of 5 or 10 points ? \nIsn't that dense labels should be the ideal case of the sparse annotations? \n\n-- if human annotations and network output are overlaid, it could be easier to see where the mistakes are, but in the current form, it is hard to analyze images (see figure 3)\n\n--technical novelty is questionable, because few shot learning model is taken from Rarely et al 2018, and applied to histopathology images, and results are \"not\" presented in an elaborative way, therefore the application novelty remains questionable due to unjustified claims. ", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["SyxO7xWIVV", "Sylz5Z-8EV", "HJeOsfWLV4", "HkgzN4WIV4"], "comment_cdate": [1549303840350, 1549304202214, 1549304479614, 1549304873528], "comment_tcdate": [1549303840350, 1549304202214, 1549304479614, 1549304873528], "comment_tmdate": [1555946033884, 1555946033665, 1555946033455, 1555946033231], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper81/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper81/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper81/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper81/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Clarifications on the target scenario", "comment": "We agree that, in its present form, the paper does not clarify the target scenario of collaboration between the system and the human experts:\n\n* the system proposed should be able to segment automatically those regions, if any, in each WSI that might contain lesion(s) and that are worth inspecting by a human pathologist;\n\n* a classical, supervised training would be impractical for such purpose since it would require the upfront availability of a large dataset of annotated WSIs and, even in such case, it might prove to be inflexible in the results produced, thus entailing acceptance problems by the human experts;\n\n* in contrast, few-shots learning could provide a much better technical alternative.\n\nThe few-shots learning approach presented has two main advantages:\n\n* the behavior of the system, in terms of the segmentation produced, can be corrected on-the-fly (i.e. without complete retraining) by providing a set of support images: such correction set will contain images in which the original segmentations produced automatically will have been corrected by human experts \n\n* the annotations, both in the training and in the correction set, will be defined by human experts as a set of individual pixels (sparse annotations) instead of polyline-enclosed regions (dense annotation)\n\nIn such perspective it is true that the work presented is preliminary. Yet, we believe it is important to show, as a first step that:\n\n* the sparse annotation can be equally effective as dense annotation for purpose of supervised training;\n\n* the late-fusion architecture, which is the most promising for the above scenario, is not any least effective than a more \u2018classical\u2019 FCN-based segmentation method.\n\nAlthough it is true that the few-shots architecture adopted was adapted from the one originally published by the authors in reference (see Rakelly et al. 2018), the work presented proves that the method is indeed applicable to histopathological WSIs and not just generic, real-world images. \nClearly, we would be pleased to improve on the aspects above, should we have the chance to revise our work."}, {"title": "Clarifications", "comment": "Please for the target scenario and for what we mean by collaborative system refer to the general comments posted above.\nReplies and clarifications to the additional comments follow.\n\n* As referenced in section 3.1, \u201cIn few-shot learning [regime], s is small\". Section 4 shows that we have trained successfully with s = 1 and s = 5. This is typical in few shot learning, the case with s = 1 is also known as one-shot learning. There is no guidance on how large the query set should be, this is the set on which we make the inference and therefore, if the algorithm is used to generate the segmentation of one or multiple WSIs t will be as large as the number of patches extracted from that set of WSIs.\n\n* We have renamed the support and query set to more familiar terms. However we agree that this can be confusing because what is called the query set (that we have also dubbed test set), is the set of patches that our algorithm, during inference, segments and that we compare with the ground truths to measure the algorithm performance. In the final version of the paper we will revert to the standard definition of support and query set only to avoid any confusion.\n\n* Thank you for noticing the mistake in Figure 2. The direction of the arrow between g and m should be reversed. With this figure our goal was to introduce the network architecture earlier and provide a high level overview of it in section 3.2 while sharing the complete details of all the components in section 3.3.3.\n\n* We have marked a patch as lesion if any pixel in the center region of the patch is classified as lesion. This approach is taken from Liu et al. (see section 2 \u201cWe label a patch as tumor if at least one pixel in the center region is annotated as tumor\u201d). Their approach, to the best of our knowledge, has been validated by obtaining the top performance in the Camelyon16 challenge.\n\n* We use bilinear interpolation to downsample the annotations. These, in fact, must have the same dimension of the output of the VGG-16 backbone. We will rephrase in the final paper to make it clearer.\n\n* The FCN-32s results are not reported in either table. Table 1 has the results obtained with different s and different number of annotations for the \u201clate fusion\u201d (best performing) algorithm. Table 2 has the corresponding results for the \u201cearly fusion\u201d algorithm. FCN, instead, is a single algorithm and it can only work with dense annotations. The results obtained with FCN-32s are only reported in the body of the paper. We will add a note to make clearer that the FCN-32s results are not included in either table."}, {"title": "Clarifications", "comment": "* Due to length constraints (we were explicitly advised not to exceed 10 pages), we didn\u2019t manage to include details around the early fusion version of the algorithm. In the final paper we will rebalance the content to make a reference to the differences between the two and we will provide links to the relevant references.\n\n* We agree with your comment about function f, and we will provide further details on f in the final version of the paper. f is a function that can be learned end-to-end. In the current implementation it is a two-layer convolutional neural network followed by a bilinear interpolator for upsampling. Function f takes as input the concatenation of the query features obtained from the backbone network \\phi with the latent representation z tiled, if necessary, to the spatial dimensions of the query. \n\n* Due to time constraints we couldn\u2019t conduct a comparison with other segmentation networks commonly used for biomedical images, such as U-net. However, to the best of our knowledge, this is the first method that provide at least comparable results to a common approach, such as FCN, but relying only on sparse annotations. This is, per se, a good result as it addresses the time consuming phase of manual annotations.\n\n* For late and early fusion and 1 shot the dense results are better across the various metrics. For 5 shots and late fusion the dense annotation performs slightly worse than 10 points annotation for the tissue accuracy metric and worse than 5 points annotation on the lesion accuracy. This might be justified by considering that sometimes dense annotations are not accurate at the borders. Using them densely can introduce noise during the network training and degrade the accuracy. Sampling few points from such annotations, instead, because the randomly chosen annotation points are unlikely to fall at the borders, can sometimes improve the performance of the sparsely annotated results over the densely annotated ones.\n\n* Good suggestion on superimposing the human annotations with the network output. We will superimpose the images in the final paper.\n\nThese are indeed preliminary results, but we think that having obtained results comparable to a well established method by just using sparse annotations and with the prospect of having a tool that could adapt its behavior with a limited training (support) set was important to be shared with a wider community of researchers. See also our initial comments above.\n"}, {"title": "Clarifications", "comment": "* To the best of our knowledge this is the first time guided networks are applied to biomedical images and we believe this should be shared with the wider research community as they perform on par or better than established segmentation algorithms. Although this is preliminary work we have already demonstrated with our paper the viability of applying sparse annotations to the segmentation of biomedical images. This results already provides a significant saving in the process of creating the training set. \n\n* Due to time constraints and extensive training time we couldn\u2019t run multiple iterations of the experiment to add error bars to the results presented.\n\n* On the experimental results, please consider: it is expected that 1 shot runs, where we are only providing one single image as support set, perform worse than 5 shots run where the support set contains 5 images. Further context on this matter is provided in our reply to a similar comment made by the previous reviewer.\n\n* Concerning the optimisation details we will include those in a new revision of the paper. As an optimizer we used Stochastic Gradient Descent, as a learning rate we used 1E-5, with momentum 0.99 and weight decay 0.0005. The weights are frozen at the iteration which provides the best accuracy across 60000 iterations (sampled every 4000 iterations). As a loss function we used Binary Cross Entropy.\n\nPlease, also refer to our initial comments above for further clarity."}], "comment_replyto": ["HJl6fpzgg4", "Syxjffvo74", "BklLd1bqmV", "BJeXcBiu7V"], "comment_url": ["https://openreview.net/forum?id=HJl6fpzgg4¬eId=SyxO7xWIVV", "https://openreview.net/forum?id=HJl6fpzgg4¬eId=Sylz5Z-8EV", "https://openreview.net/forum?id=HJl6fpzgg4¬eId=HJeOsfWLV4", "https://openreview.net/forum?id=HJl6fpzgg4¬eId=HkgzN4WIV4"], "meta_review_cdate": 1551356554698, "meta_review_tcdate": 1551356554698, "meta_review_tmdate": 1551703107030, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "There is a consensus among Reviewers about the recommendation for this paper that I support. The authors tried to explain their decisions, and it is appreciated but it is not sufficient for the acceptance of the paper.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJl6fpzgg4¬eId=Hkx7qzLBI4"], "decision": "Reject"} |