AMSR / conferences_raw / midl19 / MIDL.io_2019_Conference_Sygt37F21E.json
{"forum": "Sygt37F21E", "submission_url": "https://openreview.net/forum?id=Sygt37F21E", "submission_content": {"title": "Semantic segmentation of cell nuclei and cytoplasms in microscopy images", "authors": ["Christian Eschen", "S\u00f8ren Blaaberg", "Ole Winther", "Rasmus Reinhold Paulsen"], "authorids": ["s123656@student.dtu.dk", "sbl@chemometec.com", "olwi@dtu.dk", "rapa@dtu.dk"], "keywords": ["Fully convolutional neural networks", "semantic segmentation", "deep learning", "microscopy imaging", "fluorescent imaging"], "TL;DR": "Semantic segmentation of cell nuclei and cytoplasms in microscopy images", "abstract": "Microscopy imaging of cell nuclei and cytoplasms is a powerfull technique for research, diagnosis and drug discovery. However, the use of fluorescent microscopy imaging for cell nuclei and cytoplasms labeling is time consuming and inconvenient for several reasons,thus there is a lack of fast and accurate methods for prediction of fluorescence cell nuclei and cytoplasms from bright-field microscopy imaging. We present a method for labeling bright-field images using convolutional neural networks. We investigate different convolutional neural network architectures for cell nuclei and cytoplasms prediction. Using the DeepLabv3+, we found relative impressive results with a 5-fold cross validation dice coefficient equal to 0.9503 as well as meaningful segmentation maps. This work shows proof of concept regarding microscopy fluorescence labeling of cell nuclei and cytoplasms using bright-field images", "code of conduct": "I have read and accept the code of conduct.", "pdf": "/pdf/2eb29414653a452bdd4342c1cced5bfdc8ebe95b.pdf", "paperhash": "eschen|semantic_segmentation_of_cell_nuclei_and_cytoplasms_in_microscopy_images"}, "submission_cdate": 1544487857367, "submission_tcdate": 1544487857367, "submission_tmdate": 1545069847098, "submission_ddate": null, "review_id": ["B1gxYr0nME", "rJgzmojVfE", "SJgrlAf1N4"], "review_url": ["https://openreview.net/forum?id=Sygt37F21E&noteId=B1gxYr0nME", "https://openreview.net/forum?id=Sygt37F21E&noteId=rJgzmojVfE", "https://openreview.net/forum?id=Sygt37F21E&noteId=SJgrlAf1N4"], "review_cdate": [1547654519567, 1547119385957, 1548852717395], "review_tcdate": [1547654519567, 1547119385957, 1548852717395], "review_tmdate": [1548856709561, 1548856703189, 1548856678590], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper6/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper6/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper6/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Sygt37F21E", "Sygt37F21E", "Sygt37F21E"], "review_content": [{"pros": "The manuscript describes semantic segmentation in microscopic images. The work consists of comparative evaluation of semantic segmentation methods using three state-of-the-art convolutional neural networks, namely U-Net, Tiramisu and Deeplabv3+. The manuscript is well organised, clearly written and has good motivation. The work mentions a custom U-Net inspired by original U-Net, however, the design process and differences from latter are not clear from the description.", "cons": "State-of-the-art methods are compared for semantic segmentation in microscopic images. A custom U-Net is applied but not clearly discussed. The application area is interesting but the experiments are performed on limited datasets and cross-validation is only performed for one method. 
Only one evaluation metric (Dice) is used, so the performance evaluation is not conclusive. The following problems need to be addressed by the authors.\n1. The authors propose a custom U-Net. They should specify why this architecture was used and how it is superior to the original U-Net. What was the quantitative difference in their performance? \n2. The comparative evaluation seems incomplete. Why was only one method cross-validated? The custom U-Net was not compared to the original U-Net. Other evaluation metrics could be used, as the current evaluation isn't conclusive. The standard deviation across cross-validation folds could also be specified.\n3. The dataset described is limited to 170 images, but deep architectures require learning from large-scale datasets. The authors mention augmentation, but the size of the augmented data is not specified. Though the authors mention that the learning curves did not show signs of overfitting, an example of such a curve could be illustrated and discussed.\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The authors claim that it is possible to obtain information equivalent to that given by fluorescence microscopy (images where nuclei and cytoplasms are labelled with different fluorochromes, and therefore it is possible to distinguish both parts of the cells almost uniquely) with bright-field microscopy data. \nThe problem at hand is very interesting for the improvement of biological results, as fluorochromes are sometimes toxic and might change the metabolism and behaviour of cells. \nBesides, each of the trained convolutional neural networks (U-net, Tiramisu and Deeplabv3+) results in a high Dice coefficient (0.91, 0.93 and 0.94 respectively), showing the great potential of the employed methods. \n", "cons": "The methods used in this paper are already published and widely discussed convolutional neural network architectures (U-net, Tiramisu, Deeplabv3+) that are shown to work very well in this case. However, the writing style is so messy that it does not make clear the process the authors followed to obtain the presented results. There is also a large number of format errors, and the use of English should be reviewed:\n- Author names and institutions are missing!\n- Some references are missing in the text and appear as \u2018#\u2019 (in the first paragraph, for instance)\n- Format errors in the bibliography: \u201cCVPR\u201d is written as \u201cCvpr\u201d, \u201cComputer Vision and Pattern Recognition Workshops (CVPRW)\u201d, \u201cProceedings of the IEEE conference on computer vision and pattern recognition\u201d\n- \u201cyielding different label images.\u201d \u2192 yielding different images FOR EACH LABEL\n- \u201cwith some of blocks\u201d \u2192 some of THE blocks\n- \u201cThe encoding phase consist\u2026 \u201d \u2192 consistS\n- \u201cwithout resampling and to, patches\u201d \u2192 ???\nand so on.\n\nIn terms of clarity, I would highlight the following points:\n- The abstract does not clearly reflect the main motivation of this work and exactly what concept the authors want to prove: the use of image processing methods for the prediction of cell nuclei and cytoplasms from a different (less toxic, and less expensive in terms of work) microscopy modality, such as bright-field microscopy.\n- The process used to build the ground truth is not explained.\n- The software used for the implementation of the networks is not specified. 
\n- The data for cross-validated training of Deeplabv3+ was split into 136 and 44 images, while the authors only had 170 images. Therefore, some of the images must be included in both the training and validation datasets. Might this be a reason why the reported accuracy measure in Table 1 is higher than the one for Deeplabv3+ without cross-validation?\n- There are some questions that should be addressed in the manuscript: Why did you decide to use cross-validation only for Deeplabv3+? Equation 1: What is the value range of k?\n- The font size in Figure 1 is too small.\n\nThe results show that it might be possible to segment cell nuclei and cytoplasms from bright-field microscopy. In order to prove it, I would say that the data should be more heterogeneous: different cell lines and microscopy devices. \n\nAir bubbles are quite common in bright-field microscopy, and a preprocessing step to remove these parts of the images (or the whole image) might slow down the process or even introduce some bias in cases where discarding air bubbles is not correct. Do you think that a machine learning method could learn to discard the pixels belonging to air bubbles and classify them as background, for example? How would you evaluate it (in fluorescence microscopy, bubbles are not a problem as the fluorescent signal is recorded in any case, and therefore these pixels will not appear as background in the ground truth)?\n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper proposed a method for labeling bright-field images using three types of convolutional neural networks, i.e., the U-Net, Tiramisu, and Deeplabv3+ models. The experiments were performed on 170 2D images.\n", "cons": "1. The main concern is the novelty of the proposed method, since the authors simply tested three types of CNN for segmentation and no new methodology is proposed here. Why use these three networks and not others?\n\n2. Besides, the paper is hard to follow. For instance, in the experimental setting, there are two similar sentences: \u201cThe data was split into 153 training images and 17 validation images\u201d and \u201cThe data was randomly split into 136 training images and 44 for validation.\u201d From the perspective of a reader, it is not clear at all. \n\n3. The same issue occurs in Table 1 and Fig. 2, where there is no clear explanation of the difference between the method \u201cDeeplabv3+\u201d and \u201cDeeplabv3+ (cross validation)\u201d.", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1551356567820, "meta_review_tcdate": 1551356567820, "meta_review_tmdate": 1551703121279, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "While the results look visually (and numerically) impressive, the three reviewers agree (and so do I) that this article does not go beyond applying existing techniques to a particular dataset, with limited comparison to other methods / datasets. 
\n\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Sygt37F21E&noteId=HklliG8H8E"], "decision": "Reject"}