{"forum": "SkxccsWxxE", "submission_url": "https://openreview.net/forum?id=SkxccsWxxE", "submission_content": {"title": "Sparse Structured Prediction for Semantic Edge Detection in Medical Images", "authors": ["Lasse Hansen", "Mattias P. Heinrich"], "authorids": ["hansen@imi.uni-luebeck.de", "heinrich@imi.uni-luebeck.de"], "keywords": ["sparsity", "structured prediction", "edge detection", "deep learning"], "TL;DR": "Processing of image patches on a graph generated from sparse sampling locations to recover dense predictions.", "abstract": "In medical image analysis most state-of-the-art methods rely on deep neural networks with learned convolutional filters. For pixel-level tasks, e.g. multi-class segmentation, approaches build upon UNet-like encoder-decoder architectures show impressive results. However, at the same time, grid-based models often process images unnecessarily dense introducing large time and memory requirements. Therefore it is still a challenging problem to deploy recent methods in the clinical setting. Evaluating images on only a limited number of locations has the potential to overcome those limitations and may also enable the acquisition of medical images using adaptive sparse sampling, which could substantially reduce scan times and radiation doses.\n\nIn this work we investigate the problem of semantic edge detection in CT and X-ray images from sparse sampling locations. We propose a deep learning architecture that comprises of two parts: 1) a lightweight fully-convolutional CNN to extract informative sampling points and 2) our novel sparse structured prediction net (SSPNet). The SSPNet processes image patches on a graph generated from the sampled locations and outputs semantic edge activations for each patch which are accumulated in an array via a weighted voting scheme to recover a dense prediction. 
We conduct several ablation experiments for our network on a dataset consisting of 10 abdominal CT slices from VISCERAL and evaluate its performance against a baseline UNet on the JSRT database of chest X-rays.", "pdf": "/pdf/aaae0c371f9d250d3576671370fb61da640d55a1.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "hansen|sparse_structured_prediction_for_semantic_edge_detection_in_medical_images", "_bibtex": "@inproceedings{hansen:MIDLFull2019a,\ntitle={Sparse Structured Prediction for Semantic Edge Detection in Medical Images},\nauthor={Hansen, Lasse and Heinrich, Mattias P.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=SkxccsWxxE},\nabstract={In medical image analysis most state-of-the-art methods rely on deep neural networks with learned convolutional filters. For pixel-level tasks, e.g. multi-class segmentation, approaches built upon UNet-like encoder-decoder architectures show impressive results. However, at the same time, grid-based models often process images unnecessarily densely, introducing large time and memory requirements. Therefore, it is still a challenging problem to deploy recent methods in the clinical setting. Evaluating images on only a limited number of locations has the potential to overcome those limitations and may also enable the acquisition of medical images using adaptive sparse sampling, which could substantially reduce scan times and radiation doses.\n\nIn this work we investigate the problem of semantic edge detection in CT and X-ray images from sparse sampling locations. 
We propose a deep learning architecture that comprises two parts: 1) a lightweight fully-convolutional CNN to extract informative sampling points and 2) our novel sparse structured prediction net (SSPNet). The SSPNet processes image patches on a graph generated from the sampled locations and outputs semantic edge activations for each patch, which are accumulated in an array via a weighted voting scheme to recover a dense prediction. We conduct several ablation experiments for our network on a dataset consisting of 10 abdominal CT slices from VISCERAL and evaluate its performance against a baseline UNet on the JSRT database of chest X-rays.},\n}"}, "submission_cdate": 1544719250462, "submission_tcdate": 1544719250462, "submission_tmdate": 1561397408388, "submission_ddate": null, "review_id": ["S1l4u1a_m4", "rkxjAyO3QE", "B1gWGkX5mN"], "review_url": ["https://openreview.net/forum?id=SkxccsWxxE&noteId=S1l4u1a_m4", "https://openreview.net/forum?id=SkxccsWxxE&noteId=rkxjAyO3QE", "https://openreview.net/forum?id=SkxccsWxxE&noteId=B1gWGkX5mN"], "review_cdate": [1548435308456, 1548677074735, 1548525320814], "review_tcdate": [1548435308456, 1548677074735, 1548525320814], "review_tmdate": [1548856756420, 1548856754247, 1548856735095], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper66/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper66/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper66/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SkxccsWxxE", "SkxccsWxxE", "SkxccsWxxE"], "review_content": [{"pros": "The submission addresses very relevant problems in CNN based medical image analysis. Computation time and memory constraints are important issues. 
The proposed framework deals with computation time and memory constraints via sparse image analysis and it furthermore makes it possible to incorporate more context (another important issue in image analysis) in the network via graph CNNs.\n\nThe results are impressive, with quite a big improvement over U-nets, while the proposed architecture has far fewer parameters. I do have some concerns, however, regarding the experiment, which may bias it in favor of the proposed approach (see comment below). Nevertheless, I very much appreciate the creativity of this paper and the ambition to solve the above-mentioned challenges in medical image analysis.\n\nI recommend accept conditional on minor changes/clarifications.", "cons": "I put all my minor and major comments and suggestions in this section.\n\nThe paper puts a lot of emphasis on context aggregation; however, it is not completely clear to me how this is achieved. The authors mention that they rely on graph CNNs on which they perform pooling (via graph diffusions), but there is no explanation of how this contributes to a higher contextual \u201cunderstanding\u201d of the data. I would appreciate it if this were explained somewhere in the document (preferably in the introduction).\n\nA related issue is the following. In the conclusion the following is mentioned: \u201cwe showed that GCNNs can successfully mimic \u2026 UNet-like encoder-decoder architectures of pooling global context information\u201d. I think that this sentence in a way downplays your own work. I don\u2019t think it was the goal to mimic UNets, but I also don\u2019t directly see how it mimics the context aggregation features of UNet type architectures other than that there is some global information analysis going on. The UNets are hierarchical in nature (they are in this sense not unique; there are many other multi-scale analysis approaches in MedIA) and I don\u2019t think this hierarchical nature is apparent in this paper. 
As far as I can tell, the method works \u201conly\u201d on two levels (though quite effectively): at the pixel level (structure head) and at the global level (graph based semantic head).\n\nFinally, regarding context aggregation: could you explain in what way context is exploited? I have a feeling that the semantic head sort of has the function of telling the structure head \u201chey, I am pretty confident that you can correctly predict this class in this region, but I am not going to let you contribute to the segmentation in this region\u201d, but I\u2019m not fully sure that this is the idea behind this splitting. In a na\u00efve way, you could also just let the semantic head spit out confidence scores of 1 for each location and each class (it is then standard Hough voting). It would be nice if the motivation for the design were better explained in the paper.\n\nIn the related work section of the introduction the transition to work on graphs comes a bit out of the blue. Up to that point there is no mention that the proposed work relies on graph CNNs. Perhaps the introduction could be improved by mentioning up-front the general idea of the paper?\n\nDue to some missing details in the paper on graph CNNs I went to study the references of this paper and found that the embedding of some of these references in the introduction is not fully correct. It is suggested that the works of Henaff et al. [H] and Kipf and Welling [KW] aim for local support of spectral filters in response to the work by Bruna et al. [B], which supposedly doesn\u2019t have this property. In [B] the filters are indeed localized (localization is obtained through smoothness in the spectral domain) and the main contribution of [H] is not to enable local support (they do rely on results of [B]) but rather to describe theory for constructing graphs when they are not yet defined a priori; they additionally nicely present/summarize the framework for graph-CNNs. 
A large part of [KW] is indeed concerned with locality of the spectral filters. Instead of relying on splines (as is done in [B] and [H]), [KW] rely on truncated Chebyshev polynomials based on the work of Hammond et al. (2011), and describe clear properties of this approach regarding the support size of the graph filters.\n\nThere is a serious typo in the first equation. In the definition of the adjacency matrix the 2 \sigma^2 should be within the exponential. If it is outside, as it is now, the sigma does not have any effect on the graph other than scaling all weights (this scaling is for example undone in the Laplace operator D^{-1}A and is also undone by simple scaling of the graph convolution kernels). \n\nSmall suggestion: in the second equation either the \alpha or the \beta can be omitted (only one parameter is sufficient to balance the two terms). \n\nStart of section 2.2: Could you explain why you designed the network in such a way (parallel structure and semantic head)? In principle you don\u2019t need the semantic head for Hough voting, but it does seem to improve the results. Some intuition would be appreciated.\n\nOn page 5 you describe the pooling of features on a graph. At first it was not apparent to me how this is done, but I believe the approach is a sort of equivalent of average pooling in classical CNNs (except for the down-sampling part, which is not done in this work). Perhaps this link, or some intuition, could be provided in the paper.\n\nRegarding the diffusion process I would personally find an intuitive explanation more important than trying to describe the mathematics of it, especially because I have the feeling that the provided matrix L is incorrect (see also your own work Hansen et al. 2018 for definitions of L). The part about the \u201cdiffusion matrix\u201d on page 5 is unclear: you provide a matrix which I believe is usually referred to as the \u201ctransition matrix\u201d in Brownian motion processes. 
This matrix could be used to define a Laplacian operator L = I \u2013 D^{-1}A, which in turn can be used to describe a diffusion process (p \u2190 p - Lp). I have several problems with this paragraph: 1. there is probably a typo (a missing identity matrix); 2. the section does not describe how this matrix is used; 3. nor does it provide intuition (e.g. you pool features by means of graph smoothing, similar to average pooling in standard CNNs). I had to dive into spectral theory on graphs to understand what you meant in this section.\n\nIn the experimental setup (on page 6) you describe that class weighting was not applied. I think this choice could have a quite severe effect on your experiments. In neither of the networks in this paper do you deal with the imbalance of labels. The fact that the U-net underperforms could be due to the lack of balancing the data/losses. See also figure 3, where the small bladder is completely ignored by the U-net. Of course it could also be that your method is more robust against this imbalance (possibly due to the sampling strategy). This would indeed be a good thing, but it is not addressed in the paper. For me this leaves the impression that I cannot really tell if your method is intrinsically better, or if it is just less sensitive to unbalanced data.\n\nThe sigma parameter is set very small (0.1). This would mean that the Gaussians decay within a pixel distance. Is there any connectivity left then? Or do you use normalized coordinates?\n\nSmall typo in the results section: \u201cyields a higher score as all UNet\u201d, \u201cas\u201d -> \u201cthan\u201d.\n\nFinally some additional questions:\n\nIs the computation time indeed reduced compared to e.g. 
the U-Nets?\n\nWhat kind of graph-CNN is used (you mention some variations in the introduction but not which one you actually use; I suppose the same \u201cconvolution\u201d type as the one in Kipf and Welling is used)?\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "* This paper is clearly written with respect to purpose, methodology, and results.\n* This paper presents a novel method using a CNN for extracting sampling locations and a patch-based network with a GCNN semantic head and a CNN structure head (i.e., the proposed sparse structured prediction net) to solve the challenging problem of edge detection of multiple organs in medical images.\n* In the experiments, the authors showed that the proposed method, which utilizes the proposed network with the sampling points, outperformed conventional FCN (U-net)-based approaches.", "cons": "* In terms of the parameters for network training in the validation experiments, the reasons why the authors chose some of the values are not explained sufficiently.\n\nComments\n- In terms of the losses for network training, the authors set the control parameters of the BCE and DICE losses to 0.001 and 1, respectively. Please describe why the values were set to decrease the effect of BCE on the loss computation. Also, the authors should describe why class weighting was not applied to the class-specific loss.\n- In Figs. 3 and 4, the qualitative results and the quantitative results should be divided into a separate figure and table, respectively. Also, I suggest adding a legend for the anatomical structures in the figures instead of describing them in the figure titles.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The article proposes a novel structured prediction method for semantic edge detection in CT and X-ray images. 
A fully convolutional network extracts sparse sample locations from the original input images. The sparse prediction network takes patches from these sampling locations as input. This SSPNet contains a CNN path to produce edge features and a GCNN path to weight these patches. Hough voting is used to accumulate the predictions and obtain a dense semantic edge map. The authors evaluate the method on two datasets against standard baselines. The results from the experiments indicate a significant improvement over the baseline methods. \n\nPros:\n1. A novel deep learning method for pixel level prediction by processing image data on sparse and irregular grids instead of dense grids.\n2. Few sparse samples for structured prediction reduce the memory and time limitations for edge detection tasks in medical images.\n3. The experimental results demonstrate the effectiveness of the work on two datasets.\n4. The proposed model has 2.5 times fewer learnable parameters than the baseline (UNet-L), yet performs 1 and 1.6 percent better on the two datasets.\n5. Figures 1 and 2 help to understand the article better.", "cons": "Minor Comments\n1. How does the fully convolutional CNN influence the prediction of SSPNet? Can similar performance be achieved with fewer samples? Or can other sample selection mechanisms make any difference?\n2. Would increasing the number of training samples or applying data augmentation help the baseline methods (UNet)? 
\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["S1lclKSjNE", "rkg-iqBi4N", "S1ligirsNV", "B1gtznBsVV"], "comment_cdate": [1549650162413, 1549650584854, 1549650675442, 1549650960917], "comment_tcdate": [1549650162413, 1549650584854, 1549650675442, 1549650960917], "comment_tmdate": [1555946004664, 1555946004450, 1555946004232, 1555946004018], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper66/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper66/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper66/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper66/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Shared Response to All Reviewers", "comment": "First, we would like to thank all reviewers for their time and their helpful feedback. In this shared response we try to summarize positive comments as well as concerns mentioned by more than one reviewer. Note that an individual comment is added where needed.\n\nWe appreciate that all reviewers commented positive on the novelty and the potential interest of our approach for the MIDL community.\n\nA legitimate question was raised by the reviewers whether the use of an unweighted cross entropy loss could have biased the experiments in favor of our method. We did not apply class weighting in our initial submission, because we found no benefit for our approach. Understandably, this does not directly imply that it should be omitted for the U-Net baselines. We therefore repeated the experiments for the U-Nets with class weighting. 
For the first experiment on VISCERAL CT slices, the results improve to ODS values of .769 (UNet-S), .791 (UNet-M) and .834 (UNet-L). This means that, besides being more robust against class imbalance (also mentioned by reviewer #3), our best model (ODS of .827) still performs almost on par with the UNet-L (2.5x more parameters) and better than the UNet-S and UNet-M. For the second experiment on JSRT chest X-rays, class weighting does not improve on the results of the baseline UNets (ODS of .874 (UNet-S), .878 (UNet-M), .884 (UNet-L) and .900 (ours)).\n\nOf course, we will correct the remaining typos (especially in our first equation) pointed out by the reviewers in our final submission.\n"}, {"title": "Response to Reviewer #2", "comment": "We thank the reviewer for the positive comments. Please also note the shared response to all reviewers above.\n\nTo answer the question about parameter selection for our loss function: we performed a hyperparameter search for our baseline method and then kept these parameters fixed. Additionally, the chosen parameters also correspond to the values used in the original publication of the proposed loss, see [Deng et al. 2018]. \n"}, {"title": "Response to Reviewer #1", "comment": "We thank the reviewer for the positive comments and further questions. Please also note the shared response to all reviewers above.\n\nIn this work, we did not further investigate different sampling strategies for our approach. But, as the reviewer rightfully mentions, this is clearly of high interest and we think it is a good topic for future research, especially when incorporating our approach into an end-to-end learning framework.\n"}, {"title": "Response to Reviewer #3", "comment": "We are very grateful to the reviewer for her/his very detailed feedback, helpful suggestions as well as the interesting discussion of several topics addressed in our work. 
At this point we would also like to refer to our shared response to all reviewers above, as it covers the concerns of the reviewer regarding class weighting in our loss function.\n\nWe think the main suggestion of the reviewer is to describe our general idea more clearly up-front. This includes the separation of our network into two heads, one working at the pixel level and one at a global level (not hierarchically, as the reviewer rightfully mentions), and our idea of context aggregation. Therefore, we will incorporate a paragraph in the introduction. In the same paragraph it seems reasonable to briefly discuss which kind of graph CNN is used (Diffusion-Convolutional Neural Networks, Atwood et al., 2015) for the graph-based semantic head. The work of Atwood et al. will be included in our references.\n\nFinally, we would like to briefly answer two remaining questions of the reviewer:\n\n\"The sigma parameter is set very small (0.1). This would mean that the Gaussians decay within a pixel distance. Is there any connectivity left then? Or do you use normalized coordinates?\"\n\nAs the reviewer suspects, we use normalized coordinates (in the range of -1 to 1). We will add this information in our final submission.\n\n\"Is the computation time indeed reduced compared to e.g. the U-Nets?\"\n\nAt this point, our SSPNet is slower than the U-Net. We believe that this is due to our current implementation, which uses im2col and col2im operations for patch extraction and Hough voting, respectively. As we only need to extract a few samples from an image, this is highly inefficient and we hope to speed up processing with a more suitable operation, e.g. 
PyTorch's grid sampling function.\n\n"}], "comment_replyto": ["SkxccsWxxE", "rkxjAyO3QE", "B1gWGkX5mN", "S1l4u1a_m4"], "comment_url": ["https://openreview.net/forum?id=SkxccsWxxE&noteId=S1lclKSjNE", "https://openreview.net/forum?id=SkxccsWxxE&noteId=rkg-iqBi4N", "https://openreview.net/forum?id=SkxccsWxxE&noteId=S1ligirsNV", "https://openreview.net/forum?id=SkxccsWxxE&noteId=B1gtznBsVV"], "meta_review_cdate": 1551356577960, "meta_review_tcdate": 1551356577960, "meta_review_tmdate": 1551881978634, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The paper has received positive feedback from all three reviewers. Some of the concerns raised by the reviewers were addressed in the author rebuttal. I'd suggest the authors apply the important aspects of the reviewer comments in the final version of the paper. \nThe paper is on an interesting topic and proposes an interesting technique for edge detection on a semantic level!", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SkxccsWxxE&noteId=H1x9sGUrLN"], "decision": "Accept"}