Datasets:
Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
{"forum": "HJeJx4XxlN", "submission_url": "https://openreview.net/forum?id=HJeJx4XxlN", "submission_content": {"title": "A Hybrid, Dual Domain, Cascade of Convolutional Neural Networks for Magnetic Resonance Image Reconstruction", "authors": ["Roberto Souza", "R. Marc Lebel", "Richard Frayne"], "authorids": ["roberto.medeirosdeso@ucalgary.ca", "marc.lebel@ge.com", "rfrayne@ucalgary.ca"], "keywords": ["Magnetic resonance imaging", "image reconstruction", "compressed sensing", "deep learning"], "TL;DR": "A hybrid cascade architecture for MR reconstruction", "abstract": "Deep-learning-based magnetic resonance (MR) imaging reconstruction techniques have the potential to accelerate MR image acquisition by reconstructing in real-time clinical quality images from k-spaces sampled at rates lower than specified by the Nyquist-Shannon sampling theorem, which is known as compressed sensing. In the past few years, several deep learning network architectures have been proposed for MR compressed sensing reconstruction. After examining the successful elements in these network architectures, we propose a hybrid frequency-/image-domain cascade of convolutional neural networks intercalated with data consistency layers that is trained end-to-end for compressed sensing reconstruction of MR images. We compare our method with five recently published deep learning-based methods using MR raw data. Our results indicate that our architecture improvements were statistically significant (Wilcoxon signed-rank test, p<0.05). Visual assessment of the images reconstructed confirm that our method outputs images similar to the fully sampled reconstruction reference. \n", "pdf": "/pdf/59db02fbcbae932078ee5df2135c209dd2961fff.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "souza|a_hybrid_dual_domain_cascade_of_convolutional_neural_networks_for_magnetic_resonance_image_reconstruction", "_bibtex": "@inproceedings{souza:MIDLFull2019a,\ntitle={A Hybrid, Dual Domain, Cascade of Convolutional Neural Networks for Magnetic Resonance Image Reconstruction},\nauthor={Souza, Roberto and Lebel, R. Marc and Frayne, Richard},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=HJeJx4XxlN},\nabstract={Deep-learning-based magnetic resonance (MR) imaging reconstruction techniques have the potential to accelerate MR image acquisition by reconstructing in real-time clinical quality images from k-spaces sampled at rates lower than specified by the Nyquist-Shannon sampling theorem, which is known as compressed sensing. In the past few years, several deep learning network architectures have been proposed for MR compressed sensing reconstruction. After examining the successful elements in these network architectures, we propose a hybrid frequency-/image-domain cascade of convolutional neural networks intercalated with data consistency layers that is trained end-to-end for compressed sensing reconstruction of MR images. We compare our method with five recently published deep learning-based methods using MR raw data. Our results indicate that our architecture improvements were statistically significant (Wilcoxon signed-rank test, p{\\ensuremath{<}}0.05). 
Visual assessment of the images reconstructed confirm that our method outputs images similar to the fully sampled reconstruction reference. \n},\n}"}, "submission_cdate": 1544725478897, "submission_tcdate": 1544725478897, "submission_tmdate": 1561399668386, "submission_ddate": null, "review_id": ["SJgPI6RD74", "SkeF1qDYGV", "BJgIJs6PMN"], "review_url": ["https://openreview.net/forum?id=HJeJx4XxlN&noteId=SJgPI6RD74", "https://openreview.net/forum?id=HJeJx4XxlN&noteId=SkeF1qDYGV", "https://openreview.net/forum?id=HJeJx4XxlN&noteId=BJgIJs6PMN"], "review_cdate": [1548377423292, 1547430368689, 1547324125868], "review_tcdate": [1548377423292, 1547430368689, 1547324125868], "review_tmdate": [1548856727148, 1548856704877, 1548856704453], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper85/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper85/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper85/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJeJx4XxlN", "HJeJx4XxlN", "HJeJx4XxlN"], "review_content": [{"pros": "1. The paper is well written, with a clear method description and experimental settings.\n\n2. Well-organised comparison studies.\n\n3. The proposed method is novel.", "cons": "1. The major problem I found with the experiment is that the undersampling pattern is less realistic. For example, the 2D Gaussian undersampling.\n\n2. Some important and relevant studies are neglected and should be considered for inclusion in the references:\n\nSchlemper J. et al. (2018) Stochastic Deep Compressive Sensing for the Reconstruction of Diffusion Tensor Cardiac MRI. In: Medical Image Computing and Computer Assisted Intervention \u2013 MICCAI 2018. (pp 295-303). Springer, Cham.\n\nYu, Simiao, et al. \"Deep de-aliasing for fast compressive sensing MRI.\" arXiv preprint arXiv:1705.07137 (2017).\n\n3. The authors should stick to either 'undersampling rate' or 'sampling rate'; mixing them creates confusion.\n\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}, {"pros": "1. New hybrid cascade model for deep-learning-based magnetic resonance (MR) imaging reconstruction techniques\n2. These architecture improvements were statistically significant (Wilcoxon signed-rank test, p < 0.05)\n3. Visual assessment of the images reconstructed confirm that our method outputs images similar to the fully sampled reconstruction reference.", "cons": "There is no detail on how pSNR is evaluated. What is the pSNR of the reference image?\nIt would be better to present magnified images in Figures 3 and 5.\n", "rating": "3: accept", "confidence": "1: The reviewer's evaluation is an educated guess"}, {"pros": "- Well written, well referenced, and very clearly presented\n- Comprehensive comparison between many state-of-the-art algorithms\n- The paper contains many fruitful insights: firstly, it demonstrates -- via over three different architectures -- that the unrolled approach seems to outperform a single multi-scale architecture such as U-net. Secondly, the image domain reconstruction should be done before k-space reconstruction. 
Thirdly, the authors make the effort to understand the unrolled architecture.", "cons": "- Lack of novelty: the only difference from KIKI-net is the fact that it is now trained end-to-end and the order is improved. However, this is interesting because in the KIKI-net paper, they showed that the proposed order is better than IKIK-net. Could this be attributed to end-to-end training? Please add further details here.\n\n- The paper is missing details about the number of parameters of each network. Because of this, I cannot make a fair comparison between the methods. In particular, how many convolution layers are used in each subnet & how many cascades for (a) KIKI-net and (b) Deep Cascade? For example, Hybrid net has 5 conv. layers per subnet and 6 cascades. I wonder whether the parameters are matched. Please report them and redo the experiment with the parameters matched.\n\n- Please report the SSIM value, the number of parameters, and the speed of each method.", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["r1eh4o3lV4", "HJl0Ys3x4E", "S1x86nheEE"], "comment_cdate": [1548958516148, 1548958597764, 1548958910058], "comment_tcdate": [1548958516148, 1548958597764, 1548958910058], "comment_tmdate": [1555946049133, 1555946048919, 1555946048699], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper85/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper85/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper85/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer3", "comment": "We would like to thank the reviewer for their comments. We will include a discussion of the valuable studies mentioned by the reviewer in the \u201cBrief Literature Review\u201d section of the paper. We will also use only the expression \u2018sampling rate\u2019 throughout the paper to avoid creating any kind of confusion. Concerning the undersampling pattern, we agree that Gaussian undersampling may not be optimal; we will include some discussion of other patterns in the manuscript. In future studies we will also use more realistic undersampling patterns, such as variable-density Poisson discs."}, {"title": "Response to AnonReviewer1", "comment": "We would like to thank the reviewer for their comments. The reference image for computing the pSNR is the fully sampled reconstruction. We will include a sentence in the manuscript to make this information clear. In the revised version of the paper, we will include magnified versions of the figures subject to space limitations."}, {"title": "Response to AnonReviewer2", "comment": "We would like to thank the reviewer for their comments. In the revised version of the paper we will include SSIM values in the results table, and we will further discuss the number of parameters and the speed of each method within our page limitation. \n\nConcerning training the model, we experimented with training the model one sub-network at a time (just like in the KIKI-net paper) and with training it end-to-end. There were no statistically significant differences between the two training methods and, unlike the KIKI-net paper, we did not have any stability issues when training the network fully end-to-end. We will include this discussion in the paper. 
We attribute the difference in the ordering of the sub-network domains to the local receptive field of the networks. We have that discussion in the paper:\n\u201cThe first CNN block in the cascade, unlike KIKI-net, is an image domain CNN. The reason for this is that k-space is usually heavily undersampled at higher spatial frequencies. If the cascade started with a k-space CNN block, there would potentially be regions where the convolutional kernel would have no signal to operate upon. Thus, a deeper network having a larger receptive field would be needed, which would increase reconstruction times. By starting with an image domain CNN block and because of the global property of the FT, the output of this network has a corresponding k-space that is now complete. This allows the subsequent CNN block, which is in the k-space domain, to perform better due to the absence of regions without signal\u201d, as indicated by Figure 4 of the paper. This finding may depend on the sampling strategy (Gaussian, Poisson, ...) and needs to be further investigated. We will include this as potential future work.\n"}], "comment_replyto": ["SJgPI6RD74", "SkeF1qDYGV", "BJgIJs6PMN"], "comment_url": ["https://openreview.net/forum?id=HJeJx4XxlN&noteId=r1eh4o3lV4", "https://openreview.net/forum?id=HJeJx4XxlN&noteId=HJl0Ys3x4E", "https://openreview.net/forum?id=HJeJx4XxlN&noteId=S1x86nheEE"], "meta_review_cdate": 1551356573711, "meta_review_tcdate": 1551356573711, "meta_review_tmdate": 1551881974141, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "All the reviewers agree that this paper is a strong and well-written contribution providing valuable insights into the optimal neural network architecture for MR reconstruction and a comprehensive evaluation against many recent related works. \n\nThe reviewers have pointed out that the methods were evaluated using an oversimplified undersampling pattern and that there is limited novelty compared to one of the related works. \n\nOverall, the strengths of the paper outweigh its few weaknesses. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJeJx4XxlN&noteId=rygUoGLrUV"], "decision": "Accept"}
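The record above describes the paper's central idea: a cascade that alternates image-domain and k-space-domain CNN blocks, each followed by a data consistency layer, starting in the image domain. The following is a minimal, hypothetical PyTorch sketch of that structure, not the authors' implementation; the `SmallCNN` module, layer counts, filter widths, and number of cascades are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """A shallow residual CNN operating on 2-channel (real/imaginary) maps."""
    def __init__(self, n_layers=5, n_filters=48):
        super().__init__()
        layers, ch = [], 2
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(ch, n_filters, 3, padding=1), nn.ReLU(inplace=True)]
            ch = n_filters
        layers.append(nn.Conv2d(ch, 2, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):              # x: (B, 2, H, W)
        return x + self.net(x)         # residual refinement


def to_complex(x):                     # (B, 2, H, W) float -> (B, H, W) complex
    return torch.complex(x[:, 0], x[:, 1])


def to_channels(z):                    # (B, H, W) complex -> (B, 2, H, W) float
    return torch.stack([z.real, z.imag], dim=1)


def data_consistency(x_img, k_sampled, mask):
    """Re-insert the acquired k-space samples into the reconstruction (noiseless DC)."""
    k_rec = torch.fft.fft2(to_complex(x_img), norm="ortho")
    k_dc = mask * k_sampled + (1 - mask) * k_rec
    return to_channels(torch.fft.ifft2(k_dc, norm="ortho"))


class HybridCascade(nn.Module):
    """Alternates image-domain and k-space-domain CNN blocks, each followed by a
    data consistency layer, starting in the image domain (per the author response)."""
    def __init__(self, n_pairs=3):
        super().__init__()
        self.image_blocks = nn.ModuleList([SmallCNN() for _ in range(n_pairs)])
        self.kspace_blocks = nn.ModuleList([SmallCNN() for _ in range(n_pairs)])

    def forward(self, k_sampled, mask):
        # Zero-filled starting estimate from the undersampled k-space.
        x = to_channels(torch.fft.ifft2(k_sampled, norm="ortho"))
        for img_cnn, k_cnn in zip(self.image_blocks, self.kspace_blocks):
            x = data_consistency(img_cnn(x), k_sampled, mask)   # image-domain block + DC
            k = k_cnn(to_channels(torch.fft.fft2(to_complex(x), norm="ortho")))
            x = data_consistency(                                # k-space block + DC
                to_channels(torch.fft.ifft2(to_complex(k), norm="ortho")),
                k_sampled, mask)
        return x
```

As a usage sketch, one could build a random binary mask with `mask = (torch.rand(128, 128) < 0.3).float()`, multiply it with a fully sampled complex k-space to simulate undersampling, and call `HybridCascade()(k_sampled, mask)`, training end-to-end against the fully sampled reconstruction. Placing the image-domain block first mirrors the authors' argument that the global support of the Fourier transform fills in otherwise empty high-frequency k-space regions before any k-space convolutions are applied.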