{"forum": "S1lhkdKkeV", "submission_url": "https://openreview.net/forum?id=S1lhkdKkeV", "submission_content": {"title": "Scalable Neural Architecture Search for 3D Medical Image Segmentation", "authors": ["Sungwoong Kim", "Ildoo Kim", "Sungbin Lim", "Chiheon Kim", "Woonhyuk Baek", "Hyungjoo Cho", "Boogeon Yoon", "Taesup Kim"], "authorids": ["swkim@kakaobrain.com", "ildoo.kim@kakaobrain.com", "sungbin.lim@kakaobrain.com", "chiheon.kim@kakaobrain.com", "wbaek@kakaobrain.com", "joysquare@snu.ac.kr", "eric.yoon@kakaobrain.com", "taesup.kim@umontreal.ca"], "keywords": ["AutoML", "Neural Architecture Search", "Medical Image Segmentation"], "TL;DR": "Scalable Neural Architecture Search for 3D Medical Image Segmentation", "abstract": "In this paper, a neural architecture search (NAS) framework is formulated for 3D medical image segmentation, to automatically optimize a neural architecture from a large design space. For this, a novel NAS framework is proposed to produce the structure of each layer including neural connectivities and operation types in both of the encoder and decoder of a target 3D U-Net. In the proposed NAS framework, having a sufficiently large search space is important in generating an improved network architecture, however optimizing over such a large space is difficult due to the extremely large memory usage and the long run-time originated from high-resolution 3D medical images. Therefore, a novel stochastic sampling algorithm based on the continuous relaxation on the discrete architecture parameters is also proposed for scalable joint optimization of both of the architecture parameters and the neural operation parameters. This makes it possible to maintain a large search space with small computational cost as well as to obtain an unbiased architecture by reducing the discrepancy between the training-time and test-time architectures. 
On the 3D medical image segmentation tasks with a benchmark dataset, a 3D U-Net automatically designed by the proposed NAS framework outperforms the previous human-designed 3D U-Net as well as a randomly designed 3D U-Net; moreover, the optimized architecture is more compact and well suited to be transferred to similar but different tasks.", "pdf": "/pdf/5096b127dcbabc70050fbfeb330b03376ecb2e19.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "kim|scalable_neural_architecture_search_for_3d_medical_image_segmentation"}, "submission_cdate": 1544685540021, "submission_tcdate": 1544685540021, "submission_tmdate": 1545069828380, "submission_ddate": null, "review_id": ["S1xdzbdD7N", "BJeEFwM8QV", "B1xcUzicGV"], "review_url": ["https://openreview.net/forum?id=S1lhkdKkeV&noteId=S1xdzbdD7N", "https://openreview.net/forum?id=S1lhkdKkeV&noteId=BJeEFwM8QV", "https://openreview.net/forum?id=S1lhkdKkeV&noteId=B1xcUzicGV"], "review_cdate": [1548349712470, 1548261243814, 1547510354453], "review_tcdate": [1548349712470, 1548261243814, 1547510354453], "review_tmdate": [1548856726032, 1548856722281, 1548856706259], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper34/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper34/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper34/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["S1lhkdKkeV", "S1lhkdKkeV", "S1lhkdKkeV"], "review_content": [{"pros": "The authors investigate a neural architecture search (NAS) framework for U-Net optimization in the setting of 3D medical image segmentation. \n\nTo be more specific, what is subject to the automatic tuning is the precise arrangement of operations within the network \"cells\" (the sequence of pooling / concat / conv / skip connections within a layer); the high-level design of the encoder-decoder path is fixed (in line with that of a standard U-Net).\n\n| Strengths:\n- I find this line of research interesting. It has also objectively received a lot of recent interest in the ML community, with the potential to design new architectures that are both simpler and more expressive than what has been proposed so far.\n- The related work seems to be adequately cited in parts 1 and 2.\n- The application to 3D image segmentation presents serious challenges in itself, and it indeed seems to have good novelty, although I am not an expert. The method is sound, in particular the use of the Gumbel-softmax trick is a good answer to the challenge of scaling to large architectures.\n\n| Weaknesses:\n- Key claims are made without basis. The interpretation of the experimental validation is distorted to suit the claims.\n- It is unclear how to interpret the (slightly) higher score of \"SCNAS (transfer)\" in Table 1; and whether it actually makes a case for SCNAS as stated in the paper.\n- There is a lot of repetition/verbosity in the first 6 pages. The proposed approach is reintroduced 3 times in similar terms. The respective focus of introduction vs. related work is unclear. The contributions of the paper are in turn less clear. Whether the contribution is specifically in the application to 3D medical image segmentation or also w.r.t. the methodology itself could be clearer.\n\n| Main comment:\nFrom the abstract to the experimental section, to the conclusion, the authors make a repeated claim w.r.t. 
performance that is contradicted by experimental validation:\n- Abstract: \"On the 3D medical image segmentation tasks with a benchmark dataset, a 3D U-Net automatically designed by the proposed NAS framework outperforms the previous human-designed 3D U-Net as well as a randomly designed 3D U-Net\"\n- Introduction: \"Experimental results [...] show that in comparison to the previous human-designed 3D U-Net, the network obtained by the proposed scalable NAS leads to better performances\"\n- Experiments/Results: \"Table 1 shows that the SCNAS produced better architectures than the (human-designed) 3D U-ResNet as well as the randomly designed 3D U-Net in terms of the overall performances on all three tasks.\"\n- Conclusion: \"Empirical evaluation demonstrates that the automatically optimized network via the proposed NAS outperforms the manually designed 3D U-Net.\"\n\nHowever, in Table 1, all the scores are within (plus or minus) a fraction of a Dice point or a single Dice point (SCNAS transfer excluded, see comments below).\n\nThe paper would be stronger (i) without the contradiction between claims and results; and (ii) if the emphasis were shifted away from the (lack of) experimental evidence for improved performance of the NAS, to a more thorough empirical analysis of the auto-ML mechanism, with an open discussion. \n\n\n| Miscellaneous:\n- \"It is noted that unlike these architecture hyperparameter optimizations, we use the complete NAS to obtain the entire topology of the network architecture in this work.\" Is that the case? My understanding is that the high-level architecture (U-Net) is fixed. The distinction is not a minor one in terms of outcome. The proposed approach optimizes over cell architectures, where a cell is e.g. an encoding unit in the encoder path. Cells of the same nature are further restricted to the same topology. Such optimization appears to yield rather intricate cell designs (cf. appendices) but does not significantly improve performance (Table 1). \n- It is unclear how to interpret the (slightly) higher score of \"SCNAS (transfer)\" in Table 1. Are baselines trained on 20 (heart) / 32 (prostate) images, vs. \"SCNAS (transfer)\" being trained on 400+ images and fine-tuned on the relevant datasets? If so, what numbers are obtained for baselines when (pre)training and fine-tuning in a similar fashion? \nRight now, as SCNAS performs similarly to the baselines, with only the transferred architecture earning a couple of DSC points, a natural interpretation is that the experiments make a (limited) case for pretraining on a bigger dataset (rather than for autoML).\n- The paper could elaborate some more on Algorithm 2 (the sampling of two operations for computational reasons) and what the concrete effects of this compromise are.", "cons": ".", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The paper makes the case that existing NAS approaches, especially those that replace discrete variables with continuous approximations, are insufficient for very large architectures such as those used in 3D segmentation, and the authors select the widely used U-Net as their exemplary case. 
This makes a compelling case for the proposed method, which consists of drawing two entries from the probability vector of possible realizations (operations in this case) at a given iteration, clipping the remaining probabilities to zero and renormalizing, with the benefit of reducing the number of activations from N (8 in this work) to 2 for each operator that is part of the optimization process. While no quantitative analysis of computational savings is offered, it is clear that they are considerable.", "cons": "While the method certainly has merit, the experimental results do not do it justice. The segmentation performance obtained with SCNAS is below the state of the art, and even compared to their own baseline (3D U-ResNet) the improvement is not convincing. The former could in part be explained by the missing data augmentation (the authors address this), while the latter could be the simple fact that the baselines can already cover a large enough function space to find a good approximation of the target function (given the limited amount of training data), so that the architecture search has negligible influence. I list my concerns in more detail below.\n\nMajor: \n\n1. The baseline models chosen for this study do not represent the state of the art. The reader can only be convinced of the merit of this paper if it can outperform the state of the art, which in the case of the heart and prostate datasets used here is nnU-Net, the winning contribution to the Medical Segmentation Decathlon (Isensee et al., 2018). While Isensee et al. used an ensemble of different models for their final submission, their paper also reports five-fold cross-validation results with their 3D UNet (without ensembling) on these datasets. The 3D U-Net results reported by Isensee et al. are better than any of the results reported in this manuscript. In addition, the 3D U-Net used by Isensee et al. is very basic with no major architectural variations. As such, it could have been an ideal baseline candidate to demonstrate if SCNAS can really advance the state of the art. It should further be pointed out that, in general, the 3D UNet used in Isensee et al. is rather similar to the UNets used throughout this work, with the sole big difference being the number of pooling operations ([3,3,3] in the case of this manuscript vs [5,5,5] (heart, brain tumor) and [2,5,5] (prostate) in Isensee et al., see Table 1). The statement \u201cWe conjecture that Isensee et al. (2018) might be benefit from complicated pre-/post-procedures and thus obtained slightly better performances than the SCNAS.\u201d does not sufficiently explain the difference in performance. In fact, the preprocessing used by Isensee et al. for MRI images is basically identical to the procedure used in this paper.\n\n2. In Equations 2 and 3, the authors state that the optimization of the edge weights is done on the \u2018validation set\u2019. It is unclear, however, what set this actually refers to. In the experiments, the authors report results of a five-fold cross-validation. In order to draw any kind of meaningful conclusion from these cross-validation results, it is important to clarify whether the validation sets of the splits were used for this optimization or whether the training set was again split into two sets. If the performance of the models is estimated via cross-validation then the validation split cannot be used for any kind of optimization!\n\n3. 
\u201cSCNAS produces a more generalizable neural architecture for the similar tasks of 3D MRI image segmentation\u201d. There is no evidence in the paper that would support this statement. The authors must also transfer their baseline models and see if the performance of the transferred baseline models is better or worse than that of SCNAS.\n\n4. The authors state that SCNAS performs significantly better than the other approaches. Even if we ignore the previous concern, I would only buy that claim for the peripheral prostate zone. In all other categories the standard deviations (?, see also minor-8) are too large to support this statement without an actual test.\n\n\nMinor: \n\nThe authors do not give sufficient details about the \u201cRandom Search\u201d result. How was this network architecture obtained? How many different configurations were drawn randomly and how was the best model selected?\n\nTraining large 3D segmentation networks is very computationally expensive. It is clear that the authors must have had access to a fairly large GPU cluster. It would be interesting to have more specific information about how many GPUs were used (in total and per model) and how long one of the models needed to train. \n\nHow did the authors handle the different number of input channels when transferring their network from brain tumour (4 channels) to heart (1 channel) and prostate (2 channels)?\n\nThe authors state that they compare SCNAS against a 3D U-ResNet and an attention U-Net, but no results are reported for the attention U-Net.\n\n\u201cthe input images were first resized for all voxel spacings to be physically equal using the given meta-data\u201d. It is unclear what spacing the data was resampled to.\n\n\u201cNote that unlike Isensee et al. (2018), any heuristic pre-/post-processing techniques including data augmentation, network-cascade, and prediction-ensemble were not adopted in this evaluation to solely examine the effects by the use of NAS in designing the network architecture.\u201d: While it makes sense to drop ensembling and cascaded architectures for this work, the argumentation that the lack of data augmentation better isolates the effect of NAS is lackluster. In fact, including data augmentation would likely have improved the results somewhat across the board and thus made the results more convincing.\n\nFigure 2 a): in brain tumor segmentation it is more common to show contrast-enhanced T1 sequences alongside T2 or FLAIR to allow the reader to see all parts of the tumor properly. If only one of the sequences is shown then this should probably be the contrast-enhanced T1, because enhancing tumor is not visible in the sequence presented here. \n\nWhat type of error is reported in Table 1?\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper presented a neural architecture search method to optimize the structure of each layer of a 3D U-Net. Besides the methodology, the authors also provided a stochastic sampling algorithm to find the optimal parameters. Through benchmarking, the proposed method showed superior results and a more compact output model compared to other methods. ", "cons": "The experimental results are not strong; the link in the paper leads to the competition website: http://medicaldecathlon.com/results.html. Most of the results posted there were better than the results in the paper.\n\n1. Over-referencing. In my opinion, citing a paper once, where it is first mentioned, is enough.\n2. 
I suggest adding more explanation of the data and a picture example in the experiment section. \n3. I would like to see the run-time difference for each method as well.\n4. It would be good to remind the reader of the evaluation metric again in Table 1. ", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["H1gGWHPnNE", "HyeV8rPh44", "SygYcHD24N", "B1xPCSw3EE"], "comment_cdate": [1549722874492, 1549722956297, 1549723025073, 1549723087015], "comment_tcdate": [1549722874492, 1549722956297, 1549723025073, 1549723087015], "comment_tmdate": [1555945996409, 1555945996191, 1555945995935, 1555945995676], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper34/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper34/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper34/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper34/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Author Response to Reviewers", "comment": "We thank the reviewers for their careful reviews.\nThe questions raised by the reviewers are answered in the following."}, {"title": "Author Response to Reviewer #1", "comment": "\n1) Q: It is unclear how to interpret the (slightly) higher score of \"SCNAS (transfer)\" in Table 1; and whether it actually makes a case for SCNAS as stated in the paper. Claim w.r.t. performance that is contradicted by experimental validation. All the scores are within (plus or minus) a fraction of a Dice point or a single Dice point.\n A: In comparison to baseline architectures, an optimized architecture by SCNAS slightly improved the performance, in terms of average Dice score with similar variances, with much smaller FLOPs and fewer operation parameters. Moreover, when tasks have scarce data, transferring an optimized architecture by SCNAS improved performance much more. We will revise the paper to make these statements clearer.\n\n2) Q: The respective focus of introduction vs. related work is unclear. The contributions of the paper are in turn less clear. \n A: In the introduction section, we first introduce a general background of AutoML, NAS, and 3D U-Net for medical segmentation. Then, we roughly describe the proposed method and its motivation and summarize the main contribution: it is the first work to exploit a complete NAS framework for 3D medical image segmentation as well as a new NAS algorithm itself for employing it.\n In the related work section, we explain detailed differences between the proposed NAS and the previous NAS from the algorithm perspective in order to handle the large network and its large search space. In addition, detailed introductions of the recent (manually-designed) 3D U-Net for medical segmentation and simple hyperparameter optimizations on 2D U-Net are presented.\n We will revise these two sections to reduce the repetition/verbosity and make their respective focuses clearer.\n\n3) Q: Optimization over cell architectures vs. previous architecture hyperparameter optimizations. The limitation by fixing high-level architecture (U-Net).\n A: Optimizing the overall cell architectures including neural connection topology is completely different from naively optimizing the number of filters and filter size under a fixed architecture. 
Stacking an optimized cell architecture to build a whole network is a recently popular approach in NAS; it shows better performance with a small computational cost for search, and it also makes it easy to build large networks from a given cell architecture, by increasing the number of layers or the number of convolution channels, and thus to transfer to complex tasks.\n\n4) Q: Transfer baseline models.\n A: In this experiment, we would like to compare the performances between different architectures. Therefore, we transfer an architecture, not weights. The weights in the transferred architecture are trained from scratch. Of course, weights except the stem and the final layers can be transferred as initial weights for fine-tuning under any architecture, but we observed that this weight transfer did not help to improve performance in our experiments. We will include this statement in the revised paper.\n\n5) Q: The sampling of two operations is done for computational reasons; what are the concrete effects of this compromise?\n A: We tried sampling one operation at a time, but the performance was not improved, because of the resulting high-bias architectures and insufficient architecture variation (exploration), especially in the early stage of training. We will include this statement in the revised paper."}, {"title": "Author Response to Reviewer #3", "comment": "\n1) Q: The segmentation performance obtained with SCNAS is below the state of the art, and even compared to their own baseline (3D U-ResNet) the improvement is not convincing.\n A: In comparison to baseline architectures, an optimized architecture by SCNAS slightly improved the performance, in terms of average Dice score with similar variances, with much smaller FLOPs and fewer operation parameters. The 3D U-Net in nnU-Net (Isensee et al., 2018), which can be seen as the state-of-the-art single model, used data augmentation and ensemble prediction. We would like to demonstrate a better architecture by SCNAS alone, without these pre-/post-procedures. Of course, our baseline 3D U-ResNet and the architecture found by SCNAS can also improve their performance with these procedures, and we leave it for future work.\n\n2) Q: The 3D U-Net results reported by Isensee et al. are better than any of the results reported in this manuscript. In fact, the preprocessing used by Isensee et al. for MRI images is basically identical to the procedure used in this paper.\n A: We did not use any of the data augmentation used in nnU-Net, such as random rotation, random scaling, random elastic deformations, gamma correction augmentation, and mirroring. In addition, we did not perform test-time data augmentation, which is also used in nnU-Net. We leave the use of data augmentation with SCNAS for future work.\n\n3) Q: It is important to clarify whether the validation sets of the splits were used for this optimization or whether the training set was again split into two sets.\n A: The training set after the validation split was again split into two sets, used respectively for optimizing the architecture parameters and the operation parameters.\n\n4) Q: Transfer baseline models.\n A: In this experiment, we would like to compare the performances between different architectures. Therefore, we transfer an architecture, not weights. The weights in the transferred architecture are trained from scratch. 
Of course, weights except the stem and the final layers can be transferred as initial weights for fine-tuning under any architecture, but we observed that this weight transfer did not help to improve performance in our experiments. We will include this statement in the revised paper.\n\n5) Q: The authors do not give sufficient details about the \u201cRandom Search\u201d result. How was this network architecture obtained? How many different configurations were drawn randomly and how was the best model selected?\n A: We uniformly sampled an operation for each edge to make a random architecture. Twenty random architectures from different random seeds were trained with equal hyperparameters, and the best architecture was selected in terms of the validation Dice score. We will include this statement in the revised paper.\n\n6) Q: How many GPUs were used (in total and per model) and how long did one of the models need to train?\n A: For the task of brain tumor segmentation, our SCNAS took one day on 64 V100 GPUs, while for the other segmentation tasks it took one day on 4 V100 GPUs.\n\n7) Q: How did the authors handle the different number of input channels when transferring their network from brain tumour (4 channels) to heart (1 channel) and prostate (2 channels)? \n A: We did not transfer operation parameters such as kernel weights. We just modified the stem cell architecture, which was the predefined first convolutional block, to match the input channels.\n\n8) Q: The authors state that they compare SCNAS against a 3D U-ResNet and an attention U-Net, but no results are reported for the attention U-Net.\n A: The attention mechanism suggested in Oktay et al. is already included in our 3D U-ResNet implementation. We will include this statement in the revised paper.\n\n9) Q: The input images were first resized for all voxel spacings to be physically equal using the given meta-data. It is unclear what spacing the data was resampled to.\n A: All images were resampled to have an equal voxel spacing of 0.7mm x 0.7mm x 0.7mm. We will include this statement in the revised paper.\n\n10) Q: Including data augmentation would likely have improved the results somewhat across the board and thus made the results more convincing.\n A: We agree with this. We leave the use of data augmentation with SCNAS for future work.\n\n11) Q: Figure 2 a): in brain tumor segmentation it is more common to show contrast-enhanced T1 sequences alongside T2 or FLAIR to allow the reader to see all parts of the tumor properly.\n A: We will revise the paper to show all input channels in Figure 2 a).\n\n12) Q: What type of error is reported in Table 1? \n A: It is the average Dice similarity coefficient. We will revise Table 1 to clearly and properly present this metric."}, {"title": "Author Response to Reviewer #2", "comment": "\n1) Q: The experimental results are not strong; the link in the paper leads to the competition website: http://medicaldecathlon.com/results.html. Most of the results posted there were better than the results in the paper.\n A: Our cross-validation scores were computed on the training set, while the scores reported on the leaderboard were obtained using the test set, whose ground-truth labels are not publicly released. 
Please refer to the cross-validation scores obtained by nnU-Net (Isensee et al., 2018), which officially ranked first in the challenge.\n\n2) Q: Citing a paper once, where it is first mentioned, is enough.\n A: We will reflect this by revising the introduction and related work sections.\n\n3) Q: More explanation of the data and a picture example in the experiment section.\n A: We will reflect this by revising the experiment section.\n\n4) Q: Run-time difference for each method as well.\n A: The FLOPs we reported reflect this.\n\n5) Q: It would be good to remind the reader of the evaluation metric again in Table 1.\n A: We will revise Table 1 to clearly and properly present the metric."}], "comment_replyto": ["S1lhkdKkeV", "S1xdzbdD7N", "BJeEFwM8QV", "B1xcUzicGV"], "comment_url": ["https://openreview.net/forum?id=S1lhkdKkeV&noteId=H1gGWHPnNE", "https://openreview.net/forum?id=S1lhkdKkeV&noteId=HyeV8rPh44", "https://openreview.net/forum?id=S1lhkdKkeV&noteId=SygYcHD24N", "https://openreview.net/forum?id=S1lhkdKkeV&noteId=B1xPCSw3EE"], "meta_review_cdate": 1551356614531, "meta_review_tcdate": 1551356614531, "meta_review_tmdate": 1551703162104, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The general idea of automatically optimising the model hyper-parameters (number of channels, layers, etc.) of a deep fully-convolutional segmentation network in an efficient way is an important research area. The reviewers all find some interest in this work, which formulates the architecture search as stochastic sampling using a continuous relaxation. In addition to some minor comments about the details of the presentation and writing, their main criticism revolves around the fact that 1) the automatic search only marginally improves the baseline model and 2) this baseline is substantially worse than state-of-the-art pipelines applied to the Medical Segmentation Decathlon challenge. The authors rebut this with two points: first, a reduction in floating-point operations is reached for the optimised model, and second, data augmentation and ensembling are not yet used. None of the reviewers changed their scores after the rebuttal, and I am also convinced that this response is only partially valid. 1) When making a point of reduced floating-point operations, pruning methods (cf. e.g. Deep Neural Network Compression by In-Parallel Pruning-Quantization, PAMI '19), which have been shown to also yield moderate accuracy improvements, would be a natural choice for comparison, but those are not mentioned. 2) The influence of data augmentation (and to a lesser degree ensembling) on the hyper-parameter choice cannot be fully excluded in order to show that NAS improves final performance. With that in mind, it is hard to justify training times of 64 V100 GPU-days for one model for a fairly small gain (which we cannot be sure remains after augmentation), so it would be of greater impact to demonstrate an even more efficient sampling strategy. Hence, despite some nice ideas and large-scale experiments, the negatives prevail in my opinion and I do not recommend acceptance at this point.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=S1lhkdKkeV&noteId=rJl1AM8SI4"], "decision": "Reject"}
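
The two-operation sampling that Reviewer 3 summarizes in the record above (drawing two entries from the relaxed probability vector over the N candidate operations of an edge, clipping the remaining probabilities to zero, and renormalizing) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' Algorithm 2; the names (N_OPS, gumbel_softmax, sample_two_operations) and the temperature value are hypothetical.

# Minimal sketch of two-operation Gumbel-softmax sampling for one edge.
# Taking the top-2 entries of Gumbel-perturbed logits is equivalent to
# sampling two operations without replacement (the Gumbel-top-k trick).
import numpy as np

rng = np.random.default_rng(0)
N_OPS = 8  # candidate operations per edge, the value stated in the review

def gumbel_softmax(logits, temperature=1.0):
    # Continuous relaxation of a categorical sample over operations.
    g = rng.gumbel(size=logits.shape)       # Gumbel(0, 1) noise
    y = (logits + g) / temperature
    e = np.exp(y - y.max())                 # numerically stable softmax
    return e / e.sum()

def sample_two_operations(logits, temperature=1.0):
    # Keep the two largest relaxed weights, zero the rest, renormalize.
    # Only the two retained operations need activations in the forward
    # pass, reducing per-edge activation memory from N_OPS to 2.
    probs = gumbel_softmax(logits, temperature)
    top2 = np.argsort(probs)[-2:]           # the two sampled operations
    clipped = np.zeros_like(probs)
    clipped[top2] = probs[top2]
    return clipped / clipped.sum(), top2

logits = rng.normal(size=N_OPS)             # architecture parameters of one edge
weights, active = sample_two_operations(logits, temperature=0.5)
print(active, weights[active])              # two active ops; their weights sum to 1

Renormalizing over the two retained entries keeps the mixture weights on the simplex, so the relaxed forward pass remains a convex combination of the two sampled operations.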
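
The evaluation metric the authors confirm for Table 1 is the average Dice similarity coefficient. For reference, a standard implementation on binary masks is sketched below; dice_coefficient and the smoothing constant eps are illustrative assumptions, not the authors' code.

# Dice similarity coefficient: DSC = 2|A intersect B| / (|A| + |B|).
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # pred, target: binary segmentation masks of the same shape.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)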