{"forum": "Syest0rxlN", "submission_url": "https://openreview.net/forum?id=Syest0rxlN", "submission_content": {"title": "Joint Learning of Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets", "authors": ["Nicolas Roulet", "Diego Fernandez Slezak", "Enzo Ferrante"], "authorids": ["nroulet@dc.uba.ar", "dfslezak@dc.uba.ar", "eferrante@sinc.unl.edu.ar"], "keywords": ["Brain image segmentation", "multi-task learning", "heterogeneous datasets", "convolutional neural networks"], "TL;DR": "We propose the adaptive cross entropy loss to perform joint learning of brain lesion and anatomy segmentation from disjoint datasets, that were labeled independently according to only one of these tasks", "abstract": "Brain lesion and anatomy segmentation in magnetic resonance images are fundamental tasks in neuroimaging research and clinical practise. Given enough training data, convolutional neuronal networks (CNN) proved to outperform all existent techniques in both tasks independently. However, to date, little work has been done regarding simultaneous learning of brain lesion and anatomy segmentation from disjoint datasets.\n\nIn this work we focus on training a single CNN model to predict brain tissue and lesion segmentations using heterogeneous datasets labeled independently, according to only one of these tasks (a common scenario when using publicly available datasets). We show that label contradiction issues can arise in this case, and propose a novel adaptive cross entropy (ACE) loss function that makes such training possible. We provide quantitative evaluation in two different scenarios, benchmarking the proposed method in comparison with a multi-network approach. Our experiments suggest ACE loss enables training of single models when standard cross entropy and Dice loss functions tend to fail. 
Moreover, we show that it is possible to achieve competitive results when comparing with multiple networks trained for independent tasks.", "pdf": "/pdf/7f3c2940569874f8779cc98f46c11738407e2543.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "roulet|joint_learning_of_brain_lesion_and_anatomy_segmentation_from_heterogeneous_datasets", "_bibtex": "@inproceedings{roulet:MIDLFull2019a,\ntitle={Joint Learning of Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets},\nauthor={Roulet, Nicolas and Slezak, Diego Fernandez and Ferrante, Enzo},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Syest0rxlN},\nabstract={Brain lesion and anatomy segmentation in magnetic resonance images are fundamental tasks in neuroimaging research and clinical practise. Given enough training data, convolutional neuronal networks (CNN) proved to outperform all existent techniques in both tasks independently. However, to date, little work has been done regarding simultaneous learning of brain lesion and anatomy segmentation from disjoint datasets.\n\nIn this work we focus on training a single CNN model to predict brain tissue and lesion segmentations using heterogeneous datasets labeled independently, according to only one of these tasks (a common scenario when using publicly available datasets). We show that label contradiction issues can arise in this case, and propose a novel adaptive cross entropy (ACE) loss function that makes such training possible. We provide quantitative evaluation in two different scenarios, benchmarking the proposed method in comparison with a multi-network approach. 
Our experiments suggest ACE loss enables training of single models when standard cross entropy and Dice loss functions tend to fail. Moreover, we show that it is possible to achieve competitive results when comparing with multiple networks trained for independent tasks.},\n}"}, "submission_cdate": 1544736387080, "submission_tcdate": 1544736387080, "submission_tmdate": 1561398348868, "submission_ddate": null, "review_id": ["BkeeerOTmV", "Bke7FMqhX4", "H1exCZfnXN"], "review_url": ["https://openreview.net/forum?id=Syest0rxlN&noteId=BkeeerOTmV", "https://openreview.net/forum?id=Syest0rxlN&noteId=Bke7FMqhX4", "https://openreview.net/forum?id=Syest0rxlN&noteId=H1exCZfnXN"], "review_cdate": [1548743911910, 1548685947004, 1548652999756], "review_tcdate": [1548743911910, 1548685947004, 1548652999756], "review_tmdate": [1550593939619, 1548856758241, 1548856749109], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper103/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper103/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper103/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Syest0rxlN", "Syest0rxlN", "Syest0rxlN"], "review_content": [{"pros": "The paper proposed an interesting approach to handle the label conflicts between different brain datasets, via using a new loss function modified from the cross-entropy.\n\nThe paper is well-written and well-organized.", "cons": "The novelty of the paper is limited. The major contribution is a loss function, and the rest of paper adapted established works. And the overall performance is not convincing.\n\nIn the definition of the proposed loss function, it is not clear that the sum is inside of log function. Why not placing the sum outside the log function? Please further explain the loss function theoretically.\n\nThere might be issues with multi-class cross entropy function.
Because the objects like tumor are very small, then the loss might be biased during training with unbalanced sampling. It is better to use weights for either cross-entropy or the proposed loss function.\n \nThe experimental results did not show significant improvement using the proposed loss function over the multi-UNet. Also, it would be nice to compare with the state-of-the-art brain tumor segmentation methods (e.g. Myronenko, A., 2018. 3D MRI brain tumor segmentation using autoencoder regularization. arXiv preprint arXiv:1810.11654) in the experiments.\n\nWhat would be the outcome when considering all the datasets (brain tissue, WMH, and brain tumor) as one scenario?\n\nPlease compare with the performance using other segmentation networks to justify the advantage introduced by the new loss function.\n\n\nThanks for the response from the authors! Although the idea of the new loss function in the paper is interesting, the experimental results and explanation were not convincing. One major reason is that the baseline approaches were not strong enough to valid the proposed approach. Therefore, the original decision remains unchanged.", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "special_issue": ["Special Issue Recommendation"]}, {"pros": "This work deals with the problem of label contradiction in joint learning from multiple datasets with different labelsets on the same anatomy. The proposed solution is an adaptation of the cross-entropy loss, where the voxels with contradictory labels in different datasets are treated differently than usual. Overall, the paper is well-written, the application is well-motivated and the contribution is novel to the best of my knowledge.", "cons": "In my opinion, there are no major flaws in the paper. Having said this, here are a few things that I think could further improve it:\n\n1. 
What happens if the 'voxels that are not lesion background' are not penalized at all? I think it is important to compare this form of 'naive' adaptive loss to the proposed method.\n\n2. The quantitative results for the brain tissue + WMH experiment with the naive dice loss seem to be at par with the multi-unet and the ACE, but the qualitative results are considerably worse. Is the case shown in the qualitative results an outlier for this setup?\n\n3. After first reading the problem statement (Sec. 2), I was a little unclear as to what is exactly meant by union of all labelsets. Perhaps a sentence to clarify this (saying that the union refers to a set where the background label is over-written if it is foreground in the other dataset) might be helpful.\n\n4. In Sec. 2.1, it is said that the mini-batches are sampled with equal probability from all datasets and all classes. As voxel-wise labels are predicted, how is it ensured that the mini-batches contain, on average, an equal number of voxels from each class?\n\n5. In the text following Eq. 4, consider using 'labelled as anything but lesion background' instead of 'non-lesion background'. In my opinion, this would be clearer.\n\n6. Depicting the quantitative results in a table instead of the box plot might be better to appreciate the differences between the various methods.\n\n7. If possible, consider moving the qualitative results into the main text instead of the appendix.\n\n8. Finally, I am not sure if it is necessary to give the multi-unet benchmark the advantage of additional modalities as described in Appendix C, although this only strengthens the benchmark. Providing all experiments with the same inputs would be a cleaner setup and would help focus entirely on the label contradiction issue.\n\n9. practise --> practice.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The idea is simple, but interesting. 
The proposed loss function has the potential to train models across multiple datasets that complement to each other. The loss function addresses correctly the problem when multiple labels are distributed across several datasets and they need each other to produce a more complete output.\n\nA single model that can perform multiple task is preferable because it is more robust to image variations and has a lower computational load.\n\nThe problem statement is clear and the process to solve it through the experiments is also clear.\n", "cons": "For the regions such as WMH, EDEMA, and tumor, the proposed method achieved worse results than MultiUNet. it would be good to add an analysis of this part.\n\nThe proposed method requires the same data modality availability across several datasets which reduces the model input images to just one or two. This may represent a major drawback if the datasets depend on different types of image modalities. However, this problem may be outside the scope of this work.\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"]}], "comment_id": ["r1xEV8Do4V", "Syx54ivi4E", "HJeP3ovjEE", "SklYhnvi4V", "HkedfTPi4V"], "comment_cdate": [1549657643871, 1549658930310, 1549659055137, 1549659313026, 1549659408387], "comment_tcdate": [1549657643871, 1549658930310, 1549659055137, 1549659313026, 1549659408387], "comment_tmdate": [1555946002556, 1555946001370, 1555946001118, 1555946000900, 1555946000640], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper103/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper103/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper103/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper103/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper103/Authors", 
"MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Answer to AnonReviewer2", "comment": "We thank the reviewer for the constructive comments and for recommending our work to be invited for the MIDL Special Issue. In what follows, we answer in detail every point raised by the reviewer.\n\n--> \"1. For the regions such as WMH, EDEMA, and tumor, the proposed method achieved worse results than MultiUNet. it would be good to add an analysis of this part.\"\n\nLet us highlight the fact that the single UNet model trained with the proposed ACE achieved equivalent (not worse) performance to the MultiUNet in WMH segmentation (no significant differences according to Wilcoxon test), better or equivalent performance in terms of brain tissue segmentation (depending on the brain structure) and only worse performance for edema and tumor. \n\nAs we explained in Appendix C of the submitted manuscript, the worse performance for edema and tumor is explained by the fact that the MultiUNet was trained using all available modalities while the single UNet was trained using only those modalities available in both anatomical and lesion datasets. In the case of edema and brain tumor, the Multi-UNet was trained with multiple MR modalities for the tumor segmentation task (it uses T1, T1g, T2 and FLAIR) while the single UNet was trained using only T1 images. All details about available MR modalities for every dataset are provided in Appendix C, where we also state that \u201cthis setting gives some advantages to the Multi-UNet model over the single model trained with ACE, since it uses more MR sequences for the lesion segmentation task.
This is reflected in the results shown in Figure 3, especially for the brain lesion segmentation task, where the better performance shown by the Multi-UNet model with respect to the single model trained with ACE can be explained by this difference in the number of sequences used to train them.\u201d\n\nWe agree with the reviewer in that this is an important point that needs to be discussed in the paper. We will therefore move this discussion from the Appendix to the section \u201cResults & Discussion\u201d in the main paper.\n\n--> \"2. The proposed method requires the same data modality availability across several datasets which reduces the model input images to just one or two. This may represent a major drawback if the datasets depend on different types of image modalities. However, this problem may be outside the scope of this work.\"\n\nWe agree with the reviewer in that this requirement may represent a limitation if the datasets depend on different types of image modalities. There are alternatives to deal with this issue (e.g. imputing the missing modalities by means of image synthesis or using ad-hoc techniques like the HeMIS (Hetero-Modal Image Segmentation) model by Havaei et al, MICCAI 2016 (https://arxiv.org/abs/1607.05194)). However, as suggested by the reviewer, this topic is outside the scope of this work. We will add this short discussion as a limitation of our method in the camera ready version of the manuscript.\n\n\n\n\n\n"}, {"title": "Answer to AnonReviewer1 (Part 1 of 2)", "comment": "We thank the reviewer for the constructive comments and for acknowledging the novelty of the contribution. In what follows, we answer in detail every point raised by the reviewer. (Given the character restrictions, we divided our answer into 2 comments):\n\n--> \"1. What happens if the 'voxels that are not lesion background' are not penalized at all?
I think it is important to compare this form of 'naive' adaptive loss to the proposed method.\"\n\nWe agree with the reviewer in that it is interesting to think about some 'naive' adaptive losses which just avoid penalizing certain cases. The case suggested by the reviewer where the 'voxels that **are not** lesion background' are not penalized at all implies that lesion voxels are not going to be penalized at all when they are misclassified. This will lead the network to never learn to classify lesions, which are actually the classes of interest for our problem. The other option would be to not penalize 'voxels that **are** lesion background', in which case we wouldn't be penalizing misclassifications in voxels that could potentially be brain tissue or background (but we don't know what they are because they were labeled as lesion background). In this case, we would only penalize them when we are certain about their class. This is an interesting idea that we will explore as future work.\n\n--> \"2. The quantitative results for the brain tissue + WMH experiment with the naive dice loss seem to be at par with the multi-unet and the ACE, but the qualitative results are considerably worse. Is the case shown in the qualitative results an outlier for this setup?\"\n\nWe thank the reviewer for pointing this out and we apologize for this inconsistency in our results. We have corrected this figure for the camera ready version, since we found that the qualitative results shown for \u201cAnatomical + WMH\u201d row corresponded to a previous version of this figure associated with previous experiments (the other qualitative results corresponding to \u201cAnatomical + Tumor\u201d and all the quantitative results shown in the boxplot were correct). \n\n--> \"3. After first reading the problem statement (Sec. 2), I was a little unclear as to what is exactly meant by union of all label-sets.
Perhaps a sentence to clarify this (saying that the union refers to a set where the background label is over-written if it is foreground in the other dataset) might be helpful.\"\n\nWe agree on the need to clarify the problem statement. We will add the following paragraph to the camera ready version on Page 4 (Sec. 2): \u201cNote that, since the new labelset \mathcal{L} includes all labels from all datasets, some structures that were labeled as background in one dataset may be labeled as foreground in other datasets, raising the label contradiction problem shown in Figure 2. In these cases, the foreground labels (e.g. brain tissue labels) should prevail over the background labels in the final mask generated by the segmentation model.\u201d\n\n--> \"4. In Sec. 2.1, it is said that the mini-batches are sampled with equal probability from all datasets and all classes. As voxel-wise labels are predicted, how is it ensured that the mini-batches contain, on average, an equal number of voxels from each class?\"\n\nThis question is similar to that raised by the reviewer AnonReviewer3 regarding balanced sampling. In our patch sampling strategy, every mini-batch used at training time is composed of 7 image patches whose central pixel is guaranteed to be sampled with equal probability from all classes. Even if this does not guarantee that an equal number of voxels from each class will be sampled, we make sure that at least a few voxels from every class will be seen during training with equal probability. Moreover, since we work with relatively small patches (32 x 32 x 32 pixels), we expect that a considerable part of the patch will be covered by voxels with the same class as that of the central voxel (even for small lesions like WMH). This strategy helps to alleviate the unbalancing effect and works well in practice. We will make it clear in the camera ready version of the manuscript. 
Note that the same strategy has been used before by the last author of this paper when training one of the U-Net models that constitute the Ensemble of Multiple Models and Architectures (EMMA), which obtained the first place in the brain tumor segmentation challenge BRATS 2017 (see page 7 of (Kamnitsas et al, 2017a): https://arxiv.org/pdf/1711.01468.pdf).\n\n--> \"5. In the text following Eq. 4, consider using 'labelled as anything but lesion background' instead of 'non-lesion background'. In my opinion, this would be clearer.\"\n\nThanks for the suggestion. The proposed phrasing is in fact much clearer. We will change it in the camera ready version.\n\n--> \"6. Depicting the quantitative results in a table instead of the box plot might be better to appreciate the differences between the various methods.\"\n\nWe will include a table in the revised manuscript."}, {"title": "Continuation of the Answer to AnonReviewer1 (Part 2 of 2) ", "comment": "\n--> \"7. If possible, consider moving the qualitative results into the main text instead of the appendix.\"\n\nWe added the qualitative results in the appendix since we wanted to stick to 8 pages (as it was \u201cstrongly recommended\u201d by the organizers). However, MIDL guidelines clearly state that \u201cThe appropriateness of using pages over the recommended page length will be judged by reviewers\u201d, so following the reviewer's recommendation we will move the figure to the main text.\n\n--> \"8. Finally, I am not sure if it is necessary to give the multi-unet benchmark the advantage of additional modalities as described in Appendix C, although this only strengthens the benchmark.
Providing all experiments with the same inputs would be a cleaner setup and would help focus entirely on the label contradiction issue.\"\n\nWe agree with the reviewer in that giving multi-UNet benchmark the advantage of additional modalities only strengthens the benchmark, but we also believe that it would be interesting (and fairer with the proposed model) to measure its performance when using exactly the same modalities. We will consider this as an additional experiment for a potential journal extension of this work.\n\n--> \"9. practise --> practice.\"\n\nWe fixed the typo.\n"}, {"title": "Answer to AnonReviewer 3 (Part 1 of 2)", "comment": "We thank the reviewer for the constructive comments and for recommending our work to be invited for the MIDL Special Issue. The reviewer acknowledges the fact that the paper is well written, well organized and that we are introducing a new loss function. In what follows, we answer in detail every point raised by the reviewer.\n\n--> \"1. The novelty of the paper is limited. The major contribution is a loss function, and the rest of paper adapted established works. And the overall performance is not convincing.\"\n\nWe agree with the reviewer in that the major contribution of our paper is a new loss function, that is used to train a model based on the well-known UNet architecture. However, we disagree in that the novelty of the paper is limited. The formulation of the adaptive cross-entropy is novel (this has also been acknowledged by AnonReviewer1). Moreover, to the best of our knowledge, the problem for which we have proposed the aforementioned loss function (namely joint learning of brain lesion and anatomy segmentation from heterogeneous datasets using CNNs) has not been formalized before by the MIC community. 
In the \u201cRelated Works\u201d section of our paper, we discussed the closest works we found in the literature, which are those of (Fourure et al, 2017) and (Rajchl et al, 2018), and highlight the differences with these problems and proposed approaches. We therefore believe that our paper has novelty not only in methodological terms (a new loss function based on cross-entropy) but also in terms of application (joint learning of brain lesion and anatomy from heterogeneous datasets).\n\nRegarding performance, see comments below about the experimental results in point 4.\n\n--> \"2. In the definition of the proposed loss function, it is not clear that the sum is inside of log function. Why not placing the sum outside the log function? Please further explain the loss function theoretically.\"\n\nWe thank the reviewer for pointing this out. We apologize for not being sufficiently clear in the motivation of the loss function. The reasoning behind having the sum inside the log function on the proposed adaptive cross entropy is to effectively unify those labels that are not lesion (i.e. background and brain tissue segmentations, which raise the label contradiction problem illustrated in Figure 2.a) in a unique class. We do that by assigning to this virtual class the sum of the scores the model assigned to each of those labels.\n\nThe alternative approach suggested by the reviewer (i.e. putting the sum outside the log function) would not reflect this behaviour, since it would penalize the classes independently. Moreover, if the model\u2019s prediction score for a lesion background voxel is equally distributed among the non-lesion labels (i.e. background and brain tissue segmentations), the computed loss would be higher than if the score is entirely assigned to only one of those labels, despite the fact that none of the predictions is preferable over the other given the information available (i.e. 
that the voxel is \u201canything but lesion\u201d).\n\nThis point was briefly discussed in the original manuscript (see Section 2.3) but we will further clarify it in the camera ready version.\n\n--> \"3. There might be issues with multi-class cross entropy function. Because the objects like tumor are very small, then the loss might be biased during training with unbalanced sampling. It is better to use weights for either cross-entropy or the proposed loss function.\"\n\nMulti-class cross-entropy is a standard loss function used in the context of brain lesion segmentation with CNNs. We understand the point raised by the reviewer and his concerns about class unbalance during sampling. However, it should be noted that we follow a patch-based training strategy that helps to alleviate the unbalancing effect that may affect cross-entropy in two ways. First, every mini-batch used at training time is composed of 7 image patches whose central pixel is guaranteed to be sampled with equal probability from all classes. Second, we use small image patches (of size 32 \u00d7 32 \u00d7 32 voxels) which reduce the unbalancing effect at the pixel level that small lesions may have when considering their size with respect to the full images. We will make sure that this is clear in the camera ready version. \n\nNote that the same sampling strategy has been used before by the last author of this paper when training one of the U-Net models that constitute the Ensemble of Multiple Models and Architectures (EMMA), which obtained the first place in the brain tumor segmentation challenge BRATS 2017 (see page 7 of (Kamnitsas et al, 2017a): https://arxiv.org/pdf/1711.01468.pdf ). Moreover, multi-class cross-entropy has been used to train some of the most influential papers in the literature of CNN-based brain lesion segmentation, like those by (Kamnitsas et al, 2017, Medical Image Analysis) and (Havaei et al, 2017, Medical Image Analysis). 
\n\n(It continues in the next comment \u2026)\n"}, {"title": "Continuation of Answer to AnonReviewer3 (Part 2 of 2)", "comment": "(... continuation of point --> 3)\n\nHaving said that, we thank the reviewer for his suggestion about using weights for either cross-entropy or the proposed loss function. Even if standard multi-class cross-entropy worked well in practice for us, weighting it could be explored as an improvement to the proposed loss, and we will add it as a potential future work in the camera ready version of this work.\n\n--> \"4. The experimental results did not show significant improvement using the proposed loss function over the multi-UNet. Also, it would be nice to compare with the state-of-the-art brain tumor segmentation methods (e.g. Myronenko, A., 2018. 3D MRI brain tumor segmentation using autoencoder regularization. arXiv preprint arXiv:1810.11654) in the experiments.\"\n\nThe Multi-UNet was implemented to give an idea of what standard CNNs trained independently for every task can do in the problems tackled in this paper. Our main goal was not to improve over these results, but to show that we can achieve equivalent performance with a single model (with lower overall complexity). As we stated in Section 2.2, \u201cnote that the Multi-UNet model requires extra efforts at training time: we need to train a single model for every dataset, increasing not only the training time but also the overall model complexity, i.e. the number of learned parameters. Moreover, at test time, every model is evaluated on the test image and a label fusion strategy must be applied to combine the multiple predictions.\u201d. \n\nTherefore, one of the main contributions of our work is showing that we can train a single model for brain lesion and anatomy segmentation from heterogeneous datasets, with equivalent performance to multiple models trained with single datasets, but reducing training time and overall model complexity. 
Moreover, as discussed in the Appendix C, we gave Multi-UNet the advantage of using all available MR modalities for every task, while the single UNet with adaptive cross entropy was trained using only those modalities available for both tasks. As suggested by AnonReviewer1 \u201cthis only strengthens the benchmark\u201d. We believe that achieving equivalent performance to that of Multi-UNet (and even better in some cases like CSF and Grey Matter in the \u201cBrain Tissue + WMH scenario\u201d) using a single model trained with Adaptive Cross Entropy is a really interesting finding.\n\nRegarding comparison with state-of-the-art brain tumor segmentation methods, note that we do not focus on a particular task like brain tumor, but we provide evaluation in two different lesion scenarios (brain tumor and WMH). We agree that it would be nice to compare with the best segmentation method for every task, but this lies outside the scope of our work.\n\n--> \"5. What would be the outcome when considering all the datasets (brain tissue, WMH, and brain tumor) as one scenario?\"\n\nThis is a very interesting point. We have discussed this idea since we believe that it would further strengthen the validation of the method, and we consider the general case of training a single model to perform an arbitrary number of tasks to be the natural extension of our work.\n\nHowever, there is one main reason that stopped us from directly applying the proposed model to the case of combined segmentation of brain tissue, WMH and brain tumors: to the best of our knowledge, there is no publicly available dataset with these three combined labels to use as ground truth for evaluating the performance of such a model. \n\nOne option to overcome this problem would be to assume that the two types of lesion never coexist in the same brain, and to use the same datasets that were used for testing in this work (brain tissue + WMH and brain tissue + brain tumor) to evaluate segmentation performance. 
We do not consider this assumption to be sufficiently realistic, and even if the assumption held for the used datasets, this would result in a limited evaluation scenario where we never confront the model with images that contain all three types of labels, thus not really exploring the most interesting case when the model is expected to produce all of them.\n\n--> \"6. Please compare with the performance using other segmentation networks to justify the advantage introduced by the new loss function.\"\n\nWe agree with the reviewer in that performing the same experimental analysis in other architectures (like Deepmedic for example) would strengthen the work and better justify the advantage introduced by the new loss function. We are currently considering this experiment as an extension for a potential journal version of this paper, and we will add this as future work when submitting the camera ready version of this work.\n"}], "comment_replyto": ["H1exCZfnXN", "Bke7FMqhX4", "Syx54ivi4E", "BkeeerOTmV", "SklYhnvi4V"], "comment_url": ["https://openreview.net/forum?id=Syest0rxlN&noteId=r1xEV8Do4V", "https://openreview.net/forum?id=Syest0rxlN&noteId=Syx54ivi4E", "https://openreview.net/forum?id=Syest0rxlN&noteId=HJeP3ovjEE", "https://openreview.net/forum?id=Syest0rxlN&noteId=SklYhnvi4V", "https://openreview.net/forum?id=Syest0rxlN&noteId=HkedfTPi4V"], "meta_review_cdate": 1551356597935, "meta_review_tcdate": 1551356597935, "meta_review_tmdate": 1551881983389, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper introduces an adaptive cross entropy loss function to segment brain lesions from diverse datasets. There was some concern about the overall novelty of the paper; however, the authors did a good job addressing this point in their response. Two of three reviewers have recommended the paper for acceptance to MIDL. I concur with this recommendation. 
", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Syest0rxlN&noteId=S1eRhG8HLN"], "decision": "Accept"}
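For readers of this record: the adaptive cross entropy (ACE) idea debated in the reviews and responses above is that an ambiguous "lesion background" voxel could legitimately be any non-lesion class, so the predicted scores of all labels it could stand for are summed *inside* the log (merging them into one virtual class) rather than penalized separately. The following is a minimal single-voxel sketch of that idea based on the discussion, not the authors' implementation; the function and parameter names are hypothetical.

```python
import math

def adaptive_cross_entropy(probs, target, ambiguous_label, merged_labels):
    """Illustrative per-voxel ACE (hypothetical sketch, not the paper's code).

    probs: per-class softmax probabilities for one voxel.
    If the ground-truth label is the ambiguous 'lesion background' class,
    sum the probabilities of every class it could stand for inside the
    log, treating them as a single virtual class. Otherwise this reduces
    to standard cross entropy.
    """
    if target == ambiguous_label:
        return -math.log(sum(probs[c] for c in merged_labels))
    return -math.log(probs[target])

# A voxel labeled 'lesion background' (class 0) may be true background (0)
# or brain tissue (1), but certainly not lesion (2):
probs = [0.1, 0.2, 0.7]
print(adaptive_cross_entropy(probs, 0, 0, [0, 1]))  # -log(0.1 + 0.2)
print(adaptive_cross_entropy(probs, 2, 0, [0, 1]))  # -log(0.7), standard CE
```

This mirrors the authors' argument to AnonReviewer3: summing outside the log would penalize each merged class independently, so a prediction that spreads its score evenly over the non-lesion labels would be punished more than one that concentrates it on a single non-lesion label, even though neither is preferable given the available annotation.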