Datasets:
Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
File size: 141,869 Bytes
paper_id,title,keywords,abstract,meta_review
1,"""Training CNNs for Multimodal Glioma Segmentation with Missing MR Modalities""","['convolutional neural network', 'glioma segmentation', 'multimodal']","""Missing data is a common problem in machine learning, and in retrospective imaging research it is often encountered in the form of missing imaging modalities. We propose to take into account missing modalities in the design and training of neural networks, to ensure that they are capable of providing the best possible prediction even when one of the modalities is not available. This would enable algorithms to be applied to subjects with fewer available modalities, without leaving out the same information in other subjects or applying data imputation. This concept is evaluated in the context of glioma segmentation, which is a problem that has received much attention in part due to the BraTS multi-modal segmentation challenge. The UNet architecture has been shown to be effective in this problem and therefore it serves as the reference method in this paper. To make the network robust to missing data we leveraged the dropout principle during training and applied this to the UNet architecture, but also to variations on the UNet architecture inspired by multimodal learning. These networks drastically improved the performance with missing modalities, while only performing slightly worse on the full dataset.""","""The paper addresses the problem of brain tumor segmentation when image modalities are missing during inference - a common problem when details of imaging protocols differ in between centers. The reviewers acknowledge the relevance of the problem, and that the paper is well written. They question the novelty of the work, though, and seem to expect a deeper empirical analysis to credit this study for its evaluation alone. The authors offer some of this added information in the rebuttal, presenting performances for scenarios where two or three modalities are missing. Two of the reviewers recommend to reject the paper, one is positive about it. I side with the first two. I feel the results presented in the rebuttal are remarkable (with T2, FLAIR, T1gad missing, the presented approach reaches almost perfect scores, while standard Unets fail), but I would agree with R1 that more than just one segmentation task should be looked at: the ""whole tumor"" segmentation the authors choose is fairly simple, it might have been insightful to also learn about the ""tumor core"" and ""active tumor"" tasks, or to see results for a different data set. Once the authors are able to show consistent results on a slightly larger set of tasks/applications and in comparison to a few other baseline methods, I feel this will be a very valid study. """
2,"""Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images""","['Adversarial learning', 'convolutional neural networks', 'multitask learning', 'semantic segmentation', 'retinal image analysis']","""A prime challenge in building data driven inference models is the unavailability of statistically significant amount of labelled data. Since datasets are typically designed for a specific purpose, and accordingly are weakly labelled for only a single class instead of being exhaustively annotated. Despite there being multiple datasets which cumulatively represents a large corpus, their weak labelling poses challenge for direct use. In case of retinal images, they have inspired development of data driven learning based algorithms for segmenting anatomical landmarks like vessels and optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), while the challenge being that there is no single training dataset with all classes annotated. We solve this problem by training a single network using separate weakly labelled datasets. Essentially we use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation using FCN, where the objectives of discriminators are to learn to (a) predict which of the classes are actually present in the input fundus image, and (b) distinguish between manual annotations vs. segmented results for each of the classes. The first discriminator works to enforce the network to segment those classes which are present in the fundus image although may not have been annotated i.e. all retinal images have vessels while pathology datasets may not have annotated them in the dataset. The second discriminator contributes to making the segmentation result as realistic as possible. We experimentally demonstrate using weakly labelled datasets of DRIVE containing only annotations of vessels and IDRiD containing annotations for lesions and optic disc. Our method using a single FCN achieves competitive results over prior art for either vessel or optic disk or pathology segmentation on these datasets.""","""While there are often different annotated datasets from the same image modality available, they often include annotations of different classes, covering different regions of the anatomy that is shown in the images. Datasets that include annotations of all regions of interest are rare and costly to acquire. The authors propose a method to address this challenging problem by proposing to learn semantic segmentation models from multiple datasets that might have only different parts of the classes annotated. An adversarial loss is proposed that allows the model to leverage complementary datasets while not requiring to have all regions of interest annotated in all of them. All reviewers agree that the paper is interesting and addresses an important challenge in medical imaging. The proposed solution could potentially have a high impact to the field if proving to be generalizable to other datasets in future research. The major concerns of the reviewers have been addressed by the authors rebuttal. The criticism about the comparison to the challenge leaderboard is unproblematic. In my opinion, the comparison is valid and adds value to the paper. """
3,"""Exploring local rotation invariance in 3D CNNs with steerable filters""","['Local rotation invariance', 'convolutional neural network', 'steerable filters', '3D texture']","""Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications and in particular in medical imaging where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI was very little investigated in the context of deep learning. We use trainable 3D steerable filters in CNNs in order to obtain LRI with directional sensitivity, i.e. non-isotropic filters. Pooling across orientation channels after the first convolution layer releases the constraint on finite rotation groups as assumed in several recent works. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations. We only convolve the input volume with a set of Spherical Harmonics (SHs) modulated by trainable radial supports and directly steer the responses, resulting in a drastic reduction of trainable parameters and of convolution operations, as well as avoiding approximations due to interpolation of rotated kernels. The proposed method is evaluated and compared to standard CNNs on 3D texture datasets including synthetic volumes with rotated patterns and pulmonary nodule classification in CT. The results show the importance of LRI in CNNs and the need for a fine rotation sampling.""","""3D CNNs generally involve a large number of parameters, which leads to practical limitations in their applicability. This work demonstrates how the number of parameters can be substantially reduced, while maintaining similar accuracy, by building local rotation invariance, a property that is frequently required in medical image analysis, into the network architecture. I propose to follow the recommendation of the reviewers, who were all quite excited about this approach (three times ""strong accept""). However, I would welcome it if, in the final version, the authors could address the question to which extent the reduction in the number of parameters indeed goes along with saving time and memory. It seemed to me that their method involves a non-negligible computational overhead, which I expect to at least partly negate its benefits in practice."""
4,"""Scalable Neural Architecture Search for 3D Medical Image Segmentation""","['AutoML', 'Neural Architecture Search', 'Medical Image Segmentation']","""In this paper, a neural architecture search (NAS) framework is formulated for 3D medical image segmentation, to automatically optimize a neural architecture from a large design space. For this, a novel NAS framework is proposed to produce the structure of each layer including neural connectivities and operation types in both of the encoder and decoder of a target 3D U-Net. In the proposed NAS framework, having a sufficiently large search space is important in generating an improved network architecture, however optimizing over such a large space is difficult due to the extremely large memory usage and the long run-time originated from high-resolution 3D medical images. Therefore, a novel stochastic sampling algorithm based on the continuous relaxation on the discrete architecture parameters is also proposed for scalable joint optimization of both of the architecture parameters and the neural operation parameters. This makes it possible to maintain a large search space with small computational cost as well as to obtain an unbiased architecture by reducing the discrepancy between the training-time and test-time architectures. On the 3D medical image segmentation tasks with a benchmark dataset, an automatically designed 3D U-Net by the proposed NAS framework outperforms the previous human-designed 3D U-Net as well as the randomly designed 3D U-Net, and moreover this optimized architecture is more compact and also well suited to be transferred for similar but different tasks.""","""The general idea of optimising model the hyper-parameters (number of channels, layers etc) of a deep fully-convolutional segmentation network automatically in an efficient way is an important research area. The reviewers all find some interest in this work, which formulates the architecture search as stochastic sampling using a continuous relaxation. In addition to some minor comments about the details of the presentation and writing, their main criticism revolves around the fact that 1) the automatic search only marginally improves the baseline model and 2) this baseline is substantially worse than state-of-the-art pipelines applied to the medical decathlon challenge. The authors rebut this with two points: first, a reduction in floating point operations is reached for the optimised model and second, data augmentation and ensembling are not yet used. None of the reviewers changed their scores after the rebuttal and I am also convinced that this response is only partially valid. 1) When making a point of reduced floating point operations, pruning methods (cf. e.g. Deep Neural Network Compression by In-Parallel Pruning-Quantization PAMI '19) would be a natural choice as a comparison, which have been shown to also yield moderate accuracy improvements, but those are not mentioned. 2) the influence of data augmentation (and to a lesser degree ensembling) on the hyper-parameter choice cannot be fully excluded in order to show that NAS improves final performance. With that in mind, it is hard to justify training times of 64 days of V100 GPUs for one model and a fairly small gain (which we cannot be sure remains after augmentation), so it would be of greater impact to demonstrate an even more efficient sampling strategy. Hence, despite some nice ideas and large-scale experiments, the negatives prevail in my opinion and I do not recommend acceptance at this point."""
5,"""Deep Learning Approach to Semantic Segmentation in 3D Point Cloud Intra-oral Scans of Teeth""","['Deep learning', '3D point cloud', 'intra-oral scan', 'semantic segmentation']","""Accurate segmentation of data, derived from intra-oral scans (IOS), is a crucial step in a computer-aided design (CAD) system for many clinical tasks, such as implantology and orthodontics in modern dentistry. In order to reach the highest possible quality, a segmentation model may process a point cloud derived from an IOS in its highest available spatial resolution, especially for performing a valid analysis in finely detailed regions such as the curvatures in border lines between two teeth. In this paper, we propose an end-to-end deep learning framework for semantic segmentation of individual teeth as well as the gingiva from point clouds representing IOS. By introducing a non-uniform resampling technique, our proposed model is trained and deployed on the highest available spatial resolution where it learns the local fine details along with the global coarse structure of IOS. Furthermore, the point-wise cross-entropy loss for semantic segmentation of a point cloud is an ill-posed problem, since the relative geometrical structures between the instances (e.g. the teeth) are not formulated. By training a secondary simple network as a discriminator in an adversarial setting and penalizing unrealistic arrangements of assigned labels to the teeth on the dental arch, we improve the segmentation results considerably. Hence, a heavy post-processing stage for relational and dependency modeling (e.g. iterative energy minimization of a constructed graph) is not required anymore. Our experiments show that the proposed approach improves the performance of our baseline network and outperforms the state-of-the-art networks by achieving 0.94 IOU score.""","""Reviewers have recognized a well-motivated paper facing an important and interesting application. There were some concerns raised about the novelty of the contribution, which were answered to some extent by the authors, putting forward a novel non-uniform resampling mechanism. The final version of the paper should clarify both the novelty with respect to prior work in adversarial segmentation and the positioning of the paper regarding the end-to-end vs handcrafted feature approaches. """
6,"""Adversarial Pseudo Healthy Synthesis Needs Pathology Factorization""","['pseudo healthy synthesis', 'GAN', 'cycle-consistency', 'factorization']","""Pseudo healthy synthesis, i.e. the creation of a subject-specific `healthy' image from a pathological one, could be helpful in tasks such as anomaly detection, understanding changes induced by pathology and disease or even as data augmentation. We treat this task as a factor decomposition problem: we aim to separate what appears to be healthy and where disease is (as a map). The two factors are then recombined (by a network) to reconstruct the input disease image. We train our models in an adversarial way using either paired or unpaired settings, where we pair disease images and maps (as segmentation masks) when available. We quantitatively evaluate the quality of pseudo healthy images. We show in a series of experiments, performed in ISLES and BraTS datasets, that our method is better than conditional GAN and CycleGAN, highlighting challenges in using adversarial methods in the image translation task of pseudo healthy image generation.""","""Reviewers agree on acceptance and high quality/originality. Rebuttals are solid and supported with citations. Pros include: well-written, interesting, good results Cons include: some artifacts in results, which authors are working on now and is known in the field One reviewer recommended Oral Presentation. As this is the top paper in my stack I will recommend accept as oral presentation."""
7,"""A Hybrid, Dual Domain, Cascade of Convolutional Neural Networks for Magnetic Resonance Image Reconstruction""","['Magnetic resonance imaging', 'image reconstruction', 'compressed sensing', 'deep learning']","""Deep-learning-based magnetic resonance (MR) imaging reconstruction techniques have the potential to accelerate MR image acquisition by reconstructing in real-time clinical quality images from k-spaces sampled at rates lower than specified by the Nyquist-Shannon sampling theorem, which is known as compressed sensing. In the past few years, several deep learning network architectures have been proposed for MR compressed sensing reconstruction. After examining the successful elements in these network architectures, we propose a hybrid frequency-/image-domain cascade of convolutional neural networks intercalated with data consistency layers that is trained end-to-end for compressed sensing reconstruction of MR images. We compare our method with five recently published deep learning-based methods using MR raw data. Our results indicate that our architecture improvements were statistically significant (Wilcoxon signed-rank test, p<0.05). Visual assessment of the images reconstructed confirm that our method outputs images similar to the fully sampled reconstruction reference. ""","""All the reviewers agree that this paper is a strong and well-written contribution providing valuable insights into the optimal neural network architecture for MR reconstruction and a comprehensive evaluation with many recent related works. The reviewers have pointed out that the methods were evaluated using an oversimplified undersampling pattern and that there is a small amount of novelty compared to one of the related works. Overall the strengths of the paper overweight its few weaknesses. """
8,"""Dense Segmentation in Selected Dimensions: Application to Retinal Optical Coherence Tomography""","['Segmentation', 'Retina', 'OCT']","""We present a novel convolutional neural network architecture designed for dense segmentation in a subset of the dimensions of the input data. The architecture takes an N-dimensional image as input, and produces a label for every pixel in M output dimensions, where 0< M < N. Large context is incorporated by an encoder-decoder structure, while funneling shortcut subnetworks provide precise localization. We demonstrate applicability of the architecture on two problems in retinal optical coherence tomography: segmentation of geographic atrophy and segmentation of retinal layers. Performance is compared against two baseline methods, that leave out either the encoder-decoder structure or the shortcut subnetworks. For segmentation of geographic atrophy, an average Dice score of 0.490.21 was obtained, compared to 0.460.22 and 0.280.19 for the baseline methods, respectively. For the layer-segmentation task, the proposed architecture achieved a mean absolute error of 1.3050.547 pixels compared to 1.9670.841 and 2.1660.886 for the baseline methods.""","""The work addresses a gap in the state of the art in addressing 2D->1D semantic segmentation problems, and in general the scenarios where the output dimension is smaller than the input one. There is a clear applicability of this approach outside the domain of retinal OCT and thus expected to be of interest to MIDL community. The reviewers and AC agree that the work and the proposed network are original and the experiments of good quality."""
9,"""Prediction of Progression in Multiple Sclerosis Patients""",[],"""We present the first automatic end-to-end deep learning framework for the prediction of future patient disability progression (one year from baseline) based on multi-modal brain Magnetic Resonance Images (MRI) of patients with Multiple Sclerosis (MS). The model uses parallel convolutional pathways, an idea introduced by the popular Inception net and is trained and tested on two large proprietary, multi-scanner, multi-center, clinical trial datasets of patients with Relapsing-Remitting Multiple Sclerosis (RRMS). Experiments on 465 patients on the placebo arms of the trials indicate that the model can accurately predict future disease progression, measured by a sustained increase in the extended disability status scale (EDSS) score over time. Using only the multi-modal MRI provided at baseline, the model achieves an AUC of 0.66 +- 0.055. However, when supplemental lesion label masks are provided as inputs as well, the AUC increases to 0.701 +- 0.027. Furthermore, we demonstrate that uncertainty estimates based on Monte Carlo dropout sample variance correlate with errors made by the model. Clinicians provided with the predictions computed by the model can therefore use the associated uncertainty estimates to assess which scans require further examination.""","""All three reviewers agree that this submission tackles a clinically important task on a decently large dataset and presents a solid framework for solving it. The submission is well written and well organized. Some of the methodological contributions seem to have been lost in the paper, but all reviewers agree that those can be addressed in the final version and the submission is worth accepting. The authors have extensively responded to the reviewers' questions and addressed most of the suggested changes/modifications."""
10,"""Neural Processes Mixed-Effect Models for Deep Normative Modeling of Clinical Neuroimaging Data""","['Neural Processes', 'Mixed-Effect Modeling', 'Deep Learning', 'Clinical Neuroimaging']","""Normative modeling has recently been introduced as a promising approach for modeling variation of neuroimaging measures across individuals in order to derive biomarkers of psychiatric disorders. Current implementations rely on Gaussian process regression, which provides coherent estimates of uncertainty needed for the method but also suffers from drawbacks including poor scaling to large datasets and a reliance on fixed parametric kernels. In this paper, we propose a deep normative modeling framework based on neural processes (NPs) to solve these problems. To achieve this, we define a stochastic process formulation for mixed-effect models and show how NPs can be adopted for spatially structured mixed-effect modeling of neuroimaging data. This enables us to learn optimal feature representations and covariance structure for the random-effect and noise via global latent variables. In this scheme, predictive uncertainty can be approximated by sampling from the distribution of these global latent variables. On a publicly available clinical fMRI dataset, we compare the novelty detection performance of multivariate normative models estimated by the proposed NP approach to a baseline multi-task Gaussian process regression approach and show substantial improvements for certain diagnostic problems.""","""The work proposes using Neural Processes (NPs) in place of the previously used Gaussian Processes (GPs) in a framework for normative modelling and abnormality detection of psychiatric disorders. This is motivated by the capabilities of NPs to avoid computational complexity of GPs, and flexibility of learning the kernels. The main Pros and Cons identified by Reviewers and AC can be summarised as follows: Pros: - Introduction of NPs in place of GPs in for modelling mixed effects is technically novel (all reviewers). - Potentially interesting and useful direction of research (all reviewers) - The work is mostly well written, acknowledged by R1 & R2 (and R3 except Sec 2.1/2.2). - Presentation, format and figures are of good quality. - Reproducibility seems high, as data and code are public (R1 & R3). Cons: - R2 & R3 (AC agrees) that the work would be much more accessible from better explanations of important concepts that are important for the work (NPs, GEVD, NM,etc). Authors committed to addressing them by adjusting the text. - Significance of results and clarity of experimental settings: All reviewers raised questions about the evaluation & results. Authors have clarified these points sufficiently in the responses and have committed to altering the text accordingly. I would like to emphasize that adding performance of a supervised approach within the main text, even with just a sentence, would help convey to the reader *much* better the significance of the results by complementing the picture about the difficulty of the problem. R1 & R3 (AC agrees) raised that one of the main motivations for using NPs over GPs, the low computational complexity with respect to N (samples) when N>T (T is dimensionality of feature space), is not sufficiently discussed. The text both uses it as motivation (Sec.1) and discusses it as a strong point (Sec.6). The problem is that within the context of the work, N << T. Response of reviewers was not sufficient (especially arguing it is out of scope). 
As this is one of the two main motivations for using NPs over GPs, the text should definitely discuss it. The point should be addressed by extending Sec.6 with a few sentences, clarifying the conditions where these gains are expected and, even if not on this dataset, mentioning applications where they are expected. - R1 & R2 raised the point that defining mixed-effects models as stochastic processes may be *unnecessary*, overcomplicating Sec 2.2. The authors replied that it is necessary so that using NPs makes sense. The AC sees how this can be confusing for readers, since GPs (stochastic processes) have been previously used to model mixed-effect models (including in the authors' work [Kia et al., arXiv '18]); thus one could take the first half of Sec 2 for granted and instead focus on the second half of Sec 2, which introduces NPs instead of GPs. The authors should clarify in the text what this derivation adds in comparison to previous works that already modelled it with GPs (they never formalised it?), which would help avoid confusion. Additional major point by the AC's meta-review that needs addressing: to approximate a stochastic process, here mixed-effects, the network *architecture* used for NPs is required to *also* fulfil the properties of exchangeability and consistency. In [Garnello 2018b] exchangeability is assured by the use of an aggregator in the model, adding up features from M samples (addition is invariant to permutations). This is not satisfied in the current architecture, where M samples are given to different channels (I guess it still works because, when shown different permutations, the model learns to be invariant, but it's a major design difference and should be discussed). Additionally, consistency in the original work is ensured by training with *varying M* during a training session (the aggregator enables this). The current work only trains a model with a constant M, and thus does not have this necessary property. This is currently only visible in the code, not the text. Define NP(M) in Sec 3 and state that it is trained with constant M, in contrast to the original work. The authors *must* make the above differences explicit in the main text, as they are very big differences in comparison to the original model. Future works should address them and compare design choices. The decision on this work is difficult. The reviewers have collectively acknowledged the work as novel, interesting and potentially useful. At the same time, it is clear that the work loses value by being insufficiently accessible and unclear about important points of the methodology. It seems most points raised by the two reviewers that recommended rejection are addressable by appropriate text alterations, which the authors have committed to perform. Provided that all points I emphasized above are well addressed by the authors, I think the work will be of sufficient quality for publication. """
11,"""A novel segmentation framework for uveal melanoma based on magnetic resonance imaging and class activation maps""",[],"""An automatic and accurate eye tumor segmentation from Magnetic Resonance images (MRI) could have a great clinical contribution for the purpose of diagnosis and treatment planning of intra-ocular cancer. For instance, the characterization of uveal melanoma (UM) tumors would allow the integration of 3D information for the radiotherapy and would also support further radiomics studies. In this work, we tackle two major challenges of UM segmentation: 1) the high heterogeneity of tumor characterization in respect to location, size and appearance and, 2) the difficulty in obtaining ground-truth delineations of medical experts for training. We propose a thorough segmentation pipeline consisting of a combination of two Convolutional Neural Networks (CNN). First, we consider the class activation maps (CAM) output from a Resnet classification model and the combination of Dense Conditional Random Field (CRF) with a prior information of sclera and lens from an Active Shape Model (ASM) to automatically extract the tumor location for all MRIs. Then, these immediate results will be inputted into a 2D-Unet CNN whereby using four encoder and decoder layers to produce the tumor segmentation. A clinical data set of 1.5T T1-w and T2-w images of 28 healthy eyes and 24 UM patients is used for validation. We show experimentally in two different MRI sequences that our weakly 2D- Unet approach outperforms previous state-of-the-art methods for tumor segmentation and that it achieves equivalent accuracy as when manual labels are used for training. These results are promising for further large-scale analysis and for introducing 3D ocular tumor information in the therapy planning.""","""The reviewers agree on the novelty and quality of the paper. In spite of the very positive evaluations, all reviewers have identified a number of issues, which have been very appropriately addressed in the rebuttal. I agree with the authors that - even though interesting - adding the activation maps directly to the network might be left for future work, but it would be important to clarify the issues regarding the figures (Reviewer 1) and performance (Reviewers 1 and 2)."""
12,"""Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning""",[],"""We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor. We address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAEs. The proposed models improve classification accuracy by 22% (to 90%) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers a new tool to match cellular fingerprints effectively, and also to gain better insight into cellular structure variations that are driving differences between populations of cells.""","""The motivation of the manuscript is to develop a method able to visually represent cells with high fidelity while also having an accurate latent space of all the images (embeddings) that allows for detecting the mechanisms of action of the chemical used to treat cells. The variational autorencoder (VAE) method is proposed to learn representations of cell images for cell profiling with adversarial similarity constraint and progressive training procedure. The problem tackled in the manuscript is interesting as imaging cell variations holds potential to learn representations which are predictive of function. The main difference between the work of Larsen et al. 2016 and this one, is the definition of loss functions: Instead of integrating the loss function of a generative adversarial networks (GAN) into the VAE loss function, the loss of the VAE and the loss of the discriminator are combined in a way that they complement each other. Pros: - The work is well motivated: The motivation for using the proposed model is that they want to encode single-cell features and simultaneously have the ability of generating realistic samples conditioned on variations of these features. - They have modified the loss function of the original VAEGAN and they have applied adversarial loss at multiple layers of the discriminator to obtain more realistic reconstruction results. The idea of progressive training is novel. - In the manuscript the differences in the reconstructions obtained by AE vs. VAE are discussed. This is interesting as they could provide insight into what is different about the models and what are the image features captured. - With the proposed approach, the authors have obtained a good balance between both tasks: accurate detection of mechanisms of action of the chemical used to treat the cells and realistic reconstruction of images which are more accurate than the ones obtained when using GANs. - The manuscript provides an update review of state-of-the-art methods, which are taken into account in the proposed methodology. 
Most of the text is written in a clear way; the problem to solve is well illustrated, each of the procedures followed in this work is either described in detail or cited properly, and the results are exposed concisely. - They are working on a clear and standalone version of the code, and a link will be included in the revised manuscript. Cons: - While both L_VAE and L_Di are defined, a final complete expression showing how both functions are combined is missing. The authors will clarify this point in the camera-ready manuscript. - The authors compared the proposed method with AE and VAE, but VAEGAN, cited as Larsen et al. (2016) in the paper, is also a related method and should be compared with. The authors agree that extending to methods that involve prior sampling is a potentially valuable extension. They plan to add a statement in the conclusion, although it seems that they do not plan to compare their approach with VAEGAN. - The proposed approach does not perform better than other methods reported in the literature (Singh et al. and Ando et al.). The authors propose to include the results from these methods in the evaluation. They will also highlight the fact that, being based on engineered features and transfer learning, respectively, these methods do not provide visualization capabilities, although they do have a greater ability to classify the compounds' mechanisms of action. - The evaluation does not report variance. The authors are currently re-training the models several times to obtain the variance of the performance based on the training/initialization randomness."""
13,"""Assessing Knee OA Severity with CNN attention-based end-to-end architectures""","['Convolutional Neural Network', 'End-to-end Architecture', 'Attention Algorithms', 'Medical Imaging', 'Knee Osteoarthritis']","""This work proposes a novel end-to-end convolutional neural network (CNN) architecture to automatically quantify the severity of knee osteoarthritis (OA) using X-Ray images, which incorporates trainable attention modules acting as unsupervised fine-grained detectors of the region of interest (ROI). The proposed attention modules can be applied at different levels and scales across any CNN pipeline helping the network to learn relevant attention patterns over the most informative parts of the image at different resolutions. We test the proposed attention mechanism on existing state-of-the-art CNN architectures as our base models, achieving promising results on the benchmark knee OA datasets from the osteoarthritis initiative (OAI) and multicenter osteoarthritis study (MOST). All the codes from our experiments will be publicly available on the github repository: pseudo-url ""","""The reviewers commented positively on the motivation to combine classification and localization into one step, on the evaluation of how best to make use of attention modules and on the clinical motivation of the work. However, the reviewers have pointed out insufficient clarity in the text and the overlengths of the manuscript. Furthermore, two of the reviewers mentioned that the evaluation appears lacking key comparisons and that the conclusions are heuristic. I believe that the points regarding clarity and lengths can be addressed for the camera ready version. The unclarities about the evalation cast doubt on the merit of the proposed method prevent this from being a top submission. Nevertheless, I follow the recommendation of 2/3 reviewers to accept this paper with a poster presentation. """
14,"""Generative Image Translation for Data Augmentation of Bone Lesion Pathology""","['Bone lesion', 'X-ray', 'generative models', 'data augmentation']","""Insufficient training data and severe class imbalance are often limiting factors when develop- ing machine learning models for the classification of rare diseases. In this work, we address the problem of classifying bone lesions from X-ray images by increasing the small number of positive samples in the training set. We propose a generative data augmentation approach based on a cycle-consistent generative adversarial network that synthesizes bone lesions on images without pathology. We pose the generative task as an image-patch translation problem that we optimize specifically for distinct bones (humerus, tibia, femur). In experi- mental results, we confirm that the described method mitigates the class imbalance problem in the binary classification task of bone lesion detection. We show that the augmented training sets enable the training of superior classifiers achieving better performance on a held-out test set. Additionally, we demonstrate the feasibility of transfer learning and apply a generative model that was trained on one body part to another.""","""As pointed out by the reviewers, there are several important aspects of the method that are not well explored. I'd suggest the authors applying important aspects of the reviewers' comments. There are no comparisons with recent state-of-the-art. The loss function in Eq. (4) has several different components. A comprehensive ablation study is missing!"""
15,"""CT-To-MR Conditional Generative Adversarial Networks for Improved Stroke Lesion Segmentation""","['Conditional adversarial networks', 'Image-to-Image translation', 'Ischemic stroke lesion segmentation', 'CT perfusion']","""Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. We show that segmentation networks trained with generated CT-to-MR inputs are able to outperform networks that make use of only CT perfusion input.""","""Overall the reviewers commend the paper for its good clinical motivation and clear writing style. However, all reviewers reject the paper for the following main reasons: * Missing methodological novelty since it is a straight-forward application of the pix2pix framework * Only marginal improvements over compared techniques with very large standard deviations * Qualitative example segmentations which do not seem to reflect the quantitative results with some of the reviewers suggesting they may have been selectively chosen. Furthermore, the authors did not submit any rebuttals. I thus follow the recommendation of the reviewers to reject the paper. """
16,"""Digitally Stained Confocal Microscopy through Deep Learning""","['Deep learning', 'Neural Networks', 'Digital Staining', 'Confocal Microscopy', 'Speckle Noise', 'CycleGAN']","""Specialists have used confocal microscopy in the ex-vivo modality to identify Basal Cell Carcinoma tumors with an overall sensitivity of 96.6% and specificity of 89.2% (Chung et al., 2004). However, this technology hasnt established yet in the standard clinical practice because most pathologists lack the knowledge to interpret its output. In this paper we propose a combination of deep learning and computer vision techniques to digitally stain confocal microscopy images into H&E-like slides, enabling pathologists to interpret these images without specific training. We use a fully convolutional neural network with a multiplicative residual connection to denoise the confocal microscopy images, and then stain them using a Cycle Consistency Generative Adversarial Network.""","""The authors have done a good job with the rebuttal to the comments of the reviewers. I find most of their explanation convincing and the promised addition to the paper adequate. In addition, two out of the three reviewers recommend the paper for acceptance, and as such I also recommend acceptance."""
17,"""Dynamic MRI Reconstruction with Motion-Guided Network""","['Dynamic MRI reconstruction', 'Motion estimation and compensation', 'Optical flow']","""Temporal correlation in dynamic magnetic resonance imaging (MRI), such as cardiac MRI, is informative and important to understand motion mechanisms of body regions. Modeling such information into the MRI reconstruction process produces temporally coherent image sequence and reduces imaging artifacts and blurring. However, existing deep learning based approaches neglect motion information during the reconstruction procedure, while traditional motion-guided methods are hindered by heuristic parameter tuning and long inference time. We propose a novel dynamic MRI reconstruction approach called MODRN that unitizes deep neural networks with motion information to improve reconstruction quality. The central idea is to decompose the motion-guided optimization problem of dynamic MRI reconstruction into three components: dynamic reconstruction, motion estimation and motion compensation. Extensive experiments have demonstrated the effectiveness of our proposed approach compared to other state-of-the-art approaches.""","""In response to the reviewers' comments, the authors have provided further experimental and architectural details, and reported cross-validation results. Additionally, the discussion between the authors and R#3 highlight some possible extensions for the proposed framework, such as (I) use of DC block, (II) training DRN and MEMC components end-to-end, and (III) use of complex images. Based on the comments below, I recommend acceptance of the manuscript. If the authors consider submitting an extension of their work, then I suggest that they include the references suggested by R#3 and R#1 (e.g. motion estimation in inverse problems (CVPR))."""
18,"""CNN-based segmentation with a semi-supervised approach for automatic cortical sulci recognition""","['CNN', 'segmentation', 'semi-supervision', 'cortical sulci']","""Despite the impressive results of deep learning models in computer vision, these techniques have difficulty achieving such high performance in medical imaging. Indeed, two challenges are inherent in this domain: the rarity of labelled images, while deep learning methods are known to be extremely data intensive, and the large size of images, generally in 3D, which considerably increases the need for computing power. To overcome these two challenges, we choose to use a simple CNN that tries to classify the central voxel of a 3D patch given to it as an input, while exploiting a large unlabelled database for pretraining. Thus, the use of patches limits the size of the neural network and the introduction of unlabelled images increases the amount of data used to feed the network. This semi-supervised approach is applied to the recognition of the cortical sulci: this problem is particularly challenging because it contains as many structures to be recognized as labelled subjects, i.e. only about sixty, and these structures are extremely variable. The results show a significant improvement compared to the BrainVISA model, the most used sulcus recognition toolbox.""","""Paper 101 - Rejection, due to issues on clarity, lack of methodological novelty This paper proposes semi-supervision of sulcii classification. * R#1 has mostly minor technical questions on CNNs - ""an educated guess"" Authors correctly addresses these minor concerns. * R#4 has concerns on awareness of recent state-of-the-art in cnn's use in medical imaging, practical value of the method, evaluation framework Authors acknowledges ""clumsiness"" and mentions the performance of comparable 3D-cnn versions appears equivalent to the proposed approach. Authors genuinely thank R#4 for the provided references. * R#3 raises issues on the training and evaluation setups (comparative framework) Authors acknowledges lack of clarity in the comparative study. * R#2 questions the methodological novelty, has technical questions, and concerns on the evaluation setup with brainvisa. Authors acknowledges lack of clarity, but fail in addressing novelty and explaining their low statistical scores. Conclusion: Reviewers recommendation are reject-reject-strong reject - mostly due to clarity, lack of methodological novelty. All but one are confident on their decisions. Global recommendation towards Rejection """
19,"""Deep Hierarchical Multi-label Classification of Chest X-ray Images""","['hierarchical multi-label classification', 'chest x-ray', 'computer aided diagnosis.']","""Chest X-rays (CXRs) are a crucial and extraordinarily common diagnostic tool, leading to heavy research for computer-aided diagnosis (CAD) solutions. However, both high classification accuracy and meaningful model predictions that respect and incorporate clinical taxonomies are crucial for CAD usability. To this end, we present a deep hierarchical multi-label classification (HMLC) approach for CXR CAD. Different than other hierarchical systems, we show that first training the network to model conditional probability directly and then refining it with unconditional probabilities is key in boosting performance. In addition, we also formulate a numerically stable cross-entropy loss function for unconditional probabilities that provides concrete performance improvements. To the best of our knowledge, we are the first to apply HMLC to medical imaging CAD. We extensively evaluate our approach on detecting 14 abnormality labels from the PLCO dataset, which comprises 198, 000 manually annotated CXRs. We report a mean area under the curve (AUC) of 0.887, the highest yet reported for this dataset. These performance improvements, combined with the inherent usefulness of taxonomic predictions, indicate that our approach represents a useful step forward for CXR CAD.""","""The authors present a deep hierarchical multi-label classification approach to CAD on chest radiographs, and test it on a large public dataset of nearly 200,000 images. The manuscript is well written and organized. The presented approach addresses the situation where the different classes can overlap. In addition, it incorporates both conditional and unconditional probabilities, and demonstrates improved performance when compared to a conventionally accepted alternative approach. In response to the reviewers requests for clarification, the authors have provided additional information to explain their rationale for certain choices in the experimental design. There is an appropriate level of detail here to better understand their work. The reviewers rightly point out that some additional experiments would be important for the authors to truly show that their method is an improvement over existing approaches, and the authors have responded appropriately, and where relevant, have described changes that they will make in their revision to address these comments."""
20,"""Learning joint lesion and tissue segmentation from task-specific hetero-modal data sets""","['joint learning', 'lesion segmentation', 'tissue segmentation', 'hetero-modality', 'weakly-supervision']","""Brain tissue segmentation from multimodal MRI is a key building block of many neuroscience analysis pipelines. It could also play an important role in many clinical imaging scenarios. Established tissue segmentation approaches have however not been developed to cope with large anatomical changes resulting from pathology. The effect of the presence of brain lesions, for example, on their performance is thus currently uncontrolled and practically unpredictable. Contrastingly, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly and is achieving performance levels making it of interest for clinical use. However, few existing approaches allow for jointly segmenting normal tissue and brain lesions. Developing a DNN for such joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on a task-specific hetero-modal imaging protocol. In this work, we propose a novel approach to build a joint tissue and lesion segmentation model from task-specific hetero-modal and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper-bound of the risk to deal with missing imaging modalities. For each task, our approach reaches comparable performance than task-specific and fully-supervised models.""","""The authors present a principled approach for multi-task learning from different hetero-modal datasets that are annotated for one specific task and demonstrated its application to joint tissue and lesion segmentation from brain MRI datasets. All reviewers express high enthusiasm for the work and agree that it would be a important contribution to the conference. Pros: - The problem of multi-task learning from task-specific hetero-modal datasets is important to make full use of limited clinical data. - The proposed method is mathematically grounded. - Paper is fairly well-written. - Experimental validation includes the use of challenge data for objective benchmarking of results. Cons: - There are some weaknesses in the experimental evaluation (although many criticisms have been addressed by the authors' comments here). - Long length of the paper and in particular some of the math explanations make it difficult for the reader to digest (although authors have commented that they have reformatted to reduce paper length). """
21,"""End-to-End Image-to-Tree for Vasculature Modeling""","['Vasculature', 'tree extraction', 'retinal vessels', 'diabetic retinopathy', 'visualization']","""Imaging can be used to capture detailed information about complex anatomical structures such as vessel trees. This can help to detect disease such as stenosis (blockages) which is important for diagnosis and clinical decision making. Current approaches for extracting vasculature from images involve generating binary segmentation maps followed by further processing. However, these binary maps may be sub-optimal, implicit representations of the underlying geometry while trees seem a more natural way of describing vasculature. In this work, we propose a novel image-to-tree approach, which is an end-to-end system for extracting explicit tree representations of vasculature from biomedical scans. We designed a moving patch algorithm that utilizes a U-Net component for predicting individual tree nodes. The methodology is presented for both synthetically generated tree images and publicly available Digital Retinal Vessel Extraction dataset (DRIVE). Using vascular tree construction, we discuss applications to thickness estimation in diabetic retinopathy prediction, and explore insights from visualizing these trees.""","""Given the facts that the reviews led to a clear overall recommendation (2 reject, 1 strong reject), and that authors expressed their agreement with many points that were brought up in the reviews, I believe the decision to reject this manuscript from MIDL 2019 is beyond dispute. However, I would like to echo a point that came up repeatedly in the reviews: Generating vessel trees from images is an important challenge in medical image analysis, and approaches to learn this task end-to-end will be a valuable addition to the literature. Therefore, authors should be encouraged to continue their efforts and to resubmit next year, or to another venue."""
22,"""Title""",[],"""abstract""","""The reviewers have all agreed on the value of the state-of-the-art review on action segmentation from kinematic data. Unfortunately, they have also recognized several issues regarding the lack of clarity of the contribution as well as language and presentation problems. """
23,"""XLSor: A Robust and Accurate Lung Segmentor on Chest X-Rays Using Criss-Cross Attention and Customized Radiorealistic Abnormalities Generation""","['Lung segmentation', 'chest X-ray', 'criss-cross attention', 'radiorealistic data augmentation']","""This paper proposes a novel framework for lung segmentation in chest X-rays. It consists of two key contributions, a criss-cross attention based segmentation network and radiorealistic chest X-ray image synthesis (i.e. a synthesized radiograph that appears anatomically realistic) for data augmentation. The criss-cross attention modules capture rich global contextual information in both horizontal and vertical directions for all the pixels thus facilitating accurate lung segmentation. To reduce the manual annotation burden and to train a robust lung segmentor that can be adapted to pathological lungs with hazy lung boundaries, an image-to-image translation module is employed to synthesize radiorealistic abnormal CXRs from the source of normal ones for data augmentation. The lung masks of synthetic abnormal CXRs are propagated from the segmentation results of their normal counterparts, and then serve as pseudo masks for robust segmentor training. In addition, we annotate 100 CXRs with lung masks on a more challenging NIH Chest X-ray dataset containing both posterioranterior and anteroposterior views for evaluation. Extensive experiments validate the robustness and effectiveness of the proposed framework. The code and data can be found from pseudo-url .""","""The manuscript proposes a lung segmentation framework for chest X-rays. The authors combine criss-cross attention networks (speedup benefits) with a multi-model unsupervised translation method (MUNIT) to generate virtual abnormal C-xray datasets using the expert-segmented normal C-xray images. The paper is well written in general. The application is very relevant and the results, promising. The approach is not completely original as it results from the translation of two recently published tools from the pure computer vision field to the medical imaging area. Pros: - The authors address the problem in the scenario where expert segmented datasets for abnormal cases are generally not available. - After being trained with normal and abnormal CXRs, MUNIT can output various generated abnormal CXRs from a given normal CXR and different random style codes. -The method performance is compared with other deep learning methods including the proposed segmentor without criss-cross attention for segmentation. - The authors present an extensive evaluation employing public available datasets, in which the lung lesions are mild, and their own dataset with severe lesions. The obtained results for public available datasets are similar to the state-of-the-art approaches, and particularly better for highly damaged lungs. - Both Criss-cross attention and specially, data augmentation with generative models could be easily extended and useful in similar segmentation problems. The authors promise to clean up the code and release it soon. Cons: - The section 'Criss-Cross Attention Based Network for Lung Segmentation' is quite difficult to understand. The authors will extend the description in the camera ready manuscript to make it more self-contained. - Some of the deformations obtained are not realistic. From the authors point of view, the lung segmentation performance does not degrade much but no number are reported. - The results for severe lesions are better for the proposed model. 
The overlapping measures reach, on average, the results obtained for mild lesions but, surprisingly, the Average Volume Dissimilarity is much smaller for the difficult cases. Reviewing the segmentations, the authors found that the masks' contours match the lung boundaries better in the damaged-lung dataset than in the public test set, but they did not report a reason for it. """
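For readers unfamiliar with the attention mechanism referenced above, here is a minimal PyTorch sketch of a criss-cross-style attention block, in which each pixel attends only to the pixels in its own row and column. This is a simplified illustration of the idea, not the authors' implementation (CCNet, for instance, applies the block recurrently so context propagates to the whole image); all names are illustrative.

```python
import torch
import torch.nn as nn

class CrissCrossAttention(nn.Module):
    """Simplified criss-cross attention: every pixel attends to all pixels
    sharing its row and its column (a sketch, not the official CCNet code)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Row attention: treat each row as a sequence of length w.
        qr = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        kr = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        vr = v.permute(0, 2, 3, 1).reshape(b * h, w, c)
        row = torch.softmax(qr @ kr.transpose(1, 2), dim=-1) @ vr
        row = row.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Column attention: treat each column as a sequence of length h.
        qc = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        kc = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        vc = v.permute(0, 3, 2, 1).reshape(b * w, h, c)
        col = torch.softmax(qc @ kc.transpose(1, 2), dim=-1) @ vc
        col = col.reshape(b, w, h, c).permute(0, 3, 2, 1)
        return self.gamma * (row + col) + x
```

Two passes of such a block already connect every pixel pair, which is why the criss-cross decomposition is much cheaper than full non-local attention.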
24,"""Training Deep Networks on Domain Randomized Synthetic X-ray Data for Cardiac Interventions""","['Domain Randomization', 'Imitation Learning', 'Cardiac Registration']","""One of the most significant challenges of using machine learning to create practical clinical applications in medical imaging is the limited availability of training data and accurate annotations. This problem is acute in novel multi-modal image registration applications where complete datasets may not be collected in standard clinical practice, data may be collected at different times and deformation makes perfect annotations impossible. Training machine learning systems on fully synthetic data is becoming increasingly common in the research community. However, transferring to real-world applications without compromising performance is highly challenging. Transfer learning methods adapt the training data, learned features, or the trained models to provide higher performance on the target domain. These methods are designed with the available samples, but if the samples used are not representative of the target domain, the method will overfit to the samples and will not generalize. This problem is exacerbated in medical imaging, where data of the target domain is extremely scarce. This paper proposes to use domain randomization (DR) to bridge the reality gap between the training and target domains, requiring no samples of the target domain. DR adds unrealistic perturbations to the training data, such that the target domain becomes just another variation. The effects of DR are demonstrated on a challenging task: 3D/2D cardiac model-to-X-ray registration, trained fully on synthetic data generated from 1711 clinical CT volumes. A thorough qualitative and quantitative evaluation of transfer to clinical data is performed. Results show that without DR, training parameters have little influence on performance on the training domain of digitally reconstructed radiographs, but can cause substantial variation on the target domain (X-rays). DR results in a significantly more consistent transfer to the target domain.""","""The paper shows that increasing the variability in synthetic data (""domain randomization"") can greatly improve results when applying models trained on such synthetic data to heterogeneous (clinical) data - without explicitly adapting to the target data, in the context of 2D/3D registration. All reviewers are enthusiastic about the potential impact. Main issues (clarity, confusion of qualitative versus quantitative results) are resolved in the rebuttal. Quantitative results are convincing in my opinion. The proposed approach is very specific to DRR generation. I therefore think it fits best as a poster if the paper is accepted. """
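To make the idea of domain randomization concrete, the following sketch perturbs a synthetic digitally reconstructed radiograph (DRR) with unrealistic random transformations so that real X-rays become "just another variation". The specific perturbations and ranges are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np

def domain_randomize(drr, rng=None):
    """Randomly perturb a 2D synthetic DRR with values in [0, 1].
    Perturbation set and ranges are illustrative, not the paper's."""
    rng = rng or np.random.default_rng()
    img = drr.copy()
    img = img ** rng.uniform(0.5, 2.0)                            # random gamma
    img = rng.uniform(0.7, 1.3) * img + rng.uniform(-0.1, 0.1)    # contrast / brightness
    img += rng.normal(0.0, rng.uniform(0.0, 0.05), img.shape)     # Gaussian noise
    if rng.random() < 0.5:                                        # random occluding box
        h, w = img.shape
        y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
        img[y:y + h // 4, x:x + w // 4] = rng.random()
    return np.clip(img, 0.0, 1.0)
```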
25,"""Unsupervisedly Training GANs for Segmenting Digital Pathology with Automatically Generated Annotations""","['Adversarial Networks', 'Histology', 'Kidney', 'Segmentation', 'Unsupervised']","""Recently, generative adversarial networks have exhibited excellent performance in semi-supervised image analysis scenarios. In this paper, we go even further by proposing a fully unsupervised approach for segmentation applications with prior knowledge of the objects' shapes. We propose and investigate different strategies to generate simulated label data and perform image-to-image translation between the image and the label domain using an adversarial model. For experimental evaluation, we consider the segmentation of the glomeruli, an application scenario from renal pathology. Experiments provide proof of concept and also confirm that the strategy for creating the simulated label data is of particular relevance for the stability of GAN training.""","""All reviewers suggest accepting this paper. After reviewing the paper and the replies to the critiques, I agree with the reviewers that it should be accepted. Generally, this paper is well written. I recommend the authors incorporate the reviewers' comments (e.g., a statistical test) in the final version."""
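As an illustration of the "simulated label data" strategy discussed above, here is a minimal sketch that draws random elliptical masks as stand-ins for roughly round objects such as glomeruli. The paper investigates several such generation strategies; this particular one is only an assumed example.

```python
import numpy as np

def simulate_label_map(size=256, n_objects=(1, 4), radius=(12, 40), rng=None):
    """Binary label map of randomly placed ellipses, usable as the 'label
    domain' side of unpaired image-to-label translation (illustrative only)."""
    rng = rng or np.random.default_rng()
    mask = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(rng.integers(*n_objects)):        # one to three objects here
        cy, cx = rng.integers(0, size, 2)            # random centre
        ry, rx = rng.integers(*radius, 2)            # random semi-axes
        mask[((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0] = 1
    return mask
```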
26,"""Image Synthesis with a Convolutional Capsule Generative Adversarial Network""","['Capsule Network', 'Generative Adversarial Network', 'Neurons', 'Axons', 'Synthetic Data', 'Segmentation', 'Image Synthesis', 'Image-to-Image Translation']","""Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable of synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise. ""","""There is a consensus among the reviewers that this is a good and useful work, and I concur with this. This work seems to be the first capsule-based conditional image generation: it uses a capsule network in a GAN generator. The authors report better performance than Pix2Pix, with the number of model parameters reduced significantly (by a factor of 7). This can be quite useful when the training data is limited, as is typical in medical image segmentation problems. The paper is well written, with a good overview of the subject. Furthermore, the authors promised to release the dataset/code for reproducibility. The only negative comment is, perhaps, the limited technical novelty. Essentially, the work is a combination of the SegCaps architecture from [LaLonde and Bagci 2018] and Pix2Pix from [Isola et al. 2017]. Having said that, the paper is a nice practical contribution to MIDL that will definitely make an excellent poster. """
27,"""Cluster Analysis in Latent Space: Identifying Personalized Aortic Valve Prosthesis Shapes using Deep Representations""","['personalized medicine', 'representation learning', 'aortic valve', 'personalized prosthetics', 'unsupervised learning']","""Due to the high inter-patient variability of anatomies, the field of personalized prosthetics has gained attention in recent years. One potential application is the aortic valve. Even though its shape is highly patient-specific, state-of-the-art aortic valve prostheses are not capable of reproducing this individual geometry. An approach to reach an economically reasonable personalization would be the identification of typical valve shapes using clustering, such that each patient could be treated with the prosthesis of the type that matches their individual geometry best. However, a cluster analysis directly in image space is not sufficient due to the difficulty of identifying reasonable metrics and the curse of dimensionality. In this work, we propose representation learning to perform the cluster analysis in the latent space, while the evaluation of the identified prosthesis shapes is performed in image space using generative modeling. To this end, we set up a data set of 58 porcine aortic valves and provide a proof-of-concept of our method using convolutional autoencoders. Furthermore, we evaluated the learned representation regarding its reconstruction accuracy, compactness and smoothness. To the best of our knowledge, this work presents the first approach to derive prosthesis shapes in a data-driven manner using clustering in latent space.""","""The paper proposes to cluster valve shapes after mapping the data to a low-dimensional space using typical auto-encoders. The application of such an approach seems to be novel and the authors have properly addressed the reviewers' comments. Although the idea of clustering in a low-dimensional space to address the curse of high data dimensionality is not necessarily novel, the application is quite interesting."""
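The pipeline described above (encode each shape, cluster in latent space, inspect representative shapes in image space) can be sketched as follows. The encoder here is a fixed random-projection stand-in for the trained convolutional encoder, so the snippet only illustrates the clustering step, not the learned representation.

```python
import numpy as np
from sklearn.cluster import KMeans

def encode(volume):
    """Stand-in for the trained convolutional encoder (hypothetical):
    a fixed random projection of the voxel grid to a 16-D latent code."""
    rng = np.random.default_rng(0)                   # fixed seed -> fixed projection
    proj = rng.normal(size=(volume.size, 16))
    return volume.reshape(-1) @ proj

valve_volumes = [np.random.rand(32, 32, 32) for _ in range(58)]  # dummy shape data
latents = np.stack([encode(v) for v in valve_volumes])           # (58, 16)

# Cluster in latent space; each cluster is a candidate prosthesis type.
kmeans = KMeans(n_clusters=4, n_init=10).fit(latents)
print(np.bincount(kmeans.labels_))                   # cluster sizes over the 58 valves
```

In the paper's setting, the decoder would then map each cluster centre back to image space to yield the prototype prosthesis shape.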
28,"""Sparse Structured Prediction for Semantic Edge Detection in Medical Images""","['sparsity', 'structured prediction', 'edge detection', 'deep learning']","""In medical image analysis most state-of-the-art methods rely on deep neural networks with learned convolutional filters. For pixel-level tasks, e.g. multi-class segmentation, approaches built upon UNet-like encoder-decoder architectures show impressive results. However, at the same time, grid-based models often process images unnecessarily densely, introducing large time and memory requirements. Therefore, it is still a challenging problem to deploy recent methods in the clinical setting. Evaluating images at only a limited number of locations has the potential to overcome those limitations and may also enable the acquisition of medical images using adaptive sparse sampling, which could substantially reduce scan times and radiation doses. In this work we investigate the problem of semantic edge detection in CT and X-ray images from sparse sampling locations. We propose a deep learning architecture that comprises two parts: 1) a lightweight fully-convolutional CNN to extract informative sampling points and 2) our novel sparse structured prediction net (SSPNet). The SSPNet processes image patches on a graph generated from the sampled locations and outputs semantic edge activations for each patch, which are accumulated in an array via a weighted voting scheme to recover a dense prediction. We conduct several ablation experiments for our network on a dataset consisting of 10 abdominal CT slices from VISCERAL and evaluate its performance against a baseline UNet on the JSRT database of chest X-rays.""","""The paper has received positive feedback from all three reviewers. Some of the concerns raised by the reviewers were addressed in the author rebuttal. I'd suggest the authors apply the important aspects of the reviewers' comments in the final version of the paper. The paper is on an interesting topic and proposes an interesting technique for edge detection on a semantic level!"""
29,"""Diffeomorphic Autoencoders for LDDMM Atlas Building""","['image registration', 'atlas-building', 'deep learning', 'autoencoder', 'LDDMM']","""In this work, we present an example of the integration of conventional global and diffeomorphic image registration methods with deep learning. Our method employs a form of autoencoder in which the encoder network maps an image to a transformation and the decoder interpolates a deformable template to reconstruct the input. This enables image-based registration to occur simultaneously with training of deep neural networks, as opposed to current sequential optimization methods. We apply this approach to atlas creation, showing that a system that jointly estimates an atlas image while training the registration encoder network results in a high quality atlas despite drastic dimension reduction. In addition, the shared parametrization for deformations offered by the neural network enables training the atlas with stochastic gradient descent using minibatches on a single GPU. We demonstrate this approach using affine transformations and diffeomorphisms in the LDDMM vector momentum geodesic shooting formulation using the OASIS-3 dataset.""","""Reviewers positively highlighted that the authors made their implementation publicly available. All reviewers agree that the work is promising, interesting, and would likely create good discussions. However, they also raised serious concerns about the current state of the presented experiments, in particular regarding issues of overfitting and the separation of training and testing data. In the submitted manuscript the approach was tested with 25 samples only. While the authors' rebuttal highlights that more experiments with a much larger number of samples (and presumably also with a clear train/test split) are in the works, it appears unclear at this point what the results will show. There is also relatively limited discussion or comparison to related approaches (e.g., VoxelMorph or Quicksilver), though such a discussion could easily be added in a final version of the paper. Based on these concerns and the rebuttal by the authors, two reviewers now recommend rejection and only one recommends acceptance. Hence, the work may not be ready for inclusion in MIDL at its current stage and would greatly benefit from the inclusion and discussion of the experiments currently being conducted. """
30,"""Segmenting Potentially Cancerous Areas in Prostate Biopsies using Semi-Automatically Annotated Data""","['Deep Learning', 'unet', 'prostate cancer', 'ground truth', 'segmentation']","""Gleason grading specified in ISUP 2014 is the clinical standard in staging prostate cancer and the most important part of the treatment decision. However, the grading is subjective and suffers from high intra- and inter-user variability. To improve the consistency and objectivity in the grading, we introduced glandular tissue WithOut Basal cells (WOB) as the ground truth. The presence of basal cells is the most accepted biomarker for benign glandular tissue and the absence of basal cells is a strong indicator of acinar prostatic adenocarcinoma, the most common form of prostate cancer. Glandular tissue can objectively be assessed as WOB or not WOB by using specific immunostaining for glandular tissue (Cytokeratin 8/18) and for basal cells (Cytokeratin 5/6 + p63). Moreover, WOB allowed us to develop a semi-automated data generation pipeline to speed up the tremendously time-consuming and expensive process of annotating whole slide images by pathologists. We generated 295 prostatectomy images exhaustively annotated with WOB. Then we used our Deep Learning Framework, which achieved the 2nd best reported score in the Camelyon17 Challenge, to train networks for segmenting WOB in needle biopsies. Evaluation of the model on 63 needle biopsies showed promising results, which were further improved by fine-tuning the model on 118 biopsies annotated with WOB, achieving an F1-score of 0.80 and a Precision-Recall AUC of 0.89 at the pixel level. Then we compared the performance of the model against that of 3 pathologists on 17 biopsies annotated independently using only H&E staining. The comparison demonstrated that the model performed on a par with the pathologists. Finally, the model detected and accurately outlined existing WOB areas in two biopsies incorrectly annotated as totally WOB-free biopsies by three pathologists and in one biopsy by two pathologists.""","""Pros: This paper presents an interesting and novel idea of a deep learning (DL) framework to segment potentially cancerous areas in prostate biopsies using semi-automatically generated data. All reviewers agreed on the significance, quality, clarity and alignment with the MIDL conference. Cons: Some additional details in the method section would be needed. However, the authors addressed them in their rebuttals and are expected to modify the camera-ready version accordingly. The application of this paper is quite innovative and the proposed method could directly assist the clinical routine. """
31,"""CARE: Class Attention to Regions of Lesion for Classification on Imbalanced Data""","['Attention Mechanism', 'Imbalanced Data', 'Small Samples', 'Skin Lesion', 'Pneumonia Chest X-ray']","""To date, it is still an open and challenging problem for intelligent diagnosis systems to effectively learn from imbalanced data, especially with large samples of common diseases and much smaller samples of rare ones. Inspired by the process of human learning, this paper proposes a novel and effective way to embed attention into the machine learning process, particularly for learning characteristics of rare diseases. This approach does not change the architectures of the original CNN classifiers and can therefore be plugged directly into any existing CNN architecture. Comprehensive experiments on a skin lesion dataset and a pneumonia chest X-ray dataset showed that paying attention to lesion regions of rare diseases during learning not only improved the classification performance on rare diseases, but also the mean class accuracy. ""","""This paper presents a new approach to learning from imbalanced data and validates it using two medical applications: skin image classification (with multiple disease categories) and pneumonia detection in chest X-ray images. The approach is based on an additional labeling step for the rare cases (bounding box generation) and explicitly embeds attention into the learning process of a neural network. The latter is achieved by adding a new loss to the classification network that forces the network to attend to the labelled regions of interest. Comprehensive experiments suggest that the approach is effective. The judgement of the paper was mixed but mainly positive (2 accepts, 1 reject). The reviewers agree that the paper addresses a relevant problem with an interesting approach. They also highlight the relevant property that the method is independent of the choice of (deep) classification architecture and thus widely applicable. Finally, they agree that the manuscript is well-written and includes a convincing experimental analysis, including a comparison to state-of-the-art competing approaches to addressing class imbalance. The points of criticism raised were: 1. The absence of more examples of one class is compensated for by more resource-intensive labeling of that class (R2,R3). While this is an inherent property of the approach, it can be argued that the method has specifically been designed for cases when only up to a few hundred samples are available for the rare class, and hence, human labeling effort is acceptable. 2. The usage of the term attention should be reconsidered due to the supervised learning approach (R1,R2). The authors agree and aim to address this point in the revised manuscript. 3. A more thorough discussion of the contribution in the context of the state of the art is required (R2,R3). This will be addressed in the revised version of the manuscript according to the authors' rebuttal. 4. Several aspects related to methodology (R2,R3) and experiments (R1-3) require further analysis/clarification, as detailed in the individual reviews. The authors provided point-to-point responses corresponding to these comments. Moreover, they performed an additional experiment in order to show that simply adding more weight to the minority class does not generally increase performance on the rare class and typically downgrades performance on the majority class(es). 
Due to the general support of the reviewers and the convincing rebuttal, I suggest acceptance of the paper. """
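A minimal sketch of the kind of loss described above (forcing the network to attend to labelled lesion regions) follows; it penalises class-activation energy outside the annotated bounding box. This is an assumed illustration of the principle, not the paper's exact formulation.

```python
import torch

def attention_loss(cams, boxes):
    """Fraction of class-activation energy outside the lesion bounding box,
    averaged over the batch (sketch of supervising attention with boxes).
    cams:  (B, H, W) non-negative activation maps for the true class
    boxes: list of (y0, x0, y1, x1) integer pixel boxes, one per image
    """
    losses = []
    for cam, (y0, x0, y1, x1) in zip(cams, boxes):
        total = cam.sum() + 1e-6
        inside = cam[y0:y1, x0:x1].sum()
        losses.append((total - inside) / total)   # attention mass outside the box
    return torch.stack(losses).mean()
```

Such a term would be added to the usual cross-entropy loss, leaving the classification architecture itself untouched, which is what makes the approach plug-and-play.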
32,"""AnatomyGen: Deep Anatomy Generation From Dense Representation With Applications in Mandible Synthesis""","['Deep generative model', '3D convolutional neural network', 'Shape generation', 'Geometric morphometrics', 'Shape interpolation']","""This work is an effort in human anatomy synthesis using deep models. Here, we introduce a deterministic deep convolutional architecture to generate human anatomies represented as 3D binarized occupancy maps (voxel-grids). The shape generation process is constrained by the 3D coordinates of a small set of landmarks selected on the surface of the anatomy. The proposed learning framework is empirically tested on the mandible bone, where it was able to reconstruct the anatomies from landmark coordinates with an average landmark-to-surface error of 1.42 mm. Moreover, the model was able to linearly interpolate in the Z-space and smoothly morph a given 3D anatomy to another. The proposed approach can potentially be used in semi-automated segmentation with manual landmark selection as well as biomechanical modeling. Our main contribution is to demonstrate that deep convolutional architectures can generate high fidelity complex human anatomies from abstract representations.""","""In this work a generative convolutional shape network is proposed that creates 3D voxel segmentations from only anatomical landmarks. The rating of this submission is not straightforward. On the one hand, all reviewers see some merit in the method and interest in the generated 3D shapes with relatively high fidelity. On the other hand, they all find numerous choices confusing and are not entirely convinced of a realistic application. In addition, there is valid criticism that no proper baseline was evaluated. I have to agree that the scope or impact of this method (which only makes sense when a shape has to be created based on landmarks alone) could be limited (because in general these landmarks will have to come from some segmentation). I would imagine that face animation, which models a 3D mesh with many more vertices than landmarks, would be the most closely related computer graphics area (cf. 3D Shape Regression for Real-time Facial Animation, Siggraph 2013). Furthermore, there have been papers on even more ill-posed problems, e.g. estimating 3D shape from 2D landmarks (see 3D Shape Estimation from 2D Landmarks: A Convex Relaxation Approach, CVPR 2015). So it is somewhat hard to believe that no better baseline than the mean shape was employed (after the rebuttal). What would the performance of a simple (regularised) warping algorithm based on known training segmentations (with their landmarks) be? The very relevant work on convolutional autoencoders with latent spaces that do not coincide with landmarks (e.g. ACNNs) should be discussed in more detail. One could easily imagine an alternative strategy, where a CAE is trained for reconstruction with segmentations using a 64D latent space and a simple fully-connected MLP is used to map the 29x3D landmark positions into that space. Would this lead to superior results? Thus a more thorough evaluation of hyper-parameter and architecture choices would also have been important, as mentioned in the reviews. Nevertheless, despite these shortcomings, I narrowly tend towards acceptance because, as argued in the final reviewer evaluations, the method is of interest and to some degree novel, and the results are visually convincing. Yet the paper is still in a somewhat preliminary stage."""
33,"""Iterative learning to make the most of unlabeled and quickly obtained labeled data in histology""","['Digital pathology', 'convolutional neural networks', 'kidney', 'segmentation', 'weakly supervised.']","""Due to the increasing availability of digital whole slide scanners, the importance of image analysis in the field of digital pathology has increased significantly. A major challenge, and an equally big opportunity, for analyses in this field is given by the wide range of tasks and different histological stains. Although sufficient image data is often available for training, the requirement for corresponding expert annotations inhibits clinical deployment. Thus, there is an urgent need for methods which can be effectively trained with or adapted to a small amount of labeled training data. Here, we propose a method for optimizing the overall trade-off between (low) annotation effort and (high) segmentation accuracy. For this purpose, we propose an approach based on a weakly supervised and an unsupervised learning stage relying on few roughly labeled samples and many unlabeled samples. Although the idea of weakly annotated data is not new, we are the first to investigate its applicability to digital pathology in a state-of-the-art machine learning setting.""","""In order to alleviate the challenges of acquiring large quantities of manually annotated digital pathology images, the authors suggest an approach for better utilization of unlabeled or only weakly annotated data. This is an important direction of research in the field of medical imaging and relevant for the conference. Two out of three reviewers recommend accepting the paper with high confidence. One reviewer, however, recommends rejection. The negative review's main concern, that the presentation of the paper is confusing and unclear, is somewhat contradicted by the other reviewers, who comment that the paper is well written and easy to follow. The authors have answered this criticism and most other critical points sufficiently. I am confident that the authors can address the negative comments in a minor revision of the paper."""
34,"""Transfer Learning by Adaptive Merging of Multiple Models""","['Transfer Learning', 'Life-long learning', 'catastrophic forgetting', 'Tumor Segmentation']","""Transfer learning has been an important ingredient of state-of-the-art deep learning models. In particular, it has significant impact when little data is available for the target task, such as in many medical imaging applications. Typically, transfer learning means pre-training the target model on a related task which has sufficient data available. However, often pre-trained models from several related tasks are available, and it would be desirable to transfer their combined knowledge by automatic weighting and merging. For this reason, we propose T-IMM (Transfer Incremental Mode Matching), a method to leverage several pre-trained models, which extends the concept of Incremental Mode Matching from lifelong learning to the transfer learning setting. Our method introduces layer-wise mixing ratios, which are learned automatically and fuse multiple pre-trained models before fine-tuning on the new task. We demonstrate the efficacy of our method by the example of brain tumor segmentation in MRI (BraTS 2018 Challenge). We show that fusing weights according to our framework, merging two models trained on general brain parcellation, can greatly enhance the final model performance for small training sets when compared to standard transfer methods or state-of-the-art initialization. We further demonstrate that the benefit remains significant even when training on the entire BraTS 2018 data set (255 patients).""","""The paper proposes a new approach to enable simultaneous pretraining on several different tasks. While previous work has focused on pretraining on single tasks or sequential pretraining, the authors of this paper propose fusion of multiple pre-trained models before fine-tuning on the target task. This is achieved with layer-wise mixing ratios that are learned automatically. Validation is performed on the MICCAI 2018 BraTS challenge using 3 tasks (i.e. two pretraining tasks). All reviewers agree that the paper is well-written and addresses an important problem with an innovative approach. Although they share criticism related to the limited experimental validation with only one dataset and three tasks, they unanimously support acceptance of the manuscript (three accepts). Several aspects related to methods and experiments require further discussion, as detailed in the individual reviews. These should be addressed in the revised version of the manuscript. Due to the unanimous support of the reviewers, I suggest acceptance of the paper."""
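The layer-wise merging idea can be sketched as follows: each layer of the target network is initialised as a convex combination of the corresponding layers of two pre-trained models. This is a simplified sketch; in the paper the mixing ratios are learned automatically before fine-tuning, whereas here they are assumed given (one scalar per parameter tensor).

```python
import torch

def merge_pretrained(target, model_a, model_b, mix_logits):
    """Initialise `target` by fusing two pre-trained models with per-layer
    mixing ratios, in the spirit of T-IMM (sketch, not the authors' code)."""
    params_a = dict(model_a.named_parameters())
    params_b = dict(model_b.named_parameters())
    with torch.no_grad():
        for i, (name, p) in enumerate(target.named_parameters()):
            alpha = torch.sigmoid(mix_logits[i])      # mixing ratio in (0, 1)
            p.copy_(alpha * params_a[name] + (1 - alpha) * params_b[name])
```

The merged weights then serve as the initialization for fine-tuning on the target task, replacing the usual single-model warm start.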
35,"""Stain-transforming cycle-consistent generative adversarial networks for improved segmentation of renal histopathology""","['Deep learning', 'generative adversarial networks', 'medical imaging', 'stain transformation']","""The performance of deep learning applications in digital histopathology can deteriorate significantly due to staining variations across centers. We employ cycle-consistent generative adversarial networks (cycleGANs) for unpaired image-to-image translation, facilitating between-center stain transformation. We find that modifications to the original cycleGAN architecture make it more suitable for stain transformation, creating artificially stained images of high quality. Specifically, changing the generator model to a smaller U-net-like architecture, adding an identity loss term, and increasing the batch size and the learning rate all led to improved training stability and performance. Furthermore, we propose a method for dealing with tiling artifacts when applying the network on whole slide images (WSIs). We apply our stain transformation method on two datasets of PAS-stained (Periodic Acid-Schiff) renal tissue sections from different centers. We show that stain transformation is beneficial to the performance of cross-center segmentation, raising the Dice coefficient from 0.36 to 0.85 and from 0.45 to 0.73 on the two datasets.""","""The reviewers agree that it is interesting to see how weakly supervised synthesis can better bridge the gap between datasets/centers, compared with plain intensity augmentation. In my opinion, this is the very interesting point that the authors should emphasize in the abstract (i.e., how Dice goes up from .78 to .85), rather than the (in my opinion) lesser contributions of the identity loss or (especially) smoothing the transition across tiles. I also missed a comparison with a simple model, e.g., based on optimizing an intensity mapping combining the transforms illustrated in Figure 1, in order to minimize the histogram distance to the source domain. I also encourage the authors to incorporate the reviewers' comments, to further increase the quality of the manuscript."""
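The identity loss term mentioned in the abstract is a standard cycleGAN addition and can be sketched as follows: a generator fed an image that is already in its output stain should return it unchanged. Generator names and the weighting follow common cycleGAN conventions, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def identity_loss(gen_ab, gen_ba, real_a, real_b, weight=5.0):
    """Identity term added to the cycleGAN objective (sketch):
    gen_ab maps stain A to stain B, gen_ba maps B to A, so feeding
    gen_ba an A image (and gen_ab a B image) should be a no-op."""
    loss_a = F.l1_loss(gen_ba(real_a), real_a)
    loss_b = F.l1_loss(gen_ab(real_b), real_b)
    return weight * (loss_a + loss_b)
```

In stain transformation this term discourages the generators from altering tissue morphology when no stain change is needed, which helps keep the translated images usable for downstream segmentation.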
36,"""Dynamic Pacemaker Artifact Removal (DyPAR) from CT Data using CNNs""","['Cardiac CT', 'Metal Artifact Reduction', 'Convolutional Neural Networks']","""Metal objects in the human heart like implanted pacemakers frequently occur in elderly patients. Due to cardiac motion, they are not static during the CT acquisition and lead to heavy artifacts in reconstructed CT image volumes. Furthermore, cardiac motion precludes the application of standard metal artifact reduction methods which assume that the object does not move. We propose a deep-learning-based approach for dynamic pacemaker artifact removal which deals with metal shadow segmentation directly in the projection domain. The data required for supervised learning is generated by introducing synthetic pacemaker leads into 14 clinical data sets without pacemakers. CNNs achieve a Dice coefficient of 0.913 on test data with synthetic metal leads. Application of the trained CNNs on eight data sets with real pacemakers and subsequent inpainting of the post-processed segmentation masks leads to significantly reduced metal artifacts in the reconstructed CT image volumes.""","""This paper focuses on a very useful medical application: metal artifact removal in CT imaging in the presence of cardiac motion. To handle such artifacts, the authors present an adapted CNN-based segmentation with an imaging-modality-specific data generation process to mitigate the scarcity of labeled data. Based on the reviewers' comments and the respective authors' responses, the below summarizes identified key strengths and weaknesses, which should be addressed in the camera ready, along with recommendations for future work. Strengths: - The paper addresses the perturbation of the metal shadow that is due to cardiac motion by performing the segmentation and subsequent inpainting in the projection domain, without relying on an initial forward projection (as is the case with the existing standard method that assumes static objects). - The alleviation of labeled data scarcity by introducing synthetic leads into real data without pacemakers; this mixed-reality approach can be generalized to different image-to-image tasks where it is almost impossible to obtain annotated data. - The evaluation on clinical datasets with pacemaker leads is intriguing, provided that the network was trained on synthetic leads. Weaknesses (to be addressed in the camera ready): - The paper is well-written, yet there are missing technical aspects that might confuse the reader and would hinder reproducing the results. Examples are: (1) the exact definition of a ""data set""; (2) the distinction between test data with and without real pacemakers; (3) details about how the fully convolutional network is exploited to segment a whole projection view in a single step; (4) discussion and insights for the network architecture and hyperparameter tuning (in particular important hyperparameters that affect performance). - Lack of experiments on real data without pacemakers, real data with other metals (e.g., stents), and data with calcifications. The authors performed experiments on two additional clinical cases during the rebuttal period, which are recommended to be added to the camera ready to demonstrate the utility of the proposed method, especially for cases that would induce false positives. - While the standard MAR approach is not suitable for pacemaker artifact removal in the case of cardiac motion, a comparison with the standard MAR approach is still legitimate. This will further demonstrate the utility of the proposed approach. 
Recommendations for future work: - More comprehensive experiments on contrast versus non-contrast enhanced scans. - Comparison to expert-labeled data for real data with pacemakers. - Extending the data generation process to other modalities. - An end-to-end DyPAR where the segmentation task and the inpainting task are jointly learned to leverage each other, rather than having a two-stage process."""
37,"""Deep Reinforcement Learning for Subpixel Neural Tracking""","['tracking', 'tracing', 'neuron', 'axon', 'reinforcement learning', 'transfer learning']","""Automatically tracing elongated structures, such as axons and blood vessels, is a challenging problem in the field of biomedical imaging, but one with many downstream applications. Real, labelled data is sparse, and existing algorithms either lack robustness to different datasets, or otherwise require significant manual tuning. Here, we instead learn a tracking algorithm in a synthetic environment, and apply it to tracing axons. To do so, we formulate tracking as a reinforcement learning problem, and apply deep reinforcement learning techniques with a continuous action space to learn how to track at the subpixel level. We train our model on simple synthetic data and test it on mouse cortical two-photon microscopy images. Despite the domain gap, our model approaches the performance of a heavily engineered tracker from a standard analysis suite for neuronal microscopy. We show that fine-tuning on real data improves performance, allowing better transfer when real labelled data is available. Finally, we demonstrate that our model's uncertainty measure, a feature lacking in hand-engineered trackers, corresponds with how well it tracks the structure.""","""This paper investigates an interesting medical image understanding task: tracing thin structures in biomedical images. To mitigate the scarcity of labeled data, the tracing problem is formulated as a reinforcement learning problem where the agent learns to make an optimal sequence of decisions (or movements in the image space) to define the trace of the structure(s) of interest, given a starting point per trace. This formulation makes ""train on synthetic data then test on real data"" possible. CNNs were used to avoid hand-engineered tracers, and the entropy of the agent state was used as a measure of uncertainty (which is lacking in traditional methods). Based on the reviewers' comments and the respective authors' responses, the below summarizes identified key weaknesses that should be addressed in the camera ready, along with recommendations for future work. - More details about the quality of the synthetic data, e.g., how close its statistics are to the real data, and the reasoning behind the very small validation set (the authors' response regarding the size of the validation set is not convincing; how large or small a validation set should be relates to how complex the distribution of the synthetic data is, rather than to the training and validation sets being drawn from the same generative process). - To be consistent with the main claim of the paper (""... different biomedical image datasets""), the evaluation needs to include other data sets to demonstrate the generality of the proposed approach. Given the limited availability of labeled data, qualitative results could be considered. - The related work section should cover recent reinforcement-learning-based methods for image analysis tasks (e.g. landmark detection) and pure segmentation strategies for thin structures. - The paper lacks some technical details, including how the subvoxel accuracy is obtained and the intuition behind the heuristics considered in defining the reward function. - Comparisons with APP2 should be added. """
38,"""Group-Attention Single-Shot Detector (GA-SSD): Finding Pulmonary Nodules in Large-Scale CT Images""","['Lung Nodule Detection', 'Single Shot Detector', 'Attention Network', 'Group Convolution']","""Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients, and yet it is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on a 3D single-shot detector (SSD), called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and the attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PN detection. Extensive experimental results show that the proposed group-attention SSD outperforms the conventional SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.""","""The reviewers agree on the novelty of the group attention module and its suitability for the given task of pulmonary nodule detection. Additionally, they highlight the scale and diversity of the CT dataset used in the training/evaluation of the proposed model. It's a shame to see that the manuscript does not mention or discuss any of the previous work produced by the medical image analysis community, which utilised different sorts of attention modules in classification, localisation, and segmentation tasks. I recommend that the authors review the existing literature on attention modelling more carefully. """
39,"""Hybrid Rotation Invariant Networks for small sample size Deep Learning""","['rotational invariance', 'regularization', 'colorectal cancer', 'pancreatic cancer', 'liver lesion segmentation']","""Medical image analysis using deep learning has become a topic of steadily growing interest. While model capacity is continuously increasing, limited data is still a major issue for deep learning in medical imaging. Virtually all past approaches work with a high amount of regularization as well as systematic data augmentation. In explorative tasks, realistic data augmentation with affine transformations may not always be possible, which prevents models from generalizing effectively. Within this paper, we propose inherently rotationally invariant convolutional layers enabling the model to develop invariant features from limited training data. Our approach outperforms classical convolutions on the CIFAR-10, CIFAR-100, and STL-10 datasets. We show the transferability to clinical scenarios by applying our approach to oncologic tasks for metastatic colorectal cancer treatment assessment and liver lesion segmentation in pancreatic cancer patients.""","""This paper introduces rotationally invariant convolutional filters, which are better able to learn from limited training data. The reviewers raised a number of concerns with this paper, including missing citations of key work, the lack of clarity in the modeling section, and the overly optimistic discussion in light of the modest performance. The authors provided an equally detailed point-by-point response; however, most of these comments were apologies for their own omissions and promises to add the information in the camera-ready submission. Overall, I believe that the discussion reinforced the fact that, while the rotationally invariant convolutions are an interesting idea, the paper needs some careful refinement before publication. Therefore, I would agree with the two reviewers who advocated rejection from MIDL. """
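One simple way to build the inherent rotational invariance described above is to pool each filter's response over rotated copies of its kernel. The sketch below max-pools over the four 90-degree rotations; it illustrates the general principle and is not necessarily the paper's construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotInvConv2d(nn.Module):
    """Convolution pooled over the four 90-degree rotations of each filter,
    so the per-location response does not depend on which of the four
    orientations a pattern appears in (a minimal sketch)."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.padding = padding

    def forward(self, x):
        responses = [F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)),
                              padding=self.padding) for r in range(4)]
        return torch.stack(responses, 0).max(0).values   # orientation max-pool
```

Because the invariance is built into the layer, the network does not have to spend limited training samples learning rotated copies of the same feature, which is the motivation in the small-sample setting.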
40,"""3D multirater RCNN for multimodal multi class segmentation of extremely small objects (ESO)""",[],"""Extremely small objects (ESO) have become observable on clinical routine magnetic resonance imaging acquisitions, thanks to a reduction in acquisition time at higher resolution. Despite their small size (usually pseudo-formula 10 voxels per object for an image of more than pseudo-formula voxels), these markers reflect tissue damage and need to be accounted for to investigate the complete phenotype of complex pathological pathways. In addition to their very small size, variability in shape and appearance leads to high labelling variability across human raters, resulting in a very noisy gold standard. Such objects are notably present in the context of cerebral small vessel disease, where enlarged perivascular spaces and lacunes, commonly observed in the ageing population, are thought to be associated with acceleration of cognitive decline and risk of dementia onset. In this work, we redesign the RCNN model to scale to 3D data, and to jointly detect and characterise these important markers of age-related neurovascular changes. We also propose training strategies enforcing the detection of extremely small objects, ensuring a tractable and stable training process.""","""This paper presents a straightforward approach to segmenting small objects using an RCNN network architecture. The novelty is moderate and the work needs further improvement in terms of stricter evaluation. The authors have responded to most of the critiques. The paper is generally well written. Therefore, I would suggest accepting this paper at MIDL 2019 as a poster. """
41,"""Learning beamforming in ultrasound imaging""","['Ultrasound Imaging', 'Deep Learning', 'Beamforming']","""Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. In this paper, we demonstrate that replacing the traditional ultrasound processing pipeline with a data-driven, learnable counterpart leads to significant improvement in image quality. Moreover, we demonstrate that greater improvement can be achieved through a learning-based design of the transmitted beam patterns simultaneously with learning an image reconstruction pipeline. We evaluate our method on an in-vivo first-harmonic cardiac ultrasound dataset acquired from volunteers and demonstrate the significance of the learned pipeline and transmit beam patterns on the image quality when compared to standard transmit and receive beamformers used in high frame-rate US imaging. We believe that the presented methodology provides a fundamentally different perspective on the classical problem of ultrasound beam pattern design.""","""The paper provides an important contribution to the field and the majority of the reviewers would like this work to be accepted. The authors provide reasonable justifications for the questions raised by the reviewers. They also mention that, in order to provide reproducible research, the code will be released, which in turn will increase the impact of the work. In the revised version, the authors should include all the new information they provided in their rebuttal. """
42,"""SPDA: Superpixel-based Data Augmentation for Biomedical Image Segmentation""","['Superpixel', 'Perception-preserving transformation', 'Data augmentation', 'Biomedical image segmentation']","""In biomedical image segmentation, supervised training of a deep neural network aims to ""teach"" the network to mimic human visual perception that is represented by image-and-label pairs in the training data. Superpixelized (SP) images are visually perceivable to humans, but a conventionally trained deep learning model often performs poorly when working on SP images. To better mimic human visual perception, we think it is desirable for the deep learning model to be able to perceive not only raw images but also SP images. In this paper, we propose a new superpixel-based data augmentation (SPDA) method for training deep learning models for biomedical image segmentation. Our method applies a superpixel generation scheme to all the original training images to generate superpixelized images. The SP images thus obtained are then jointly used with the original training images to train a deep learning model. Our experiments with SPDA on four biomedical image datasets show that SPDA is effective and can consistently improve the performance of state-of-the-art fully convolutional networks for biomedical image segmentation in 2D and 3D images. Additional studies also demonstrate that SPDA can practically reduce the generalization gap.""","""The authors explore a method to augment the training dataset with superpixel-based representations of the input data. The method is well-explained, and the set of experiments is properly motivated in the text and well designed to demonstrate the effectiveness of the proposed approach. While the level of innovation is relatively minor (superpixels have been around for some time now), the authors do compare the performance of adding SPDA to different network designs, and the results seem promising."""
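The augmentation itself is straightforward to sketch: generate superpixels for a training image, replace each superpixel by its mean colour, and add the result (with the unchanged label map) to the training set. The snippet below uses SLIC superpixels as an assumed choice and expects an RGB image array; the paper's superpixel scheme and segment counts may differ.

```python
import numpy as np
from skimage.segmentation import slic

def superpixelize(image, n_segments=1000):
    """Superpixelized counterpart of a training image: every pixel is
    replaced by the mean colour of its SLIC superpixel (SPDA sketch)."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    sp_image = np.zeros_like(image, dtype=float)
    for s in np.unique(segments):
        sp_image[segments == s] = image[segments == s].mean(axis=0)
    return sp_image

# Usage: train on (image, label) and (superpixelize(image), label) pairs.
```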
43,"""Boundary loss for highly unbalanced segmentation""","['Surface loss', 'unbalanced dataset', 'semantic segmentation', 'deep learning']","""Widely used loss functions for convolutional neural network (CNN) segmentation, e.g., Dice or cross-entropy, are based on integrals (summations) over the segmentation regions. Unfortunately, it is quite common in medical image analysis to have highly unbalanced segmentations, where standard losses contain regional terms with values that differ considerably, typically by several orders of magnitude, across segmentation classes, which may affect training performance and stability. The purpose of this study is to build a boundary loss, which takes the form of a distance metric on the space of contours (or shapes), not regions. We argue that a boundary loss can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary (interface) between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complementary to regional losses. Unfortunately, it is not straightforward to represent the boundary points corresponding to the regional softmax outputs of a CNN. Our boundary loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which avoids completely local differential computations involving contour points. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. Our boundary loss has been validated on two benchmark datasets corresponding to difficult, highly unbalanced segmentation problems: the ischemic stroke lesion (ISLES) and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8% improvement in Dice score and 10% improvement in Hausdorff score. It also yielded a more stable learning process. Our code is publicly available. ""","""This work proposes a new segmentation loss which measures boundary distances via level set functions. In particular, this loss is meant to address segmentation label imbalances. Experimental results on the ISLES and the WMH datasets show improved segmentation performance when combining the proposed loss with the generalized Dice loss. All reviewers agree that this is nice work and that the manuscript is well written. All three reviewers recommend accept. As mentioned by the reviewers, results or a discussion regarding the use of the boundary loss by itself (and not only in combination with the generalized Dice loss) are missing and should be added to a final version. Also, if possible, it would be good to add results on the actual ISLES test set to illustrate how the proposed segmentation approach compares to other approaches for this segmentation task."""
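Because the abstract states that the boundary loss is a sum of linear functions of the softmax outputs, it can be sketched compactly: precompute a signed distance map of the ground truth and take its pixel-wise product with the predicted foreground probabilities. A minimal single-class sketch follows (the publicly released code is more general):

```python
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance(gt):
    """Signed distance to the ground-truth boundary: negative inside the
    object, positive outside (level-set representation of the contour)."""
    pos = gt.astype(bool)
    return distance_transform_edt(~pos) - distance_transform_edt(pos)

def boundary_loss(probs, gt):
    """Mean of foreground probabilities weighted by the precomputed signed
    distance map; low when probability mass sits inside the object.
    probs: (H, W) torch tensor of softmax foreground probabilities
    gt:    (H, W) binary numpy array
    """
    dmap = torch.from_numpy(signed_distance(gt)).to(probs)
    return (probs * dmap).mean()
```

Since the loss is linear in the network output and the distance map is fixed per image, it adds essentially no training overhead and can simply be summed with a regional loss such as generalized Dice.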
44,"""Physical Attacks in Dermoscopy: An Evaluation of Robustness for clinical Deep-Learning""","['Dermoscopy', 'Vulnerabilities of Deep Learning', 'Adversarial Examples', 'Physical World Attacks', 'Real Clinical Attacks', 'Skin Cancer']","""Deep Learning (DL)-based diagnostic systems are getting approved for usage as fully automatic or secondary opinion products. This development derives from the achievement of expert-level performance by DL across several applications (e.g. dermoscopy and diabetic retinopathy). While recent literature shows their vulnerability to imperceptible digital manipulation of the image data (e.g. through cyberattacks), the performance of medical DL systems under physical world attacks is not yet explored. This problem demands attention if we want to safely translate medical DL research into clinical practice. In this paper, we design the first small-scale prospective evaluation addressing the vulnerability of DL-dermoscopy systems under physical world attacks in the absence of knowledge about the underlying DL architecture. We publish the entire dataset of collected images as Physical Attacks on Dermoscopy (PADv1) for public use. The evaluation of susceptibility and robustness reveals that such attacks lead to an average accuracy loss of 31% across popular DL architectures. The DL diagnosis is changed by the attack in one out of two cases, even without any knowledge of the DL method.""","""While two reviewers have suggested accepting the paper, the third has a strongly negative vote. AnonReviewer1 and AnonReviewer2 have voted positively, but their arguments don't point out specific technical details of the method that are novel. They have more points in their ""cons"" than in their ""pros"". AnonReviewer2 has raised a serious concern about the motivation of the paper. In my view, I concur with several of the issues raised by all reviewers, and I present my views below. The paper presents an analysis of the robustness of deep-neural-network-based classifiers in clinical scenarios. The context of the perturbation is the so-called ""physical attack"". While I consider the analysis of robustness of the network to perturbations in the form of ""correlated noise"" (paintings, in the paper) interesting, the three enumerated motivations provided on Page 1 (2nd paragraph) aren't clear at all. For instance, if a rogue element wanted to break the diagnostic ability, why create the artifacts in this way only? Simply introducing any atypical object into the scene (a piece of paper, cloth, finger, pen, etc.) will cause the system to fail. Why worry about painting black dots that look like lesions or red dots that look like blood? Second, it is very easy for the clinical expert to check if the image is tampered with in the ways presented in the paper, so there is a good way to make the CAD system robust to attacks. Why study the robustness of the networks in the specific setting of physical attacks, and not in the generic context of the quality (generalizability, regularization) of the mapping? This is my main concern with the paper, and it is echoed by Reviewer 1 as well. The theme of the Methods section, ""We place small artifacts such as dots and lines in a region around the skin lesion and evaluate, whether the DL diagnosis changes"", doesn't seem comprehensive because it is restricted to very specific kinds of perturbations that don't seem to be useful to study in practice; certainly no more useful than studying the general problem of reliability of networks. 
The size of the proposed dataset, PADv1, is too small for the evaluation to be useful; a good dataset would have at least a few thousand images. This issue has also been raised by a reviewer. The paper only sheds light on a problem (which isn't new in the general context of the robustness of neural nets) but offers no solution."""
45,"""MRI k-Space Motion Artefact Augmentation: Model Robustness and Task-Specific Uncertainty""","['MRI', 'deep learning', 'CNNs', 'motion artefacts', 'augmentation']","""Patient movement during the acquisition of magnetic resonance images (MRI) can cause unwanted image artefacts. These artefacts may affect the quality of diagnosis by clinicians and cause errors in automated image analysis. In this work, we present a method for generating realistic motion artefacts from artefact-free data to be used in deep learning frameworks, to increase training appearance variability and ultimately make machine learning algorithms such as convolutional neural networks (CNNs) robust to the presence of motion artefacts. We model patient movement as a sequence of randomly-generated, demeaned, rigid 3D affine transforms which, by resampling artefact-free volumes, are then combined in k-space to generate realistic motion artefacts. We show that by augmenting the training of semantic segmentation CNNs with artefacted data, we can train models that generalise better and perform more reliably in the presence of artefacted data, with negligible cost to their performance on artefact-free data. We show that the performance of models trained using artefacted data on segmentation tasks on real-world test-retest image pairs is more robust. Finally, we demonstrate that measures of uncertainty obtained from motion augmented models reflect the presence of artefacts and can thus provide relevant information to ensure the safe usage of deep-learning extracted biomarkers.""","""This paper presents a motion-artifact-robust CNN segmentation algorithm. Instead of trying to reconstruct an uncorrupted image from the corrupted one (or its k-space), which is the common approach and may destroy important image information, the authors here make the CNN model robust to the presence of such artifacts. The problem is interesting to the community. The authors augment the training set with simulated data with motion artifacts. The results generally show improved performance on simulated and real-world images with motion artifacts. The authors also show that this kind of augmentation improves dropout-based uncertainty estimation. All reviewers agree that the paper is well written and the experiments are extensive and convincing. """
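The k-space simulation described in the abstract can be sketched as follows: resample the artefact-free volume under several random movements and let each motion state contribute a different block of k-space lines. The sketch below uses translations only, whereas the paper uses demeaned rigid 3D affine transforms; the block layout and parameter ranges are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform

def add_motion_artefacts(volume, n_movements=4, max_shift=3.0, rng=None):
    """Simulate motion artefacts: each random motion state 'owns' a
    contiguous slab of k-space lines (translation-only sketch)."""
    rng = rng or np.random.default_rng()
    h = volume.shape[0]
    k_corrupt = np.zeros(volume.shape, dtype=complex)
    bounds = np.sort(rng.integers(0, h, n_movements - 1))
    starts = np.concatenate(([0], bounds))
    ends = np.concatenate((bounds, [h]))
    for s, e in zip(starts, ends):
        shift = rng.uniform(-max_shift, max_shift, 3)              # one motion state
        moved = affine_transform(volume, np.eye(3), offset=shift, order=1)
        k_corrupt[s:e] = np.fft.fftn(moved)[s:e]                   # its k-space block
    return np.abs(np.fft.ifftn(k_corrupt))
```

Mixing k-space contributions from inconsistent object positions is exactly what produces the characteristic ghosting of motion-corrupted MRI, so the corrupted volumes can be paired with the original labels for augmentation.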
46,"""Unsupervised Lesion Detection via Image Restoration with a Normative Prior""",[],"""While human experts excel in and rely on identifying an abnormal structure when assessing a medical scan, without necessarily specifying the type, current unsupervised abnormality detection methods are far from being practical. Recently proposed deep-learning (DL) based methods were initial attempts showing the capabilities of this approach. In this work, we propose an outlier detection method combining image restoration with unsupervised learning based on DL. A normal anatomy prior is learned by training a Gaussian Mixture Variational Auto-Encoder (GMVAE) on images from healthy individuals. This prior is then used in a Maximum-A-Posteriori (MAP) restoration model to detect outliers. Abnormal lesions, not represented in the prior, are removed from the images during restoration to satisfy the prior, and the difference between the original and restored images forms the detection output of the method. We evaluated the proposed method on Magnetic Resonance Images (MRI) of patients with brain tumors and compared it against previous baselines. Experimental results indicate that the method is capable of detecting lesions in the brain and achieves improvement over the current state of the art.""","""The discussion period and the authors' efforts to clarify questions have led to a full acceptance agreement among the three reviewers. The paper's contribution builds on prior work, but the proposed new technical elements have clear value. The experimental validation of the unsupervised method on the common BRATS public database will set a precedent in the community."""
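The MAP restoration step can be sketched as a gradient-based optimisation that trades off data consistency against the learned normal-anatomy prior. In the sketch below, `neg_log_prior` is a hypothetical callable standing in for the GMVAE-based prior term, and the weighting and optimiser settings are assumptions.

```python
import torch

def map_restore(y, neg_log_prior, lam=1.0, steps=200, lr=0.1):
    """MAP restoration sketch: find an image x that is close to the input y
    yet likely under the normal-anatomy prior; lesions, which the prior
    cannot explain, end up in the residual |y - x|."""
    x = y.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = neg_log_prior(x) + lam * torch.sum((x - y) ** 2)   # prior + data term
        loss.backward()
        opt.step()
    restored = x.detach()
    return restored, (y - restored).abs()   # restored image, detection map
```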
47,"""VOCA: Cell Nuclei Detection In Histopathology Images By Vector Oriented Confidence Accumulation""",[],"""Cell nuclei detection is the basis for many tasks in Computational Pathology ranging from cancer diagnosis to survival analysis. It is a challenging task due to the significant inter/intra-class variation of cellular morphology. The problem is aggravated by the need for additional accurate localization of the nuclei for downstream applications. Most of the existing methods regress the probability of each pixel being a nuclei centroid, while relying on post-processing to implicitly infer the rough location of nuclei centers. To solve this problem we propose a novel multi-task learning framework called vector oriented confidence accumulation (VOCA) based on deep convolutional encoder-decoder. The model learns a confidence score, localization vector and weight of contribution for each pixel. The three tasks are trained concurrently and the confidence of pixels are accumulated according to the localization vectors in detection stage to generate a sparse map that describes accurate and precise cell locations. A detailed comparison to the state-of-the-art based on a publicly available colorectal cancer dataset showed superior detection performance and significantly higher localization accuracy.""","""The reviewers seem to agree that there is value in the multi-task formulation explored in this paper, particularly with the pseudo-formula map. I do agree with Reviewer 1 that the authors should discuss the fact that their improvement over the competing methods is marginal (which is fine), rather than stating that the improvement is considerable. I also encourage the authors to incorporate the many informative comments from the reviewers in the final version."""
48,"""Spherical CNN-Based Brain Tissue Classification Using Diffusion MRI""","['Brain tissue classification', 'diffusion MRI (dMRI)', 'Spherical CNN (SCNN)', 'Fiber Orientation Distribution Function (fODF)']","""We propose a method for classification of brain tissue using diffusion MRI data. First, a fiber orientation distribution function (fODF) is constructed at each voxel using Constrained Spherical Deconvolution (CSD) algorithm. Then, instead of secondary properties of reconstructed fODFs, because fODFs live on the sphere, we propose to classify each voxel using Spherical CNN (SCNN) without any transformation into other spaces such as a planar space. Our approach does not require a large number of subjects in contrast to streamline CNN based methods for structural MR image labeling. We present results on a dataset taken from HCP database to demonstrate that our method is suitable to the nature of diffusion data and furthermore it shows transfer capability among subjects.""","""Paper 27 - Rejection, due to practical limitations and issues on comparison This paper proposes to use spherical CNNs on diffusion data for brain tissue classification. * R#2 highlights novelty of an so(3) framework but mentions practical limitations, raises issues with comparisons (method and dataset). In a summary, why segmenting on dMRI and not on available structural MRIs. Authors indicates challenges in the availability of structural segmentations, and reminds scenarios may arise where structural images may be difficult to segment. * R#1 finds the method original but has concerns on its evaluation (comparison with other method, used dataset) Authors reminds the novelty is to enable learning of diffusion data in a correct SO(3) domain. R#1 reminds that if an SO(3) framework is proposed, its improvement over a standard Euclidean framework should be evaluated. * R#3 highlights the methodological novelty on learning diffusion data in SO(3), but has major concerns on methodology (input, architecture) and evaluation (lack of comparison). Authors clarify misunderstandings from the reviewer, and refers the reviewer to the original Spherical CNN paper. As a side node, Spherical CNNs were proposed in recent ICML and ICLR papers. The author contribution is on their use on diffusion imaging. Conclusion: Reviewers recommandation are reject-reject-strong reject - mostly due to a practical limitation (why segmenting on diffusion while structure is available) and lack of comparison (what is the improvement over a Euclidean version). All are absolutely certain of their decisions. Global recommendation towards Rejection - but may be considered as a borderline paper depending on paper quotas. Reviewers concern on a missing comparative study is, however, genuine and does require major changes in this submission. """
49,"""Collaborative slide screening for the diagnosis of breast cancer metastases in lymph nodes""","['fully convolutional network', 'few-shot learning', 'meta-learning', 'sparse annotation', 'lymph nodes', 'camelyon16', 'histopathological images']","""In this paper we assess the viability of applying a few-shot algorithm to the segmentation of Whole Slide Images (WSI) for human histopathology. Our ultimate goal is to design a deep network that could screen large sets of WSIs of sentinel lymph-nodes by segmenting out areas with possible lesions. Such network should also be able to modify its behavior from a limited set of examples, so that a pathologist could tune its output to specific diagnostic pipelines and clinical practices. In contrast, 'classical' supervised techniques have found limited applicability in this respect, since their output cannot be adapted unless through extensive retraining. The novel approach to the task of segmenting biological images presented here is based on guided networks, which can segment a query image by integrating a support set of sparsely annotated images which can also be extended at run time. In this work, we compare the segmentation performances obtained with guided networks to those obtained with a Fully Convolutional Network, based on fully supervised training. Comparative experiments were conducted on the public Camelyon16 dataset; our preliminary results are encouraging and show that the network architecture proposed is competitive for the task described.""","""There is a consensus among Reviewers about the recommendation for this paper that I support. The authors tried to explain their decisions, and it is appreciated but it is not sufficient for the acceptance of the paper."""
50,"""Learning interpretable multi-modal features for alignment with supervised iterative descent""","['Multi-Modal Features', 'Image Registration', 'Machine Learning']","""Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of both a very large trainable parameter space and often insufficient availability of expert supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet, image registration could also more directly benefit from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In contrast to most previous approaches, our model does not require full deformation fields as supervision but rather only small incremental descent targets generated from organ labels during training. By mapping the complex appearance to a common feature space in which update steps of a first-order Taylor approximation (akin to a regularised Demons iteration) match the supervised descent direction, we can train a CNN-model that learns interpretable modality invariant features. Our experimental results demonstrate that these features can be plugged into conventional iterative optimisers and are more robust than state-of-the-art hand-crafted features for aligning MRI and CT images.""","""This paper develops a cross-modality registration framework that decouples the feature learning and deformation estimation steps. All three reviewers agree that the paper presents a unique and interesting idea with application to medical imaging datasets. There was some concern that the method and results were preliminary. The authors did a good job of responding to the reviewer concerns by running additional experiments and clarifying the statements in their paper. They also plan to incorporate the reviewer comments into their final submission. I would recommend this paper for acceptance to MIDL. Based on the comment by Reviewer 1, I would also suggest that this paper be accepted for an oral presentation. """
51,"""Weakly Supervised Deep Nuclei Segmentation using Points Annotation in Histopathology Images""","['Nuclei segmentation', 'weak supervision', 'deep learning', 'Voronoi diagram', 'conditional random \x0cfield']","""Nuclei segmentation is a fundamental task in histopathological image analysis. Typically, such segmentation tasks require significant effort to manually generate pixel-wise annotations for fully supervised training. To alleviate the manual effort, in this paper we propose a novel approach using points only annotation. Two types of coarse labels with complementary information are derived from the points annotation, and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized to further refine the model without introducing extra computational complexity during inference. Experimental results on two nuclei segmentation datasets reveal that the proposed method is able to achieve competitive performance compared to the fully supervised counterpart and the state-of-the art methods while requiring significantly less annotation effort. Our code is publicly available.""","""This paper presents a straightforward and novel solution for nuclei segmentation by incorporating point annotations into CNN. The method is clearly described and the experimental results look reasonable. The authors have thoroughly replied to reviewers' questions and also provided additional experimental results thus addressing the critiques. """
52,"""Joint Learning of Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets""","['Brain image segmentation', 'multi-task learning', 'heterogeneous datasets', 'convolutional neural networks']","""Brain lesion and anatomy segmentation in magnetic resonance images are fundamental tasks in neuroimaging research and clinical practise. Given enough training data, convolutional neuronal networks (CNN) proved to outperform all existent techniques in both tasks independently. However, to date, little work has been done regarding simultaneous learning of brain lesion and anatomy segmentation from disjoint datasets. In this work we focus on training a single CNN model to predict brain tissue and lesion segmentations using heterogeneous datasets labeled independently, according to only one of these tasks (a common scenario when using publicly available datasets). We show that label contradiction issues can arise in this case, and propose a novel adaptive cross entropy (ACE) loss function that makes such training possible. We provide quantitative evaluation in two different scenarios, benchmarking the proposed method in comparison with a multi-network approach. Our experiments suggest ACE loss enables training of single models when standard cross entropy and Dice loss functions tend to fail. Moreover, we show that it is possible to achieve competitive results when comparing with multiple networks trained for independent tasks.""","""This paper introduces an adaptive cross entropy loss function to segment brain lesions from diverse datasets. There was some concern about the overall novelty of the paper; however, the authors did a good job addressing this point in their response. Two of three reviewers have recommended the paper for acceptance to MIDL. I concur with this recommendation. """
53,"""Learning from sparsely annotated data for semantic segmentation in histopathology images""","['Semantic segmentation', 'Loss balancing', 'Partially labelled data', 'Weak supervision', 'Class imbalance']","""We investigate the problem of building convolutional networks for semantic segmentation in histopathology images when weak supervision in the form of sparse manual annotations is provided in the training set. We propose to address this problem by modifying the loss function in order to balance the contribution of each pixel of the input data. We introduce and compare two approaches of loss balancing when sparse annotations are provided, namely (1) instance based balancing and (2) mini-batch based balancing. We also consider a scenario of full supervision in the form of dense annotations, and compare the performance of using either sparse or dense annotations with the proposed balancing schemes. Finally, we show that using a bulk of sparse annotations and a small fraction of dense annotations allows to achieve performance comparable to full supervision.""","""The authors propose a method for training models for semantic segmentation from sparsely annotated whole-slide histopathology images. To this end, the authors propose dedicated loss functions to learn from sparsely annotated data. Two out of three reviewers recommend acceptance of the paper. Both comment on the clearly described methods and observations of the paper. The reviewer recommending rejection is mainly concerned about the experimental weaknesses of the paper, stating that only 5 test images have been used, causing instable results. This comment has been rebutted by the authors. Because of the large size of whole-slide images, they can provide enough data to train deep models even if they just stem from 5 patients. The total number of cropped whole-slide image regions used in the paper is 49 regions of interest. Given that these are large slide images with much variation, from which 1,250 non overlapping tiles were used as the test set, the actual size actual size of the test set seems large enough to make some valid observations given the presented experiments. """
54,"""Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to IsoIntense Infant Brain MRI Segmentation""","['Deep learning', 'Convolutional Neural Network', 'FC-DenseNet', 'Segmentation']","""The most recent fast and accurate image segmentation methods are built upon fully convolutional deep neural networks. In particular, densely connected convolutional neural networks (DenseNets) have shown excellent performance in detection and segmentation tasks. In this paper, we propose new deep learning strategies for DenseNets to improve segmenting images with subtle differences in intensity values and features. In particular, we aim to segment brain tissue on infant brain MRI at about 6 months of age where white matter and gray matter of the developing brain show similar T1 and T2 relaxation times,thus appear to have similar intensity values on both T1- and T2-weighted MRI scans. Brain tissue segmentation at this age is, therefore, very challenging. To this end, we propose an exclusive multi-label training strategy to segment the mutually exclusive brain tissues with similarity loss functions that automatically balance the training based on class prevalence. Using our proposed training strategy based on similarity loss functions and patch prediction fusion we decrease the number of parameters in the network, reduce the complexity of the training process focusing the attention on less number of tasks, while mitigating the effects of data imbalance between labels and inaccuracies near patch borders. By taking advantage of these strategies we were able to perform fast image segmentation (less than 90 seconds per 3D volume), using a network with less parameters than many state-of-the-artnetworks (1.4 million parameters), overcoming issues such as 3D vs 2D training and large vs small patch size selection, while achieving the top performance in segmenting brain tissue among all methods tested in first and second round submissions of the isointense infant brain MRI segmentation (iSeg) challenge according to the official challenge test results. Our proposed strategy improves the training process through balanced training and by reducing its complexity while providing a trained model that works for any size input image, and is fast and more accurate than many state-of-the-art methods.""","""The paper proposes an exclusive multi-label multi-class training approach (through independent probability estimation) using automatically-adjusted similarity loss functions per class for classes that highly overlap in features. This overlap characterizes structural MRI with isointense white and gray matter tissues in 6-month old infants. The authors addressed reviewers' major comments and an extensive comparison with segmentation deep learning architectures was performed including 3D Unet, DenseVoxNet and DenseSeg. """
55,"""DavinciGAN: Unpaired Surgical Instrument Translation for Data Augmentation""",[],"""Recognizing surgical instruments in surgery videos is an essential process to describe surgeries, which can be used for surgery navigation and evaluation systems. In this paper, we argue that an imbalance problem is crucial when we train deep neural networks for recognizing surgical instruments using the training data collected from surgery videos since surgical instruments are not uniformly shown in a video. To address the problem, we use a generative adversarial network (GAN)-based approach to supplement insufficient training data. Using this approach, we could make training data have the balanced number of images for each class. However, conventional GANs such as CycleGAN and DiscoGAN, have a potential problem to be degraded in generating surgery images, and they are not effective to increase the accuracy of the surgical instrument recognition under our experimental settings. For this reason, we propose a novel GAN framework referred to as DavinciGAN, and we demonstrate that our method outperforms conventional GANs on the surgical instrument recognition task with generated training samples to complement the unbalanced distribution of human-labeled data.""","""This paper applies GAN onto surgical tool recognition which belongs to an interesting and increasingly important area of CAI. The major concern from reviewers is marginal novelty. I can see the method design is sound, with obtaining nice visual results of generated images. The writing is clear and easy to read. I would suggest the authors address the raised issues from reviewers' comments in the final version."""
56,"""High-quality segmentation of low quality cardiac MR images using k-space artefact correction""","['Cardiac MR Segmentation', 'Image Quality', 'Image Artefacts', 'Image Artefact Correction', 'Deep Learning', 'UK Biobank', 'Automap']","""Deep learning methods have shown great success in segmenting the anatomical and pathological structures in medical images. This success is closely bounded with the quality of the images in the dataset that are being segmented. A commonly overlooked issue in the medical image analysis community is the vast amount of clinical images that have severe image artefacts. In this paper, we discuss the implications of image artefacts on cardiac MR segmentation and compare a variety of approaches for motion artefact correction with our proposed method Automap-GAN. Our method is based on the recently developed Automap reconstruction method, which directly reconstructs high quality MR images from k-space using deep learning. We propose to use a loss function that combines mean square error with structural similarity index to robustly segment poor-quality images. We train the reconstruction network to automatically correct for motion-related artefacts using synthetically corrupted CMR k-space data and uncorrected reconstructed images. In the experiments, we apply the proposed method to correct for motion artefacts on a large dataset of 1,400 subjects to improve image quality. The improvement of image quality is quantitatively assessed using segmentation accuracy as a metric. The segmentation is improved from 0.63 to 0.72 dice overlap after artefact correction. We quantitatively compare our method with a variety of techniques for recovering image quality to showcase the influence on segmentation. In addition, we qualitatively evaluate the proposed technique using k-space data containing real motion artefacts.""","""Quality: Reviewers were impressed with the number of experiments and comparisons to other techniques. Some minor issues with missing error bars were addressed during the rebuttal phase. Clarity: Reviewers agree that the paper is clearly written. Some minor missing details and clarifications were added after they were pointed out during the review. Originality: The primary originality of the proposal lies in the assessment of the effect on segmentation, rather than image quality. Significance: The strength of the work is that it could improve segmentation, which is an important processing step for a number of further analyses. The weakness is that it does not necessarily provide image quality at the level clinicians expect, making it harder for them to accept the resulting segmentation. """
57,"""Fusing Unsupervised and Supervised Deep Learning for White Matter Lesion Segmentation""","['Deep Learning', 'Anomaly Detection', 'Supervised', 'Unsupervised', 'Semi-Supervised', 'White Matter Lesion Segmentation', 'Multiple Sclerosis']","""Unsupervised Deep Learning for Medical Image Analysis is increasingly gaining attention, since it relieves from the need for annotating training data. Recently, deep generative models and representation learning have lead to new, exciting ways for unsupervised detection and delineation of biomarkers in medical images, such as lesions in brain MR. Yet, Supervised Deep Learning methods usually still perform better in these tasks, due to an optimization for explicit objectives. We aim to combine the advantages of both worlds into a novel framework for learning from both labeled unlabeled data, and validate our method on the challenging task of White Matter lesion segmentation in brain MR images. The proposed framework relies on modeling normality with deep representation learning for Unsupervised Anomaly Detection, which in turn provides optimization targets for training a supervised segmentation model from unlabeled data. In our experiments we successfully use the method in a Semi-supervised setting for tackling domain shift, a well known problem in MR image analysis, showing dramatically improved generalization. Additionally, our experiments reveal that in a completely Unsupervised setting, the proposed pipeline even outperforms the Deep Learning driven anomaly detection that provides the optimization targets.""","""The paper presents an approach for white matter lesion segmentation by feeding results from unsupervised anomaly detection to train a supervised segmentation model. All reviewers are in agreement that with a few added details to the paper, the work would make a solid contribution to the conference. Pros: - New work in unsupervised and semi-supervised learning is an important topic for medical image analysis problems with limited labeled training data. - Paper is written clearly and easy to follow. - Experimental validation of methods is solid, utilizing a few datasets and experimental settings including a domain adaptation experiment. Cons: - Some details are missing, such as explicit illustration of the UAD model and hyperparameter weights. - Some notation and presentation of results in tables were confusing. - Some additional proof of the claims of the benefit of the GDL term would be useful."""
58,"""Semantic segmentation of cell nuclei and cytoplasms in microscopy images""","['Fully convolutional neural networks', 'semantic segmentation', 'deep learning', 'microscopy imaging', 'fluorescent imaging']","""Microscopy imaging of cell nuclei and cytoplasms is a powerfull technique for research, diagnosis and drug discovery. However, the use of fluorescent microscopy imaging for cell nuclei and cytoplasms labeling is time consuming and inconvenient for several reasons,thus there is a lack of fast and accurate methods for prediction of fluorescence cell nuclei and cytoplasms from bright-field microscopy imaging. We present a method for labeling bright-field images using convolutional neural networks. We investigate different convolutional neural network architectures for cell nuclei and cytoplasms prediction. Using the DeepLabv3+, we found relative impressive results with a 5-fold cross validation dice coefficient equal to 0.9503 as well as meaningful segmentation maps. This work shows proof of concept regarding microscopy fluorescence labeling of cell nuclei and cytoplasms using bright-field images""","""While the results look visually (and numerically) impressive, the three reviewers agree (and so do I) that this article does not go beyond applying existing techniques to a particular dataset, which limited comparison with other methods / datasets. """
59,"""On the Spatial and Temporal Influence for the Reconstruction of Magnetic Resonance Fingerprinting""","['Magnetic Resonance Fingerprinting', 'Image Reconstruction', 'Convolutional Neural Network', 'Quantitative Magnetic Resonance Imaging', 'Neuromuscular Diseases']","""Magnetic resonance fingerprinting (MRF) is a promising tool for fast and multiparametric quantitative MR imaging. A drawback of MRF, however, is that the reconstruction of the MR maps is computationally demanding and lacks scalability. Several works have been proposed to improve the reconstruction of MRF by deep learning methods. Unfortunately, such methods have never been evaluated on an extensive clinical data set, and there exists no consensus on whether a fingerprint-wise or spatiotemporal reconstruction is favorable. Therefore, we propose a convolutional neural network (CNN) that reconstructs MR maps from MRF-WF, a MRF sequence for neuromuscular diseases. We evaluated the CNN's performance on a large and highly heterogeneous data set consisting of 95 patients with various neuromuscular diseases. We empirically show the benefit of using the information of neighboring fingerprints and visualize, via occlusion experiments, the importance of temporal frames for the reconstruction.""","""The authors address the task of image reconstruction for magnetic resonance fingerprinting (MRF), presenting a study with almost 100 cases. The reviewers comment positively on the novelty of MRF (and, hence, the related spatio-temporal reconstruction task), the size of the data set, and the thoroughness of the evaluation, while criticizing technical specificities of the proposed reconstruction approach, such as the way how the complex signal is dealt with. They all recommend 'accept' and I would agree with they recommendation. """
|