Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
Example record (OpenReview forum H1gXZLzxeE, https://openreview.net/forum?id=H1gXZLzxeE): a MIDL 2019 submission together with its reviews, author responses, meta-review and decision.

Title: Exploring local rotation invariance in 3D CNNs with steerable filters
Authors: Vincent Andrearczyk, Julien Fageot, Valentin Oreiller, Xavier Montet, Adrien Depeursinge
Keywords: local rotation invariance, convolutional neural network, steerable filters, 3D texture
Venue: International Conference on Medical Imaging with Deep Learning (MIDL 2019), Full Paper Track, London, United Kingdom, 8-10 July 2019
PDF: /pdf/ee1fe7cc3764445fe786cc01df6d124ccac04879.pdf

Abstract: Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications, in particular in medical imaging, where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNNs) were recently proposed, LRI has been investigated very little in the context of deep learning. We use trainable 3D steerable filters in CNNs in order to obtain LRI with directional sensitivity, i.e. non-isotropic filters. Pooling across orientation channels after the first convolution layer releases the constraint on finite rotation groups assumed in several recent works. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations. We only convolve the input volume with a set of Spherical Harmonics (SHs) modulated by trainable radial supports and directly steer the responses, resulting in a drastic reduction of trainable parameters and of convolution operations, as well as avoiding the approximations due to interpolation of rotated kernels. The proposed method is evaluated and compared to standard CNNs on 3D texture datasets, including synthetic volumes with rotated patterns and pulmonary nodule classification in CT. The results show the importance of LRI in CNNs and the need for a fine sampling of rotations.
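To make the filter parameterization described in the abstract concrete, here is a minimal sketch of a 3D kernel assembled as spherical harmonics modulated by radial profiles. It is an editorial illustration rather than the authors' code: the helper name sh_kernel, the choice of degrees and orders, the linear interpolation of the radial profile and the kernel size are all assumptions, and the paper's method additionally steers the responses analytically instead of materializing rotated kernels.

```python
# Hypothetical sketch: a 3D kernel k(x) = sum_i h_i(||x||) * Re[Y_{n_i}^{m_i}(x/||x||)],
# i.e. spherical harmonics modulated by radial profiles whose samples are the trainable part.
import numpy as np
from scipy.special import sph_harm

def sh_kernel(size, h_profiles, degree_orders):
    """Build a size^3 kernel from radial profiles and (degree n, order m) pairs.

    h_profiles    : list of 1D arrays, radial profile samples at integer radii 0, 1, 2, ...
    degree_orders : list of (n, m) pairs, one per profile
    """
    c = (size - 1) / 2.0                                  # kernel centre
    z, y, x = np.mgrid[:size, :size, :size] - c           # voxel coordinates centred at 0
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)                              # azimuthal angle (scipy convention)
    phi = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))  # polar angle
    kernel = np.zeros((size, size, size))
    for h, (n, m) in zip(h_profiles, degree_orders):
        radii = np.arange(len(h))
        radial = np.interp(r, radii, h, right=0.0)        # h_i(||x||), zero beyond the sampled support
        kernel += radial * np.real(sph_harm(m, n, theta, phi))
    return kernel

# Example: one isotropic (n=0) and one first-degree (n=1, m=0) component in a 7^3 kernel.
k = sh_kernel(7,
              [np.array([1.0, 0.5, 0.2, 0.0]), np.array([0.0, 1.0, 0.5, 0.1])],
              [(0, 0), (1, 0)])
print(k.shape)  # (7, 7, 7)
```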
Three reviews were received (all strong accepts), followed by author responses and a meta-review; they are reproduced below.

Review 1 (AnonReviewer3, note Byx75tjuQN)

Pros: The paper describes a very nice approach for dealing with rotation-invariant texture/feature detection in convolutional neural networks. From a classical point of view, rotation-invariant convolution layers can be obtained with filters that are isotropic. This is a rather limiting viewpoint, and the current submission nicely extends the class of invariant CNN layers by defining a locally rotation invariant filter as a roto-translation lifting convolution (see the work on group convolution networks) directly followed by max-pooling over rotations (see the sketch below). Such layers are based on non-isotropic filters and can be implemented efficiently by defining convolution kernels in polar coordinates, relying on spherical harmonics.

The theory is well explained and contains interesting details that are also valuable from a practical point of view. Although not validated on very large datasets (and with a minimal architecture design), the experiments are carefully set up and convincingly demonstrate the potential of the proposed way of dealing with rotationally invariant feature/texture descriptors.

My recommendation is to accept the submission.
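The lifting-then-pooling structure described in the Pros above can be sketched as follows. This is a simplified stand-in for illustration only: it materializes rotated copies of a single kernel at randomly sampled orientations and max-pools the responses voxel-wise, whereas the paper steers spherical-harmonic responses (avoiding kernel interpolation) and uses a deliberate sampling of orientations; the helper names and the number of orientations are assumptions.

```python
# Hypothetical illustration of local rotation invariance via orientation max-pooling:
# convolve a volume with one kernel at many sampled 3D orientations, then take the
# voxel-wise maximum over the orientation channels.
import numpy as np
from scipy.ndimage import affine_transform, convolve
from scipy.spatial.transform import Rotation

def rotate_kernel(kernel, R):
    """Resample `kernel` rotated by the 3x3 rotation matrix R about its centre."""
    c = (np.array(kernel.shape) - 1) / 2.0
    # affine_transform maps output coordinate o to input coordinate matrix @ o + offset;
    # with matrix = R.T and offset = c - R.T @ c it samples kernel(R^-1 (o - c) + c),
    # i.e. the kernel rotated by R about its centre.
    return affine_transform(kernel, R.T, offset=c - R.T @ c,
                            order=1, mode='constant', cval=0.0)

def lri_feature_map(volume, kernel, n_orientations=24):
    """Orientation channels followed by max-pooling across orientations (LRI response map)."""
    rotations = Rotation.random(n_orientations).as_matrix()
    responses = np.stack([convolve(volume, rotate_kernel(kernel, R), mode='nearest')
                          for R in rotations])
    return responses.max(axis=0)

# Toy usage, e.g. with the kernel `k` built in the previous sketch:
# out = lri_feature_map(np.random.rand(32, 32, 32), k)
```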
Cons: Overall I really enjoyed reading the paper. In this section I provide some minor comments and suggestions.

Small fixes:

Typo on page 2: "first convolution layer, what exploits" -> "first convolution layer, which exploits".

I found the introduction of Section 2.4 confusing in the transition from h_i being a 1D function (from \mathbb{R} to \mathbb{R}) to its voxelized version (from \mathbb{R}^3 to \mathbb{R}) with the mention of an isotropy constraint. I do not see this as a constraint, since by definition \rho = \lVert \mathbf{x} \rVert is isotropic. What I think is important here is that the radial profile extends all the way to the corners of the 3D kernel, and this then sets the number of trainable parameters in h_i (with unit voxel spacing). Perhaps this can be clarified?

Figure 2 (which nicely illustrates the above) raises the following question: the last weights in the h_i vector only affect the corners of the 3D kernel, so some "voxel features" only appear at diagonal rotations; does this not affect the invariance you aim for? The support of the kernel is not isotropic and therefore it is not truly rotation invariant. True rotation invariance could be achieved by limiting \rho \leq (c-1)/2 (so the corners are always zero).

I really appreciated the short comment on the angular Nyquist frequency; these things are good to keep in mind when working with the actual code.

Page 6, regarding padding: is this zero padding?

Minor comments:

[Just a remark on a possible interesting extension (intro of Section 2).] Regarding the action of SO(3) on R^3 and S^2: for an example of efficient spherical-harmonic implementations of SO(3) acting on axially symmetric functions on S^2, see also citation [2] below.

After Eq. (1), you could identify "I*f(R\cdot)" as a lifting group convolution (see e.g. Cohen et al. or [5] below), followed by sub-group pooling (max over rotations).

Page 5, after Eq. (6): a nice property of expanding each filter in the same basis is that you can pre-filter the input with your set of basis functions separately and then combine the results with the corresponding coefficients to create all feature maps (see the sketch after this review).

The second-to-last paragraph of Section 4 sounds contradictory and could be rewritten: "the improvement ... of SH convolution is limited ... yet a significant increase in accuracy".

Finally, I would like to mention some closely related work on both steerable filters and local rotation invariance which could be addressed in the submission.

For work on (optimal) 3D steerable filters in texture analysis: see e.g. [1] (and references therein) for a recent overview and toolkit for 3D steerable image filtering, and [2] for (optimal) steerable filter construction/fitting for axially symmetric texture detection in 3D medical image data. In [2], additional axial symmetry is exploited to further reduce the number of SH coefficients, and it makes rotation over your \gamma redundant. This essentially boils down to relying on Fourier transforms/irreducible representations on the sphere S^2 (the quotient SO(3)/SO(2)); see e.g. the book by Chirikjian and Kyatkin [3] and [4].

Following up on your discussion paragraph on page 8 regarding LRI in relation to the work in the group-CNN context by Cohen et al.: in addition to the already cited work by Weiler et al. (2017), the works described in [5] and [6] are closely related to the current proposal in two ways. (1) In these papers, local invariance is also constructed by "lifting convolutions" (creating feature maps in a higher-dimensional position-rotation space) followed by max-pooling over rotations. In these works this is done with additional group convolution layers in between the lifting and rotation-pooling layers (creating local rotation invariance over the net receptive field size). (2) In [5] and Weiler et al. (2017), the authors describe a behavior similar to that observed in Table 1: a higher angular resolution improves performance. A main advantage of your method is its very high efficiency in terms of trainable parameters; however, it should be noted that neither in [5] nor in the method of Weiler et al. does the number of trainable parameters increase with angular resolution when only a lifting layer directly followed by rotation pooling is considered.

The work by Mallat et al. is very much concerned with both local and global rotation invariance in image data (see e.g. the Ph.D. thesis by L. Sifre [7]).

[1] Skibbe, H., and Reisert, M. "Spherical tensor algebra: a toolkit for 3D image processing." Journal of Mathematical Imaging and Vision 58.3 (2017): 349-381.
[2] Janssen, M., et al. "Design and processing of invertible orientation scores of 3D images." Journal of Mathematical Imaging and Vision 60.9 (2018): 1427-1458.
[3] Kyatkin, A., and Chirikjian, G. Engineering Applications of Noncommutative Harmonic Analysis: With Emphasis on Rotation and Motion Groups. CRC Press, 2000.
[4] Duits, R., et al. "Fourier transform on the homogeneous space of 3D positions and orientations for exact solutions to linear parabolic and (hypo-)elliptic PDEs." arXiv preprint arXiv:1811.00363 (2018).
[5] Bekkers, E., Lafarge, M., et al. "Roto-translation covariant convolutional networks for medical image analysis." MICCAI 2018.
[6] Zhou, Yanzhao, et al. "Oriented response networks." CVPR 2017.
[7] Sifre, Laurent, and Stéphane Mallat. Rigid-Motion Scattering for Image Classification. Ph.D. thesis, 2014.

Rating: 4 (strong accept). Confidence: 3 (the reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature). Recommended for the special issue and for oral presentation.
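The reviewer's pre-filtering remark can be illustrated with a small sketch, under the assumption that all filters share one fixed basis: the volume is convolved once per basis kernel, and every feature map is then a linear combination of the cached responses. The function and argument names are hypothetical.

```python
# Editorial sketch of the remark after Eq. (6): when every filter is expanded in the same
# basis, convolve the input once with each basis kernel, then form each feature map as a
# linear combination of these cached responses using that filter's coefficients.
import numpy as np
from scipy.ndimage import convolve

def feature_maps_from_basis(volume, basis_kernels, coeff_matrix):
    """basis_kernels: list of K kernels; coeff_matrix: (num_filters, K) coefficients."""
    basis_responses = np.stack([convolve(volume, b, mode='nearest')
                                for b in basis_kernels])              # shape (K, D, H, W)
    # Each output feature map f is sum_k coeff_matrix[f, k] * basis_responses[k].
    return np.tensordot(coeff_matrix, basis_responses, axes=([1], [0]))  # (num_filters, D, H, W)
```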
Review 2 (AnonReviewer2, note r1l68nx6XE)

Pros:

Summary: In this work, a CNN architecture that has both local and global rotation invariance is introduced. Furthering recent advances in group CNNs for rotational invariance, the proposed work uses steerable filters based on spherical harmonics to obtain an efficient sampling of 3D rotations. The model is evaluated on synthetic data and on a lung nodule classification task. The performance is shown to be superior, with a substantial reduction in the number of parameters compared to CNNs.

- The use of steerable filters to avoid approximating filter rotations and to introduce local rotation invariance is a solid contribution.
- The experiments clearly show the importance of introducing local rotation invariance for both the synthetic data and the lung nodule classification task. The 3D CNN model outperforms in a couple of instances, but with almost two orders of magnitude more parameters (a rough count is sketched after this review).
- The paper is very well written; the discussion section is very insightful. Figure 1 is a great visual abstract of the work.

Cons (minor comments):

- There appears to be a substantial increase in accuracy with increasing M for the synthetic data. A similar trend is observed for the lung nodule classification data reported in Table 2; however, M = 96 is not reported there. A comment on why this is the case would be useful.
- The literature survey could include one more closely related G-CNN work that also uses max-pooling over different rotations, in what the authors call the projection layer [1].

[1] Bekkers, Erik J., et al. "Roto-translation covariant convolutional networks for medical image analysis." MICCAI 2018. https://arxiv.org/pdf/1804.03393.pdf

Rating: 4 (strong accept). Confidence: 2 (the reviewer is fairly confident that the evaluation is correct). Recommended for oral presentation.
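The parameter-efficiency point raised in this review (and echoed in the meta-review below) follows from the parameterization: a steerable filter learns one radial profile per spherical-harmonic component, and the count does not grow with the number of sampled orientations M, whereas a dense 3D kernel learns every voxel. A rough, illustrative count with arbitrarily chosen kernel size, maximum degree and radial sampling (these numbers are assumptions, not the paper's):

```python
# Rough per-filter parameter count: a dense c^3 kernel learns every voxel, whereas an
# SH-parameterized filter learns one radial profile of length R per (degree, order) pair,
# independent of the number of sampled orientations M.
c = 7                       # kernel size (voxels per side), chosen for illustration
R = (c - 1) // 2 + 1        # radial samples at unit spacing, centre to face
max_degree = 2              # spherical-harmonic degrees 0..2
n_sh = sum(2 * n + 1 for n in range(max_degree + 1))   # number of (n, m) pairs = 9

dense_params = c ** 3                   # 343 trainable weights
steerable_params = n_sh * R             # 9 * 4 = 36 trainable weights
print(dense_params, steerable_params)   # the gap widens quickly with larger kernels
```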
Review 3 (AnonReviewer1, note rJgpuaqTXV)

Pros: The authors investigate locally rotation-invariant feature extraction for 3D texture classification. Such locally rotation-invariant feature extraction plays an essential role in the classification of soft, non-rigid medical volumetric data. In the proposed method, locally rotation-invariant feature extraction is designed as a 3D steerable filter convolution followed by max-pooling, integrated into a convolutional neural network (CNN). Using pre-designed kernels, i.e. steerable filters, the number of CNN parameters to be learned during training is reduced dramatically, so the method is computationally efficient even as a data-driven approach. The proposed method is technically novel.

In the experiments, the authors evaluate the proposed method on phantom data and real clinical data, comparing it with a standard 3D CNN architecture. On the phantom data, they clarify the relation between the number of filters, the number of directions and the classification results. On the real clinical data, they demonstrate the superiority of the proposed method.

The manuscript is well structured, and the experimental results are convincing.

Cons: Just one comment: a figure showing, as an example, which filters respond strongly to an input 3D texture would be welcome for visual interpretation.

Rating: 4 (strong accept). Confidence: 3 (the reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature). Recommended for the special issue and for oral presentation.

Author responses

To Review 3 (AnonReviewer1): Thank you for the great feedback. Regarding the figure suggestion, it is difficult, and can be counter-intuitive, to show that a learned filter responds to a given pattern, particularly in a discrimination task where the filter should only highlight the difference between two patterns. An intuition of the LRI response is provided in Fig. 1.
To Review 1 (AnonReviewer3): Thank you for the valuable comments and feedback. We have removed the ambiguous "constraint of being isotropic" in Section 2.4, as it is indeed isotropic by definition. It is true that the corner effect may affect the rotation invariance; this is an important point that we will address in the discussion. The invariance is, however, already subject to approximation and discretization. We pad with edge values; this has very little effect because, for the nodule classification experiment in which we use the padding, the spatial pooling is performed inside the regions of interest, which are generally not at the edge of the volume. We have also taken the minor comments into consideration for the final version of the paper. We appreciate the deep understanding and valuable suggestions!

To Review 2 (AnonReviewer2): Thank you for the great comments. It is indeed relevant to include M = 96 for the lung nodule classification experiment, and we will add it to the final version. It was originally not included due to time and memory issues with the original implementation (which has since been greatly improved) and because we noticed a plateau in performance for larger values of M on this experiment. We have added the reference to [1] in the introduction. However, we do not think that the projection layer defined in [1] differs from the max-pooling over orientations in a classic G-CNN; it does not introduce the local rotation invariance proposed in this paper, which is obtained by pooling after the first convolution layer.

Meta-review (Acceptance Decision): 3D CNNs generally involve a large number of parameters, which leads to practical limitations in their applicability. This work demonstrates how the number of parameters can be substantially reduced, while maintaining similar accuracy, by building local rotation invariance, a property that is frequently required in medical image analysis, into the network architecture. I propose to follow the recommendation of the reviewers, who were all quite excited about this approach (three times "strong accept"). However, I would welcome it if, in the final version, the authors could address the question to what extent the reduction in the number of parameters indeed goes along with savings in time and memory. It seemed to me that their method involves a non-negligible computational overhead, which I expect to at least partly negate its benefits in practice.

Decision: Accept
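Returning to the corner effect raised in Review 1 and acknowledged in the authors' response: the reviewer's suggested remedy, restricting the radial support to \rho \leq (c-1)/2, amounts to masking the cubic kernel to its inscribed ball. A minimal sketch of that masking (the helper name is hypothetical, and `k` refers to the kernel built in the first sketch):

```python
# Editorial sketch of the reviewer's suggested fix: zero out every voxel whose radius
# exceeds (c - 1) / 2 (c = kernel side length), so the effective support is the inscribed
# ball and no weight appears only at "diagonal" orientations of the kernel.
import numpy as np

def mask_to_inscribed_ball(kernel):
    size = kernel.shape[0]
    half = (size - 1) / 2.0                     # this is the reviewer's (c - 1) / 2
    z, y, x = np.mgrid[:size, :size, :size] - half
    radius = np.sqrt(x**2 + y**2 + z**2)
    return np.where(radius <= half, kernel, 0.0)

# e.g. k_masked = mask_to_inscribed_ball(k)
```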