{"forum": "B1eiJyrtDB", "submission_url": "https://openreview.net/forum?id=B1eiJyrtDB", "submission_content": {"title": "Improved Generalization Bound of Permutation Invariant Deep Neural Networks", "authors": ["Akiyoshi Sannai", "Masaaki Imaizumi"], "authorids": ["akiyoshi.sannai@riken.jp", "imaizumi@ism.ac.jp"], "keywords": ["Deep Neural Network", "Invariance", "Symmetry", "Group", "Generalization"], "TL;DR": "We theoretically prove that a permutation invariant property of deep neural networks largely improves its generalization performance.", "abstract": "We theoretically prove that a permutation invariant property of deep neural networks largely improves its generalization performance. Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks. Numerous methodologies have been developed and they achieve great performances, however, understanding a mechanism of the performance is still a developing problem. In this paper, we derive a theoretical generalization bound for invariant deep neural networks with a ReLU activation to clarify their mechanism. Consequently, our bound shows that the main term of their generalization gap is improved by $\\sqrt{n!}$ where $n$ is a number of permuting coordinates of data. Moreover, we prove that an approximation power of invariant deep neural networks can achieve an optimal rate, though the networks are restricted to be invariant. To achieve the results, we develop several new proof techniques such as correspondence with a fundamental domain and a scale-sensitive metric entropy.", "pdf": "/pdf/0f876144c871f8c31d4370cef2e7a60d9631da60.pdf", "paperhash": "sannai|improved_generalization_bound_of_permutation_invariant_deep_neural_networks", "original_pdf": "/attachment/6391e68e0a29b7e507547c1ba67e765e6e344d0a.pdf", "_bibtex": "@misc{\nsannai2020improved,\ntitle={Improved Generalization Bound of Permutation Invariant Deep Neural Networks},\nauthor={Akiyoshi Sannai and Masaaki Imaizumi},\nyear={2020},\nurl={https://openreview.net/forum?id=B1eiJyrtDB}\n}"}, "submission_cdate": 1569439459015, "submission_tcdate": 1569439459015, "submission_tmdate": 1577168254624, "submission_ddate": null, "review_id": ["rJx0u1p19B", "BkeJKjc0Kr", "HklWqgGrcr"], "review_url": ["https://openreview.net/forum?id=B1eiJyrtDB&noteId=rJx0u1p19B", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=BkeJKjc0Kr", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=HklWqgGrcr"], "review_cdate": [1571962742264, 1571887990554, 1572311176765], "review_tcdate": [1571962742264, 1571887990554, 1572311176765], "review_tmdate": [1575502153668, 1572972463130, 1572972463042], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1481/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper1481/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper1481/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eiJyrtDB", "B1eiJyrtDB", "B1eiJyrtDB"], "review_content": [{"experience_assessment": "I have published in this field for several years.", "rating": "1: Reject", "review_assessment:_checking_correctness_of_experiments": "N/A", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #3", "review": "This paper provides generalization bounds for permutation invariant neural 
networks where the learning problem is invariant to the permutation of input data. \n\nUnfortunately, the technical value of the content and its novelty is very limited since the proof reduces to a very basic argument that counts invariances (which is simply n! where n is the number of invariant dimensions) and uses a standard approach to give a generalization bound. Therefore, I don't think the results help us better understand permutation invariant neural networks. \n\nUnfortunately, the paper has several typos and mistakes as well. Another non-technical issue is that apparently the authors have removed the ICLR format and reduced the margins to fit the paper in 10 pages, which is against the spirit of the page limit.\n\n***********************************\n\nAfter author rebuttals:\n\nAfter reading the authors' response and reading the proofs, I realize that the formal proof is not trivial and requires more work than I assumed. However, I do not understand how this work can improve our understanding of permutation invariant networks. Therefore, I think the contributions are not significant enough for publication and my evaluation remains the same.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I do not know much about this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "N/A", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper derives a generalization bound for permutation invariant networks. The main idea is to prove that the bound is inversely proportional to the square root of the number of possible permutations of the input. The key result is Theorem 3, which bounds the covering number of a neural network (defined under an approximation control bound, Thm 4) using the number of permutations. The paper proves the theorem by showing that the space of input permutations can be reduced to group actions over a fundamental domain, and deriving a bound for the covering number of the fundamental domain (Lemma 1), which is then extended to derive the same for the neural network setting. For the permutation invariance setting, the fundamental domain is obtained via the sorting operator. \n\nPros:\n1. The paper appears to be mathematically rigorous, and at the same time, is straightforward to follow, with useful intuitions provided whenever required. \n2. The provided theoretical result perhaps extends the work on the universal approximation theorem for permutation invariant networks in Sannai et al. and Maron et al., 2019. Further, the generalization bound for permutation invariance is new to my knowledge.\n\nCons:\n1. While the proof appears to be novel for permutation invariance per se, I do not think the main findings in this paper or the proof approach are sufficiently novel. For example, generalization bounds under invariances have been explored previously; perhaps the most related to this paper is [a] below, which already shows (in a similar vein as this paper) that the bound decreases proportionally to 1/\\sqrt(T), where T is the number of invariances used. 
While that work uses affine transformations of the input from a base space for the invariances (which this paper calls a fundamental domain), the current paper uses permutation invariance and thus gets the bound proportional to 1/sqrt(n!). In the context of this prior work, the contribution of this paper appears incremental. The paper should cite this work and contrast against the results and proof methods in it.\n\n[a] Generalization Error of Invariant Classifiers, Sokolic et al., ICML 2017.\n\n2. The paper has several typos and grammatical errors throughout, which are easily fixable though!\n\nOverall, this paper is technically rigorous, and novel in its very specific context of deriving the generalization bounds for permutation invariant networks. However, in the broader context of invariances in general and their bounds, the contribution appears to be marginal. "}, {"experience_assessment": "I do not know much about this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review_assessment:_checking_correctness_of_experiments": "N/A", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper presents a derivation of a generalization bound for neural networks designed specifically to deal with permutation invariant data (such as point clouds). The heart of the contribution is that the bound includes a 1/n! (i.e. 1 / (n-factorial)) factor in the major term, where n is the number of permutable elements in a data example (think: number of points in a point cloud). This term goes some way towards making the bound tight.\n\nThe 1/n! factor in the bound may be an interesting development but the novelty does appear to be limited. Also, the authors fail to discuss that -- as part of that same term -- there is a factor: (1 / (epsilon^p)), where p is the dimension of the input and epsilon is a small error term. As p is proportional to n, and epsilon is quite small, this term could well dominate the factorial in many practical settings. A discussion of the relation between these terms is appropriate and seems to be missing.\n\nClarity:\nIn general the paper is fairly well written, but there are multiple instances of missing articles and strange idiom violations (e.g. p. 4, remark 1: \"such the bound\" versus \"such a bound\").\n\nMore seriously, the proof of Lemma 1 was quite hard to follow (esp. the second paragraph). I would suggest putting less emphasis on the relatively straightforward construction of the sorting mechanism in Propositions 2 and 3, and using the space to more clearly detail the proof of Lemma 1, which is, after all, the heart of the contribution.\n\nI also found the proof of Proposition 4 too confusing to easily follow. What is the interpretation of the indices (1, ..., K) on the functions?\n\nFinally, I would have liked to see some interpretation of the findings in a discussion section (or in an extended conclusion). \n\nMinor issues:\n\n- The first sentence of the abstract is difficult to parse and does not seem like an accurate assessment of the contribution of the paper. \n\n- Paragraph 2 of the introduction presents a sequence of arguments whose logic seems inconsistent to me. There is a drift from a discussion of generalization of neural networks to a mention of work on the very distinct topic of the representational capacity of neural networks (i.e. 
universal approximation property of neural networks). The linking text \"To tackle the quesiton, ...\" is not appropriate.\n\n- Unlike Example 1, Example 2 (p.3) is not helpful in motivating permutation invariant neural networks. The definition makes direct reference to Proposition 2, which will not be introduced for another 3 pages. \n\n- In Sec. 4.1, it seems like a phi symbol is used when I believe a null symbol was intended.\n\n- Proposition 3: \"max( z_1, z_1 )\" should be \"max( z_1, z_2 )\", with the adjustment carrying through to the other side of the equals sign.\n"}], "comment_id": ["HklGNpG2jH", "BJgTNhz3ir", "rkxB0sz2iH", "Hkl4hU95jr", "ryxk9laYjH", "Hyg8wsedor", "rkeUodgdsr", "BygmiLgujB"], "comment_cdate": [1573821737647, 1573821492936, 1573821388732, 1573721771943, 1573666951264, 1573550941749, 1573550238282, 1573549723497], "comment_tcdate": [1573821737647, 1573821492936, 1573821388732, 1573721771943, 1573666951264, 1573550941749, 1573550238282, 1573549723497], "comment_tmdate": [1573821802262, 1573821492936, 1573821388732, 1573721771943, 1573666951264, 1573550941749, 1573550238282, 1573549723497], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/AnonReviewer3", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1481/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "We add a discussion about $\\varepsilon^p$.", "comment": "We appreciate your critical comment.\nIn our updated paper, we add a description of this point after Theorem 2. In summary, the growth of $n! \\varepsilon^p=n! \\varepsilon^{nD}$ in $n$ is sufficiently fast for any $\\varepsilon$ and $D$. We would appreciate it if you could check this point. "}, {"title": "We add a comparison with the paper (Sokolic et al., ICML 2017) in our paper.", "comment": "We clarify the difference between our paper and Sokolic et al., ICML 2017 in Section 5 of the updated version of our paper. We would be glad if you could check the paragraph."}, {"title": "We updated the submitted paper.", "comment": "The updated points are as follows:\n- We add a description to show the technical novelty and intuition of our paper. (Sections 5 and 6)\n- We cite the paper Sokolic+ (2017) and discuss differences between it and our paper. (Section 5)\n- We show that the term $\\varepsilon^p$ does not cause a problem for large $n$. (Section 3)\n- We correct several sentences and typos.\n- We remove the mistakenly loaded package and modify the format to follow the template.\n"}, {"title": "Thank you for your response.", "comment": "Thank you for agreeing on the novelty of our work.\n\nRegarding significance, we are confident that it is not easy to develop a proof that derives the bound improved by n!. Technically speaking, to obtain the improved bound, we have to find n! 
subsets of functions WITHOUT overlapping with each other. To this end, we introduce the notion of the fundamental domain and prove that the volume of the overlap has measure zero (specifically, Lemma 1 in our paper). Without our techniques, the improvement by n! is folklore, but not a theoretical analysis. Hence, we believe that it is significant to develop such a technique and show the improved bound.\nIf you do not agree with the importance of our achievement, please give us references which show the improvement rigorously."}, {"title": "Thanks for your response", "comment": "I agree with you that the generalization of invariant DNNs has not been studied before. However, my main concern is the significance of the work. Basically, covering numbers count the number of different functions in the hypothesis class, where the notion of difference depends on some metric. Now, if the input has invariance, one can take advantage of that and reduce this total number of different functions by a factor of n! Even though this very specific problem has not been studied before, it is not clear to me that this contribution is significant enough to be accepted at ICLR."}, {"title": "Thank you for your accurate comment. ", "comment": "Thank you for your accurate comment. In particular, we appreciate your evaluation of our technical contributions.\n\nWe also thank you for introducing the previous research [a]. We confirmed that their main result is very similar to ours. The advantages of our results are as follows. First, we construct explicit invariant deep neural networks, which provide practical and useful methods. One of them is a new network that can achieve the same objectives as DeepSets (Zaheer 2018). Since the paper [a] is written in an abstract framework, our paper can provide useful knowledge. Second, our analysis is not limited to classification but can be applied to general learning methods including regression. Third, our results provide a more specific analysis of permutation invariant networks, which can be used for future extensions and analysis."}, {"title": "Could you give me some evidence or references?", "comment": "Thank you for your comment.\n\nAs mentioned in our paper, we developed several novel techniques as follows: (i) we prove a correspondence between invariant DNNs and DNNs on the fundamental domain, and (ii) we derive a covering number for a functional space which is sensitive to the volume of the domain of the functions. To the best of our knowledge, such techniques are not commonly used in the analysis of deep neural networks.\n\nCould you give us some evidence or references which support your opinion that our analysis follows very basic arguments? As your comment does not provide such clear evidence, we cannot find a way to discuss it with you.\n\nRegarding the format of our paper, we mistakenly loaded the \"fullpage\" package, hence the margins of our paper changed. On this point, we have no refutation and will modify it.\n"}, {"title": "Though the rate of $\\varepsilon$ is critical, the generalization bound can be tight with large $n$.", "comment": "Thank you for your critical opinion.\n\nAs you mentioned, the order of $\\varepsilon$ is very important, thus we will add a discussion. We expect that our generalization bound may become loose when $n$ is not sufficiently large. 
In contrast, when $n$ is reasonably large, our bound becomes tight since $n!$ grows more rapidly than $1/\\varepsilon^p$.\n\nRegarding clarity, we will revise our description and correct the typos."}], "comment_replyto": ["HklWqgGrcr", "BkeJKjc0Kr", "B1eiJyrtDB", "ryxk9laYjH", "rkeUodgdsr", "BkeJKjc0Kr", "rJx0u1p19B", "HklWqgGrcr"], "comment_url": ["https://openreview.net/forum?id=B1eiJyrtDB&noteId=HklGNpG2jH", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=BJgTNhz3ir", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=rkxB0sz2iH", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=Hkl4hU95jr", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=ryxk9laYjH", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=Hyg8wsedor", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=rkeUodgdsr", "https://openreview.net/forum?id=B1eiJyrtDB&noteId=BygmiLgujB"], "meta_review_cdate": 1576798724356, "meta_review_tcdate": 1576798724356, "meta_review_tmdate": 1576800912161, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This work proves a generalization bound for permutation invariant neural networks (with ReLU activations). While it appears the proof is technically sound and the exact result is novel, reviewers did not feel that the proof significantly improves our understanding of model generalization relative to prior work. Because of this, the work is too incremental in its current form.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1eiJyrtDB&noteId=vfONePFDJh"], "decision": "Reject"}
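
The exchange between Review #2 and the authors turns on whether the 1/n! improvement in the bound's main term can be swamped by the (1/epsilon^p) factor, where p is proportional to n (p = nD). The sketch below is not taken from the paper or the authors' code; it is only a minimal numerical illustration of that trade-off, and the values of eps and D in it are illustrative assumptions.

```python
import math

# Minimal numerical sketch of the trade-off discussed above -- NOT code from the
# paper or the authors. Up to constants, the contested part of the bound behaves
# like eps^{-p} / n! with p = n * D, so we track its logarithm:
#     log(eps^-p / n!) = p * log(1/eps) - log(n!).
# The values of eps and D below are illustrative assumptions.
D = 2  # hypothetical dimension per permutable coordinate, so p = n * D

for eps in (0.5, 0.1):
    for n in (2, 5, 10, 20, 100, 500):
        p = n * D
        log_term = p * math.log(1.0 / eps) - math.lgamma(n + 1)  # lgamma(n+1) = log(n!)
        print(f"eps={eps:4.2f}  n={n:4d}  log(eps^-p / n!) = {log_term:10.2f}")

# For fixed eps and D the factorial eventually dominates (the term turns negative
# at roughly n > e * (1/eps)**D), but for small eps this happens only at fairly
# large n -- the concern raised by Review #2 and the regime the authors appeal to.
```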