AMSR / conferences_raw / neuroai19 / neuroai19_HJghQ7YU8H.json
{"forum": "HJghQ7YU8H", "submission_url": "https://openreview.net/forum?id=HJghQ7YU8H", "submission_content": {"TL;DR": "We develop an approach to parcellate a hidden layer in DNN into functionally related groups, by applying spectral coclustering on the attribution scores of hidden neurons.", "keywords": ["DNN", "modularity"], "pdf": "/pdf/87ad96fc01ff4f2cdc79e49ddf7608c8e576286d.pdf", "authors": ["Jialin Lu", "Martin Ester"], "title": "Checking Functional Modularity in DNN By Biclustering Task-specific Hidden Neurons", "abstract": "While real brain networks exhibit functional modularity, we investigate whether functional mod- ularity also exists in Deep Neural Networks (DNN) trained through back-propagation. Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint groups of task-specific hidden neurons with the help of relatively well- studied neuron attribution methods. By saying task-specific, we mean the hidden neurons in the same group are functionally related for predicting a set of similar data samples, i.e. samples with similar feature patterns.\nWe argue that such groups of neurons which we call Functional Modules can serve as the basic functional unit in DNN. We propose a preliminary method to identify Functional Modules via bi- clustering attribution scores of hidden neurons.\nWe find that first, unsurprisingly, the functional neurons are highly sparse, i.e., only a small sub- set of neurons are important for predicting a small subset of data samples and, while we do not use any label supervision, samples corresponding to the same group (bicluster) show surprisingly coherent feature patterns. We also show that these Functional Modules perform a critical role in discriminating data samples through ablation experiment. ", "authorids": ["luxxxlucy@gmail.com", "ester@sfu.ca"], "paperhash": "lu|checking_functional_modularity_in_dnn_by_biclustering_taskspecific_hidden_neurons"}, "submission_cdate": 1568211747681, "submission_tcdate": 1568211747681, "submission_tmdate": 1572446983471, "submission_ddate": null, "review_id": ["Sye5YDafwB", "SygrnmfDvr", "SklVE_0YvH"], "review_url": ["https://openreview.net/forum?id=HJghQ7YU8H&noteId=Sye5YDafwB", "https://openreview.net/forum?id=HJghQ7YU8H&noteId=SygrnmfDvr", "https://openreview.net/forum?id=HJghQ7YU8H&noteId=SklVE_0YvH"], "review_cdate": [1569015682147, 1569297324534, 1569478699591], "review_tcdate": [1569015682147, 1569297324534, 1569478699591], "review_tmdate": [1570047565583, 1570047560927, 1570047549541], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper17/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper17/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper17/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJghQ7YU8H", "HJghQ7YU8H", "HJghQ7YU8H"], "review_content": [{"evaluation": "1: Very poor", "intersection": "2: Low", "importance_comment": "If they exist, finding functionally different groups of units in a DNN and using them to generate hypotheses in the brain is an important goal. The authors apply their technique to the second layer of a network trained to recognize digits. No evidence is provided that this network is functionally similar to the brain or that their technique would generalize to more complex \u2018sensory networks\u2019. 
Thus it is difficult to tell from the paper whether their potentially important technique is important.\n", "clarity": "2: Can get the general idea", "technical_rigor": "2: Marginally convincing", "intersection_comment": "They do mention that this method of clustering could be applied to neurons. Does not mention how this might work in practice or what would be gained or why it should be preferred over other clustering methods that have been applied to neurons. ", "rigor_comment": "Many decisions were not well motivated, justified or described at least in brief. For example: choice of attribution method, choice of network and layer, number of clusters. The space of parameters and network should have been explored more thoroughly to be convincing. Or at least justification for why their methods did not require more thorough testing.", "comment": "A potentially interesting and important technique for clustering. More motivation for the decisions in their study of this algorithm. Figures need to be improved in terms of contrast and relative sizes of figure labels. ", "importance": "2: Marginally important", "title": "Novel technique for clustering neurons but not carefully evaluated or motivated.", "category": "AI->Neuro", "clarity_comment": "Unclear whether units under study from MNIST were rectified or not. This would have a big difference on attribution depending on how sparse units were.\nFigure 1a and c are not well scaled and the contrast is very low. A log scale might have helped and larger labels. \nIn line 65 it is not clear what 'the constructed matrix' is (neuron-sample, e, a). \nSimilarly figure legends/labels in Figure 2 C are overlapping and tiny. Not vectorized so pixellated."}, {"evaluation": "2: Poor", "intersection": "2: Low", "importance_comment": "It is potentially useful to show if functional modularity exists in artificial neural networks. The authors claim they find functional modules by applying biclustering to the neuron-sample matrix but current analyses primarily evaluated on MNIST dataset are not clear and limited to tell whether the existence of functional modules.", "clarity": "2: Can get the general idea", "technical_rigor": "2: Marginally convincing", "intersection_comment": "Although the authors claim the usage of neuron attribution and biclustering can be applied to real neurons, there is no explanation of how exactly to apply. Other than that, this paper is mainly focusing on artificial neurons.", "rigor_comment": "There is no explanation of the selection of tested layer and no comparison between results of other layers. Similarly, the choice of the number of biclusters is also confusing and no explanation is provided. In figure 2c, it looks like multiple biclusters contribute to the same class, then why not change the number of biclusters and evaluate more results.", "comment": "It could be an interesting direction in terms of looking for functional modules in artificial neural networks via clustering methods but more rigorous analyses and clearer explanations are needed.", "importance": "2: Marginally important", "title": "interesting direction but more rigorous analyses are needed", "category": "AI->Neuro", "clarity_comment": "All figures are too small to follow and understand, especially figure 2c which has labels overlapped. 
The grey tone color of neuron-sample matrix makes figure 1 hard to visualize."}, {"title": "Improving interpretability by analyzing bulk-behavior of subsets of neuron", "importance": "4: Very important", "importance_comment": "The technique could potentially improve the interpretability of DNN models for feature understanding", "rigor_comment": "Some rigor is needed in the experimental evaluation. The DeepBind study is not clearly explained and lacks technical rigor or has not been explained clearly. The claim \"DeepBind model learns something from raw data\" is too vague.", "clarity_comment": "Technical details were omitted due to space constraints. The main idea and outline of implementation and methodology were explained. Other works considering attribution and importance scores have been cited clearly.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "A technique is discussed with application towards analyzing DNN models which are prevalent in machine learning/AI. This technique aims to make models and features learned by DNNs more interpretable, and the principle is inspired by biological hypotheses of functional modularity.", "intersection": "4: High", "comment": "Several questions are unanswered. The experiments demonstrate preliminary investigations, at best. More rigorous experimentation and crossvalidation are needed.\nIs the phenomenon observed across different datasets with more diversity?\nIs the possibility of memorization/overfitting ruled out? Is that even a concern, or is memorization what functional modularity implicitly refers to?\nDoes the fact that the network is convolutional help in identifying the biclusters because now the neurons are more structured than fully-connected networks?\nThe choice of attribution scores etc. is not clearly justified.", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
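The record above is one entry of the AMSR conferences_raw dump: submission metadata under "submission_content", the three reviews under "review_content" (with parallel "review_id" / "review_url" / "review_writers" arrays), empty "comment_*" fields, null "meta_review_*" fields, and the final "decision". A minimal sketch of loading and summarizing it, assuming the file has been saved locally under the filename shown in the path at the top (adjust the path for your setup):

```python
import json

# Load one AMSR record. Assumes the JSON above is stored locally as
# neuroai19_HJghQ7YU8H.json; change the path to wherever the dataset lives.
with open("neuroai19_HJghQ7YU8H.json") as f:
    record = json.load(f)

submission = record["submission_content"]
print(submission["title"])
print(submission["TL;DR"])

# Each element of review_content is one review; ratings are stored as strings
# such as "2: Poor", so split off the leading integer for numeric use.
for review, url in zip(record["review_content"], record["review_url"]):
    rating = int(review["evaluation"].split(":")[0])
    print(f"{rating}  {review['title']}  {url}")

print("Decision:", record["decision"])
```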
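The TL;DR describes the method itself: build a neuron-sample attribution matrix for one hidden layer and bicluster it, so that each bicluster pairs a group of hidden neurons with the samples they are jointly important for (a candidate "Functional Module"). The sketch below illustrates only that clustering step using scikit-learn's SpectralCoclustering; the attribution matrix is a random non-negative stand-in, and the layer, attribution method, and number of biclusters are placeholder assumptions rather than the authors' settings (the reviews above note that the paper leaves these choices unjustified).

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Stand-in for the neuron-sample attribution matrix A[i, j] = attribution of
# hidden neuron j for sample i. In the paper this would come from a neuron
# attribution method applied to a chosen hidden layer; here it is random,
# non-negative data just to make the sketch runnable.
rng = np.random.default_rng(0)
attribution = np.abs(rng.normal(size=(1000, 128)))  # 1000 samples x 128 neurons

# Spectral co-clustering jointly groups rows (samples) and columns (neurons);
# each resulting bicluster is a candidate "Functional Module".
n_modules = 10  # number of biclusters is a free design choice here
model = SpectralCoclustering(n_clusters=n_modules, random_state=0)
model.fit(attribution)

sample_groups = model.row_labels_     # bicluster id assigned to each sample
neuron_groups = model.column_labels_  # bicluster id assigned to each neuron
for k in range(n_modules):
    neurons_k = np.flatnonzero(neuron_groups == k)
    samples_k = np.flatnonzero(sample_groups == k)
    print(f"module {k}: {neurons_k.size} neurons, {samples_k.size} samples")
```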