Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
{"forum": "B1eh4mYIUB", "submission_url": "https://openreview.net/forum?id=B1eh4mYIUB", "submission_content": {"keywords": ["biological plausibility", "dopaminergic plasticity", "allele frequency", "neural net evolution"], "pdf": "/pdf/8b51f18d77751e93590a3ad5391b6151e1da8a46.pdf", "authors": ["Sruthi Gorantla", "Anand Louis", "Christos H. Papadimitriou", "Santosh Vempala", "Naganand Yadati"], "title": "Biologically Plausible Neural Networks via Evolutionary Dynamics and Dopaminergic Plasticity", "abstract": "Artificial neural networks (ANNs) lack in biological plausibility, chiefly because backpropagation requires a variant of plasticity (precise changes of the synaptic weights informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain. Here we propose that backpropagation can happen in evolutionary time, instead of lifetime, in what we call neural net evolution (NNE). In NNE the weights of the links of the neural net are sparse linear functions of the animal\u2019s genes, where each gene has two alleles, 0 and 1. In each generation, a population is generated at random based on current allele frequencies, and it is tested in the learning task through minibatches. The relative performance of the two alleles of each gene is determined, and the allele frequencies are updated via the standard population genetics equations for the weak selection regime. We prove that, under assumptions, NNE succeeds in learning simple labeling functions with high probability, and with polynomially many generations and individuals per generation. NNE is also tested on MNIST with encouraging results. Finally, we explore a further version of biologically plausible ANNs (replacing backprop) inspired by the recent discovery of dopaminergic plasticity. ", "authorids": ["sruthi@comp.nus.edu.sg", "anandl@iisc.ac.in", "christos@columbia.edu", "vempala@gatech.edu", "y.naganand@gmail.com"], "paperhash": "gorantla|biologically_plausible_neural_networks_via_evolutionary_dynamics_and_dopaminergic_plasticity"}, "submission_cdate": 1568211763780, "submission_tcdate": 1568211763780, "submission_tmdate": 1572525578991, "submission_ddate": null, "review_id": ["HJxfX8JdPB", "S1eEM1EiPB", "r1xM1ArMvS"], "review_url": ["https://openreview.net/forum?id=B1eh4mYIUB¬eId=HJxfX8JdPB", "https://openreview.net/forum?id=B1eh4mYIUB¬eId=S1eEM1EiPB", "https://openreview.net/forum?id=B1eh4mYIUB¬eId=r1xM1ArMvS"], "review_cdate": [1569351193860, 1569566476472, 1568984538111], "review_tcdate": [1569351193860, 1569566476472, 1568984538111], "review_tmdate": [1570047557395, 1570047536094, 1570047534495], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper56/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper56/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper56/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eh4mYIUB", "B1eh4mYIUB", "B1eh4mYIUB"], "review_content": [{"evaluation": "2: Poor", "intersection": "4: High", "importance_comment": "The largest motivation seems to be the biological implausibility of backpropagation. However, many studies have shown that all aspects of backprop can be, and most likely are, realized in biological networks (error calculation, weight transport, etc. - e.g., Lillicrap & Santoro, 2019; Akrout et al. 2019). 
Therefore, this motivation is not enough alone.", "clarity": "3: Average readability", "technical_rigor": "2: Marginally convincing", "intersection_comment": "To what extent human abilities are represented at the genomic level vs. learned within a lifetime is certainly an interesting biological question, but it\u2019s applicability to machine learning has yet to be shown convincingly.", "rigor_comment": "While the technical proofs are useful, the performance on MNIST is not particularly convincing. Additionally, the results seem quite noisy and would benefit from many repeated runs.\n\nWhat would be more interesting is to address how learning could take place through a combination of evolution- and life-time mechanisms, as opposed to a purely evolutionary-time mechanism. \n\nThe inclusion of learning based on dopaminergic plasticity seems quite arbitrary. Additionally, the authors cite one biological paper, but methods like this have been used for decades in one way or another (e.g. https://openreview.net/forum?id=r1lrAiA5Ym)", "importance": "3: Important", "title": "Neural networks can be instantiated and trained in evolutionary-time as sparse linear functions of genes", "category": "Common question to both AI & Neuro", "clarity_comment": "The approach is certainly introduced in interesting way and the methods are reasonably easy to follow."}, {"title": "review", "importance": "2: Marginally important", "importance_comment": "I don't believe this paper makes a compelling enough argument to cause readers to rethink what is learned in evolutionary rather than developmental time.\n", "rigor_comment": "I believe the algorithm functions as proposed. I suspect it is not a reasonable model for learning genetic influence on synapses.", "clarity_comment": "I understood the core idea, but felt that the idea itself had not been carefully thought through.\n", "clarity": "3: Average readability", "evaluation": "2: Poor", "intersection_comment": "This was mainly a proposal for evolutionary learning of biological network weights. The idea was tested using an artificial neural network, but was not otherwise strongly connected to machine learning.", "intersection": "3: Medium", "comment": "I did not find the proposal that individual synapses are learned via evolution to be compelling. This claim is seemingly contradicted by strong experimental evidence of learning at all scales during animals lifespans, and by an observation of the number of bits in the genome vs. the number of synapses in the brain. It would require stronger evidence, and discussion of the potential barriers, for me to take this proposal more seriously.", "technical_rigor": "3: Convincing", "category": "Common question to both AI & Neuro"}, {"title": "Interesting line of work, but not convincing enough results and not clear enough", "importance": "3: Important", "importance_comment": "It is interesting to think about how evolution interacts with learning that takes place during an organism\u2019s lifetime. While there is fruitful work to be done here, the motivation made by the authors needs a little more work. For instance, what types of learning do we expect to be encoded in an animal\u2019s genes as opposed to its acquired synapses? Claiming that it\u2019s more \u2018biologically plausible\u2019 is not good enough: there are many plausible models that do not resort to evolution over generations. 
", "rigor_comment": "The fact that in Fig 1b it takes a while for the n=2000 model to learn anything suggests there may be significant variability in the results when repeated many times. Some repeated runs of the algorithm for a given number of genes would be helpful. \n\nThe addition of the dopaminergic neural nets is not well enough explained and introduced to warrant inclusion. It appears just as a random addition to the model. It\u2019s not clear how the specific dopamine-related timing result they mention is incorporated in their model. More generally, reward-modulated plasticity is very well explored, why not just use these results?\n\nA result of 83% test accuracy with a non-linear network on MNIST is not so encouraging\u2026 given that linear networks can perform better. Some simpler task might be worth investigating to get a better intuition for what in this model works and what doesn\u2019t, before trying MNIST.\n", "clarity_comment": "There are many small points and omitted details that make the presented work hard to evaluate. For instance:\n\n- Is the beta at line 67 the same as 1-gamma at line 119?\n- Line 53: by \u20180/1 allele distribution\u2019 you mean a deterministic distribution?\n- I\u2019m confused about the set S. Is it a small set of 2000 elements of MNIST, sampled uniformly from all of MNIST, or only from digits 0-4? The text seems to suggest S is just 2000 elements from MNIST, while figure 1 presents results from digits 0-4 or the full set. Citing training results of 83% on S and 79% test on full MNIST, in the text, are both referring to Fig 1b? It took a number of passes through the text to figure out what was being plotted in relation to the analysis that was done.\n- The model could be more explicitly defined, though I know space is limited in this submission\n", "clarity": "2: Can get the general idea", "evaluation": "2: Poor", "intersection_comment": "There is room here for these type of models to benefit both theoretical neuroscience, and AI. If something shows more promise on something like MNIST it may be of benefit to AI. If some of the details about how an evolutionary algorithm interacts with \u2018within-lifetime\u2019 learning then it could benefit neuro.", "intersection": "4: High", "comment": "Definitely an interesting line of work. But the results need to be presented more clearly to really evaluate the worth of this particular model. \n\nSome demonstration that it does indeed work according to the theory provided (e.g. on simpler regression problems) would be useful to get the idea off the ground.\n\nThe title and results are also a combination of two ideas (the evolutionary algorithm, and the sign matched 'dopamine' inspired updates). I would focus on one idea at a time or explain how they do go together", "technical_rigor": "2: Marginally convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"} |