Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
File size: 8,446 Bytes
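
Each row of the dataset is a single JSON object describing one OpenReview submission together with its reviews (see the sample record below). As a minimal sketch, assuming the data ships as a JSON Lines file named `data.jsonl` (the file name is a placeholder, not part of this card), it can be loaded with the generic JSON loader from the Hugging Face `datasets` library:

```python
# Sketch only: "data.jsonl" is a hypothetical path, not the actual file name.
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")
print(ds.column_names)  # e.g. forum, submission_content, review_content, decision
print(len(ds))          # number of submission records
```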
{"forum": "B1eU47t8Ir", "submission_url": "https://openreview.net/forum?id=B1eU47t8Ir", "submission_content": {"keywords": ["Reinforcement learning", "multi-agent learning", "spiking neurons"], "pdf": "/pdf/849829b6daf10306da35c1b140d10e71925b94f3.pdf", "authors": ["Sneha Aenugu", "Abhishek Sharma", "Sasikiran Yelamarthy", "Hananel Hazan", "Philip.S.Thomas", "Robert Kozma"], "title": "Reinforcement learning with a network of spiking agents", "abstract": "Neuroscientific theory suggests that dopaminergic neurons broadcast global reward prediction errors to large areas of the brain influencing the synaptic plasticity of the neurons in those regions (Schultz et al.). We build on this theory to propose a multi-agent learning framework with spiking neurons in the generalized linear model (GLM) formulation as agents, to solve reinforcement learning (RL) tasks. We show that a network of GLM spiking agents connected in a hierarchical fashion, where each spiking agent modulates its firing policy based on local information and a global prediction error, can learn complex action representations to solve RL tasks. We further show how leveraging principles of modularity and population coding inspired from the brain can help reduce variance in the learning updates making it a viable optimization technique.", "authorids": ["saenugu@cs.umass.edu", "abhishekshar@umass.edu", "syelamarthi@umass.edu", "hananel@hazan.org.il", "pthomas@umass.edu", "rkozma@cs.umass.edu"], "paperhash": "aenugu|reinforcement_learning_with_a_network_of_spiking_agents"}, "submission_cdate": 1568211757587, "submission_tcdate": 1568211757587, "submission_tmdate": 1572546361065, "submission_ddate": null, "review_id": ["Byx5SIYtDr", "HJgYv7csvS", "BkxsJRMJOr"], "review_url": ["https://openreview.net/forum?id=B1eU47t8Ir¬eId=Byx5SIYtDr", "https://openreview.net/forum?id=B1eU47t8Ir¬eId=HJgYv7csvS", "https://openreview.net/forum?id=B1eU47t8Ir¬eId=BkxsJRMJOr"], "review_cdate": [1569457729852, 1569592160854, 1569824226863], "review_tcdate": [1569457729852, 1569592160854, 1569824226863], "review_tmdate": [1570047544715, 1570047533331, 1570047531279], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper41/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper41/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper41/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eU47t8Ir", "B1eU47t8Ir", "B1eU47t8Ir"], "review_content": [{"title": "Nice study of adapting neuroscience principles for solving RL tasks", "importance": "3: Important", "importance_comment": "The ideas presented here are novel as they show how neuroscience principles such as modularity and population coding can be adapted to achieve successful learning for RL tasks. ", "rigor_comment": "The technical information provided is sufficient for following the paper. However I wish the materials under 2.4. were explained in further depth in terms of equations defined in 2.1 - 2.3. ", "clarity_comment": "The paper is easy to follow and overall well written. ", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The paper is well positioned in the intersection of AI and neuroscience, and shows how knowledge from neuroscience continues to inspire new novel frameworks for AI. 
", "intersection": "4: High", "comment": "The authors develop multi-agent learning framework with spiking neurons to solve reinforcement learning tasks. Authors adapt generalized linear model (GLM) as spiking agent and use local learning rules modulated by global reward prediction error to train the network. In addition, authors complement the framework with brain inspired modular architecture and population coding to reduce the variance in learning updates. Authors applied the framework to two RL tasks to demonstrate its potential as viable optimization technique. \n\nThe value in this work is that authors adapt brain inspired principles such as spiking neuron, modularity and population coding into their framework and demonstrate each principle contributes to learning in RL tasks. The successful adaptation of neuroscience principles in this work is a good example of how neuroscience can promote a novel framework for AI. ", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}, {"title": "A good illustration of the effect of architecture when learning with global error signals", "importance": "3: Important", "importance_comment": "Making the best use possible of global error signals may be very important for solving challenging machine learning tasks with a neurally plausible algorithm.", "rigor_comment": "The experiments are clear and demonstrate the efficacy of the two proposed variance reduction techniques.\n\nHowever, I see two technical issues that may limit the scope of this work:\n\n1) It seems the number of timesteps simulated is very low (5 for gridworld, if I'm interpreting \"spike train length of 5\" correctly), which makes it unclear how the networks described relate to event-driven spiking networks operating in continuous time, since the representation sparsity is so different and the information throughput of the cells in the paper seems limited. For example, using ensembles may have less of an effect on networks with more information throughput per cell. It would be good to compare an ensemble of 10 networks to a single network with 10 times as many cells, and a single network running with 10 times the temporal granularity.\n\n2) The networks tested were very small, and Fig. 3b shows cells struggling to learn from 200 inputs. This makes me unsure how well the proposed approach can scale. \n\nAlso, it seems the networks in Fig. 3a may not have finished training.", "clarity_comment": "The writing is generally quite good, though there are a few parts that are either a bit imprecise or hard to parse. 
e.g.\n\n- I don't understand the sentence that begins at the end of page 2.\n- The neuroscience of \"modular structures\" invoked in section 2.4 is vague.\n- The jump from \"population coding\" to \"ensemble model\" seems a bit unmotivated.\n- I don't understand the first sentence of the cartpole task description.\n- The term \"computational power\", in reference to [Maass, 1997] is vague.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "Learning to accomplish standard tasks in the ML::RL community using global error signals, with neurally inspired variance reduction techniques seems like a good fit for this workshop.", "intersection": "5: Outstanding", "technical_rigor": "2: Marginally convincing", "category": "Neuro->AI"}, {"category": "Common question to both AI & Neuro", "title": "Nice idea; might not scale", "importance": "3: Important", "evaluation": "3: Good", "intersection": "4: High", "rigor_comment": "The idea of learning spike train generation through RL is interesting, however, it is questionable if this method will scale well to larger systems. In particular, as its number of communication partners increases, a neuron has to deal with a more non-stationary environment, making learning in systems of non-trivial size hard.\nThus, it is unlikely that a framework of this type would work in systems with more realistic sizes (i.e. number of neurons on the order of neurons in biological brains.) It would be nice if some results for larger systems (e.g. tens of neurons) could be shown...", "clarity": "4: Well-written", "intersection_comment": "The presented work is at the intersection of Neuro + AI", "technical_rigor": "2: Marginally convincing", "clarity_comment": "-", "importance_comment": "The idea of considering individual cells as \"firing policies\" might advance learning in spike-based systems."}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"} |
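
To work with a record like the one above directly, here is a small parsing sketch using only the standard library. The file name is again a placeholder; the field names (`submission_content`, `review_content`, `decision`, and the per-review `title` and `evaluation` keys) are taken from the sample record shown above.

```python
# Minimal parsing sketch; "data.jsonl" is a hypothetical file name.
import json

with open("data.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        submission = record["submission_content"]
        print(submission["title"], "->", record["decision"])
        # review_content holds one structured review form per reviewer.
        for review in record["review_content"]:
            print("  ", review.get("evaluation"), "|", review.get("title"))
```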