Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:

Example record:
{"forum": "B1eWVQYULB", "submission_url": "https://openreview.net/forum?id=B1eWVQYULB", "submission_content": {"TL;DR": "We propose an extension to LFADS capable of inferring spike trains to reconstruct calcium fluorescence traces using hierarchical VAEs.", "keywords": ["calcium imaging", "LFADS", "variational autoencoders", "dynamics", "recurrent neural networks"], "authors": ["Luke Y. Prince", "Blake A. Richards"], "title": "Inferring hierarchies of latent features in calcium imaging data", "abstract": "A key problem in neuroscience and life sciences more generally is that the data generation process is often best thought of as a hierarchy of dynamic systems. One example of this is in-vivo calcium imaging data, where observed calcium transients are driven by a combination of electro-chemical kinetics where hypothesized trajectories around manifolds determining the frequency of these transients. A recent approach using sequential variational auto-encoders demonstrated it was possible to learn the latent dynamic structure of reaching behaviour from spiking data modelled as a Poisson process. Here we extend this approach using a ladder method to infer the spiking events driving calcium transients along with the deeper latent dynamic system. We show strong performance of this approach on a benchmark synthetic dataset against a number of alternatives.", "authorids": ["luke.prince@utoronto.ca", "blake.richards@mcgill.ca"], "pdf": "/pdf/f239eaae246a4e1f1240eda247ebffc09102a194.pdf", "paperhash": "prince|inferring_hierarchies_of_latent_features_in_calcium_imaging_data"}, "submission_cdate": 1568211752640, "submission_tcdate": 1568211752640, "submission_tmdate": 1572469932257, "submission_ddate": null, "review_id": ["SJez2rKLwB", "B1ecexdYPH", "rJgq8R7ovH"], "review_url": ["https://openreview.net/forum?id=B1eWVQYULB¬eId=SJez2rKLwB", "https://openreview.net/forum?id=B1eWVQYULB¬eId=B1ecexdYPH", "https://openreview.net/forum?id=B1eWVQYULB¬eId=rJgq8R7ovH"], "review_cdate": [1569260969710, 1569452018465, 1569566290194], "review_tcdate": [1569260969710, 1569452018465, 1569566290194], "review_tmdate": [1570047562952, 1570047553419, 1570047536311], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper29/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper29/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper29/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eWVQYULB", "B1eWVQYULB", "B1eWVQYULB"], "review_content": [{"evaluation": "3: Good", "intersection": "4: High", "importance_comment": "Calcium imaging represents an important technological step forwards in our ability to record large populations of neurons. However, the increase in spatial resolution comes with a decrease in temporal resolution. 
Developing new algorithms for inferring the underlying neural activity at timescales shorter than fluorescence decay dynamics is crucial to taking full advantage of this data and the scientific insights it can lead to.", "clarity": "3: Average readability", "technical_rigor": "4: Very convincing", "intersection_comment": "The Ladder LFADS model is a combination of several recent neural network architectures that address an existing neuroscience problem in a new way.", "rigor_comment": "The proposed algorithm is a rigorous model-based approach to the problem.", "comment": "Strengths:\nThe Ladder LFADS model does a good job of uncovering underlying dynamics and neural firing rates in simulated data. The combined approach outperforms a two-step approach where inference of latent dynamics follows a deconvolution step. \n\nAreas for improvement:\nAnother useful benchmark would be replacing LFADS with a simple linear dynamical system. Though this would clearly fail in the case of Lorenz attractor dynamics, it seems like a natural comparison.\n\nIt will also be interesting to see how well this method works on real neural data from different brain regions. In regions like motor cortex, where dynamical systems models have been used for many years now, it seems this model will perform well. It is unclear how the model will perform, however, in sensory areas like visual cortex where activity is arguably more related to external inputs than internal dynamics.\n\nI would also be curious to know how well the model works without using the ladder component; this seems like another natural comparison that could further motivate the modeling choice.", "importance": "4: Very important", "title": "LFADS+fluorescence decay model effectively learns dynamics of spiking activity", "category": "AI->Neuro", "clarity_comment": "The exposition was mostly clear, but I found it difficult to decipher the relationship between LFADS, stacked VAEs and ladder VAEs (as I have not heard of the last two before). Is the ladder feature necessary for this model to work, or does it merely improve the results? The motivation for using the VLAE approach is that it learns disentangled hierarchical features, but again it's not clear to me exactly how that is relevant here. Is it because the latent dynamics need to be disentangled from the calcium dynamics? A few clarifying sentences in the introduction of section 2 could go a long way to clearing up these ambiguities for me."}, {"title": "Applications of VAEs to synthetic calcium imaging traces", "importance": "5: Astounding importance", "importance_comment": "Calcium imaging allows the simultaneous visualization of activity from thousands of neurons. Recordings from wider fields can lead to a lower SNR. Developing unsupervised methods that can reveal underlying dynamics or otherwise denoise the data is of critical importance for neuroscience.", "rigor_comment": "The results in this paper look promising, but they come from (I believe) entirely synthetic data.\n- Has the model been applied to real traces, and is there a way to reliability evaluate the quality of the output (spike train inference, underlying dynamics)?\n- Line 75: How much white noise was added to the synthetic traces? 
How robust was the model to noise?\n- Line 75: When generating the data, how does adding some noise to the time-constant affect the model?\n- Line 69: How was the parameter for L2 regularization determined?", "clarity_comment": "The paper is well-composed for the most part, I have some minor comments:\n- Line 17: period missing in \"brain activity Pandarinath\"\n- Lines 34-35: \"We choose the VLAE approach ... in contrast to stacked VAEs or ladder VAEs\". Is VLAE not a ladder VAE?\n- Line 52: Reference for GCAMP6 time constant\n- Line 65: open bracket )", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "This paper is an example of applying AI (previously published unsupervised variational autoencoders) to the analysis of calcium imaging traces, a common neuroscience recording technique.", "intersection": "4: High", "comment": "This paper combines two previously published unsupervised VAE-based models and adapts them to infer the underlying dynamics of a synthetic calcium imaging dataset. While the results look excellent on the synthetic data, it's difficult to evaluate how great this method would be without seeing its performance on real traces.", "technical_rigor": "3: Convincing", "category": "AI->Neuro"}, {"title": "Simple modification of well known model; more work is needed to pinpoint the advantages.", "importance": "2: Marginally important", "importance_comment": "Estimating the dynamics of the latents while incorporating calcium dynamics is an important consideration. Estimating the calcium kernel with the dynamics is an interesting challenge. However, here the kernel is known by the authors, and it seems like they essentially stick on a known kernel at the output of LFADS. Moreover, given that the authors use relatively clean synthetic data, and the performance gains are minimal at best, it remains to be seen whether this approach has any advantages.", "rigor_comment": "The approach seems principled and technically sound. I commend the authors on their sincere and well-performed benchmarking. The actual results are just not much better than the stepwise deconvolution + LFADS, which is unfortunate. The authors could have added more noise to their dynamics / poisson observations, in order to clarify the regimes in which their approach may work better than the stepwise approach. The method has promise, though, and it would be interesting to see what it looks like on real data.\n\n", "clarity_comment": "Well written and well presented.", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "Augmenting a machine learning model to estimate dynamics in neural data, although only synthetic at this point.", "intersection": "4: High", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"} |
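For readers who want to work with records like the one above, here is a minimal parsing sketch. It assumes each record is a JSON object on its own line with the field names shown in the example (`submission_content`, `review_content`, `evaluation`, `decision`); the `load_records` helper and the `records.jsonl` path are hypothetical illustrations, not an official loader for this dataset.

```python
import json

def load_records(path):
    """Yield one OpenReview record per line from a JSONL file (hypothetical layout)."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def summarize(record):
    """Pull out the title, per-reviewer evaluation scores, and the decision."""
    submission = record.get("submission_content", {})
    # Each review stores its overall rating as a string like "3: Good";
    # keep the integer prefix so scores can be compared or averaged.
    scores = [
        int(review["evaluation"].split(":")[0])
        for review in record.get("review_content", [])
        if "evaluation" in review
    ]
    return {
        "title": submission.get("title"),
        "forum": record.get("forum"),
        "evaluations": scores,
        "decision": record.get("decision"),
    }

if __name__ == "__main__":
    for record in load_records("records.jsonl"):  # hypothetical file path
        print(summarize(record))
```

Applied to the example record, this would print the paper title, the three evaluation scores (3, 3, 3), and the decision "Accept (Poster)".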
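The reviews refer throughout to the synthetic benchmark behind the reported results: Lorenz-attractor latent dynamics driving Poisson spike counts, which are convolved with a fluorescence decay kernel and corrupted with white noise. The sketch below only illustrates that style of generative process; the parameter values (decay time constant `tau`, `noise_sd`, rate scaling) are placeholder assumptions, not the values used in the paper.

```python
import numpy as np

def lorenz_latents(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system to get a 3-D latent trajectory."""
    z = np.zeros((n_steps, 3))
    z[0] = np.random.randn(3)
    for t in range(1, n_steps):
        x, y, w = z[t - 1]
        dz = np.array([sigma * (y - x), x * (rho - w) - y, x * y - beta * w])
        z[t] = z[t - 1] + dt * dz
    return z

def synthetic_fluorescence(n_cells=30, n_steps=1000, dt=0.01, tau=0.6, noise_sd=0.1):
    """Latents -> Poisson spike counts -> exponential calcium kernel -> noisy traces.

    tau (decay time constant, seconds) and noise_sd are illustrative guesses only.
    """
    z = lorenz_latents(n_steps, dt)
    z = (z - z.mean(axis=0)) / z.std(axis=0)            # standardize the latents
    readout = np.random.randn(3, n_cells) / np.sqrt(3)  # random linear map to log-rates
    rates = np.exp(0.5 * (z @ readout) - 1.0)           # per-bin firing rates
    spikes = np.random.poisson(rates)                   # Poisson spike counts
    kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)   # exponential fluorescence decay
    calcium = np.apply_along_axis(
        lambda s: np.convolve(s, kernel)[:n_steps], 0, spikes
    )
    return z, spikes, calcium + noise_sd * np.random.randn(n_steps, n_cells)
```

In the stepwise baseline the reviewers compare against, traces like these would first be deconvolved to estimate spikes and LFADS would then be fit to those estimates, whereas the paper's model infers the spikes and the latent dynamics jointly.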