AMSR / conferences_raw / neuroai19 / neuroai19_r1eA7XtILS.json
{"forum": "r1eA7XtILS", "submission_url": "https://openreview.net/forum?id=r1eA7XtILS", "submission_content": {"keywords": ["neuroscience", "reward processing", "reinforcement learning", "psychiatric disorders"], "pdf": "/pdf/d08884e548bf5f3a93bf2bbbd96aacc7b523cf63.pdf", "authors": ["Baihan Lin", "Guillermo Cecchi", "Djallel Bouneffouf", "Jenna Reinen", "Irina Rish"], "title": "Reinforcement Learning Models of Human Behavior: Reward Processing in Mental Disorders", "abstract": "Drawing an inspiration from behavioral studies of human decision making, we propose here a general parametric framework for a reinforcement learning problem, which extends the standard Q-learning approach to incorporate a two-stream framework of reward processing with biases biologically associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain. For the AI community, the development of agents that react differently to different types of rewards can enable us to understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems. Empirically, the proposed model outperforms Q-Learning and Double Q-Learning in artificial scenarios with certain reward distributions and real-world human decision making gambling tasks. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model capturing reward processing abnormalities across multiple mental conditions and user preferences in long-term recommendation systems. ", "authorids": ["doerlbh@gmail.com", "gcecchi@us.ibm.com", "djallelinfo@gmail.com", "jenna.reinen@ibm.com", "rish@us.ibm.com"], "paperhash": "lin|reinforcement_learning_models_of_human_behavior_reward_processing_in_mental_disorders"}, "submission_cdate": 1568211750151, "submission_tcdate": 1568211750151, "submission_tmdate": 1571805940434, "submission_ddate": null, "review_id": ["B1dSn6qPS", "SklH1NAcvH", "r1xNs7WsvH"], "review_url": ["https://openreview.net/forum?id=r1eA7XtILS&noteId=B1dSn6qPS", "https://openreview.net/forum?id=r1eA7XtILS&noteId=SklH1NAcvH", "https://openreview.net/forum?id=r1eA7XtILS&noteId=r1xNs7WsvH"], "review_cdate": [1569541183517, 1569543133423, 1569555355681], "review_tcdate": [1569541183517, 1569543133423, 1569555355681], "review_tmdate": [1570047540624, 1570047539723, 1570047538530], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper23/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper23/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper23/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["r1eA7XtILS", "r1eA7XtILS", "r1eA7XtILS"], "review_content": [{"title": "Very interesting idea, but results are not very convincing - and benchmarks may not be suitable for comparison to RL algorithms", "importance": "3: Important", "importance_comment": "This work has important implications for the psychiatric research community, and may be for thinking about reward normalization / reshaping in deep / tabular RL. However, the results are not yet totally convincing as it\u2019s relevant to only one simple task in a tabular setting.\n", "rigor_comment": "The method proposed is quite simple. 
There is a need for more experimentation with a wider array of tasks in order to be able to facilitate the author\u2019s claims, since the authors do not fully elucidate the connection to RL in more relevant tasks. It\u2019d be interesting to explore this idea in deep RL with commonly used tasks for the authors to be able to make the claim that they 'outperform state of the art algorithms'.\n\nOverall, the authors seem overly enthusiastic about the prospects of some of the results. The source of the performance gain appears to be possibly from reward normalization / reshaping. While there is a connection, it\u2019s not clear if psychiatric disorders are / should be the source of inspiration for doing better reward normalization / reshaping.", "clarity_comment": "While the motivation of the paper is clear, the method is not explicitly described and requires some digging to understand. Notations are not entirely clear as some deviate from standard RL notations. For instance, important algorithmic details are neglected such as how value tables are updated? Is the task tabular or approximated using deep methods? Was there an eligibility trace? Also, task could be better explained as it is non-obvious to most readers. What are the justifications for comparing with this task, which seems inherently biased to benefit algorithms that learn multi-modal distributions rather than point estimates? There is also confusion about how the numbers were generated in the end. Also, there is not enough explanations to help the reader understand the figures, especially given that the task is highly specialized and described quickly in words without explanations for the way it\u2019s decided.\n", "clarity": "2: Can get the general idea", "evaluation": "3: Good", "intersection_comment": "The authors propose using Q-learning as a framework for modeling individuals with different known reward preferences in psychiatric disorders. The intersection is there, although the authors are drawing connections in specific areas where it\u2019s lacking. For instance, RL can be characterized generally by methods of doing value updates or propagating information about rewards through history. However, the authors are using the framework to examine a very simple, two-choice task. \n", "intersection": "3: Medium", "comment": "The authors propose modeling psychiatric disorders with reinforcement learning, through tracking both a positive as well as a negative q-function. There are presentation issues, and more analyses, tests are needed to convince the reader of the authors claims that that psychiatric disorders can serve a source of inspiration for designing better RL algorithms.\n", "technical_rigor": "2: Marginally convincing", "category": "Common question to both AI & Neuro"}, {"title": "Very interesting work . Definitely deserves a platform for further discussion. ", "importance": "4: Very important", "importance_comment": "This work is important. Task domain should be expanded. ", "rigor_comment": "The work is convincing to the extend to which one can judge such brief articles. ", "clarity_comment": "The article has been very well written.", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The article uses refined AI metrics to address neurological disorders. The overall approach could be very rewarding for both fields. 
", "intersection": "3: Medium", "technical_rigor": "3: Convincing", "category": "AI->Neuro"}, {"title": "Innovative ideas in computational psychiatry but quite preliminary results", "importance": "3: Important", "importance_comment": "This intriguing study proposes to modify the classical Q-learning paradigm by splitting the reward into two streams with different parameters, one for positive rewards and one for negative rewards. This model allows for more flexibility in modelling human behaviors in normal and pathological states. Although innovative and promising, the work is quite preliminary and would benefit from comparison and validation with real human behavior.", "rigor_comment": "No comparison with human data.", "clarity_comment": "The figures are hard to parse because of the very short captions. One needs to go see Appendix C to understand what the model used (SQL) consists in.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "The work has promising implications for computational psychiatry, but probably not for RL at this point.", "intersection": "3: Medium", "comment": "It would be good to compare and fit the proposed models to real human/primate behavior in normal and pathological conditions and make testable predictions. Also, it would be very interesting to use these models to predict situations that might trigger maladaptive behaviors, by finding scenarios in which the pathological behavior becomes optimal.\n ", "technical_rigor": "3: Convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}