AMSR/conferences_raw/iclr19/ICLR.cc_2019_Conference_B1e7hs05Km.json
{"forum": "B1e7hs05Km", "submission_url": "https://openreview.net/forum?id=B1e7hs05Km", "submission_content": {"title": "Efficient Exploration through Bayesian Deep Q-Networks", "abstract": "We propose Bayesian Deep Q-Networks (BDQN), a principled and a practical Deep Reinforcement Learning (DRL) algorithm for Markov decision processes (MDP). It combines Thompson sampling with deep-Q networks (DQN). Thompson sampling ensures more efficient exploration-exploitation tradeoff in high dimensions. It is typically carried out through posterior sampling over the model parameters, which makes it computationally expensive. To overcome this limitation, we directly incorporate uncertainty over the value (Q) function. Further, we only introduce randomness in the last layer (i.e. the output layer) of the DQN and use independent Gaussian priors on the weights. This allows us to efficiently carry out Thompson sampling through Gaussian sampling and Bayesian Linear Regression (BLR), which has fast closed-form updates. The rest of the layers of the Q network are trained through back propagation, as in a standard DQN. We apply our method to a wide range of Atari games in Arcade Learning Environments and compare BDQN to a powerful baseline: the double deep Q-network (DDQN). Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster: in less than 5M\u00b11M samples for almost half of the games to reach DDQN scores while a typical run of DDQN is 50-200M. We also establish theoretical guarantees for the special case when the feature representation is fixed and not learnt. We show that the Bayesian regret is bounded by O\udbff\udc12(d \\sqrt(N)) after N time steps for a d-dimensional feature map, and this bound is shown to be tight up-to logarithmic factors. To the best of our knowledge, this is the first Bayesian theoretical guarantee for Markov Decision Processes (MDP) beyond the tabula rasa setting.", "keywords": ["Deep RL", "Exploration Exploitation", "DQN", "Bayesian Regret", "Thompson Sampling"], "authorids": ["kazizzad@uci.edu", "anima@caltech.edu"], "authors": ["Kamyar Azizzadenesheli", "Animashree Anandkumar"], "TL;DR": "Using Bayesian regression for the last layer of DQN, and do Thompson Sampling for exploration. 
With Bayesian Regret bound", "pdf": "/pdf/b453171eabd2ab1a37f6ee304a4b3f9d7e2965c9.pdf", "paperhash": "azizzadenesheli|efficient_exploration_through_bayesian_deep_qnetworks", "_bibtex": "@misc{\nazizzadenesheli2019efficient,\ntitle={Efficient Exploration through Bayesian Deep Q-Networks},\nauthor={Kamyar Azizzadenesheli and Animashree Anandkumar},\nyear={2019},\nurl={https://openreview.net/forum?id=B1e7hs05Km},\n}"}, "submission_cdate": 1538087851512, "submission_tcdate": 1538087851512, "submission_tmdate": 1545355403265, "submission_ddate": null, "review_id": ["H1gi8TCPhX", "Hkgl5de63Q", "ByecO2R6j7", "rygc6qQ82m"], "review_url": ["https://openreview.net/forum?id=B1e7hs05Km&noteId=H1gi8TCPhX", "https://openreview.net/forum?id=B1e7hs05Km&noteId=Hkgl5de63Q", "https://openreview.net/forum?id=B1e7hs05Km&noteId=ByecO2R6j7", "https://openreview.net/forum?id=B1e7hs05Km&noteId=rygc6qQ82m"], "review_cdate": [1541037395447, 1541372040455, 1540381809568, 1540926146477], "review_tcdate": [1541037395447, 1541372040455, 1540381809568, 1540926146477], "review_tmdate": [1543330602536, 1541533763732, 1541533763490, 1541533763006], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1e7hs05Km", "B1e7hs05Km", "B1e7hs05Km", "B1e7hs05Km"], "review_content": [{"title": "Major clarity issues", "review": "Update after feedback: I would like to thank the authors for huge work done on improving the paper. I appreciate the tight time constrains given during the discussion phase and big steps towards more clear paper, but at the current stage I keep my opinion that the paper is not ready for publication. Also variability of concerns raised by other reviewers does not motivate acceptance.\n\nI would like to encourage the authors to make careful revision and I would be happy to see this work published. It looks very promising. \n\nJust an example of still unclear parts of the paper: the text between eq. (3) and (4). This describes the proposed method, together with theoretical discussions this is the main part of the paper. As a reader I would appreciate this part being written detailed, step by step.\n=========================================================\n\nThe paper proposes the Bayesian version of DQN (by replacing the last layer with Bayesian linear regression) for efficient exploration. \n\nThe paper looks very promising because of a relatively simple methodology (in the positive sense) and impressive results, but I find the paper having big issues with clarity. There are so many mistakes, typos, unclear statements and a questionable structure in the text that it is difficult to understand many parts. In the current version the paper is not ready for publication. \n\nIn details (more in the order of appearance rather than of importance):\n1. It seems that the authors use \u201csample\u201d for tuples from the experience replay buffer and draws W from its posterior distribution (at least for these two purposes), which is extremely confusing\n2. 
pp.1-2 \u201cWe show that the Bayesian regret is bounded by O(d \\sqrt{N}), after N time steps for a d-dimensional feature map, and this bound is shown to be tight up-to logarithmic factors.\u201d \u2013 maybe too many details for an abstract and introduction and it is unclear for a reader anyway at that point\n3. p.1 \u201cA central challenge in reinforcement learning (RL) is to design efficient exploration-exploitation tradeoff\u201d \u2013 sounds too strong. Isn\u2019t the central challenge to train an agent to get a maximum reward? It\u2019s better to change to at least \u201cOne of central challenges\u201d\n4. p.1 \u201c\u03b5-greedy which uniformly explores over all the non-greedy strategies with 1 \u2212 \u03b5 probability\u201d \u2013 it is possible, but isn\u2019t it more conventional for an epsilon-greedy policy to take a random action with the probability epsilon and acts greedy with the probability 1 \u2013 epsilon? Moreover, later in Section 2 the authors state the opposite \u201cwhere with \u03b5 probability it chooses a random action and with 1 \u2212 \u03b5 probability it chooses the greedy action based on the estimated Q function.\u201d\n5. p.1 \u201cAn action is chosen from the posterior distribution of the belief\u201d \u2013 a posterior distribution is the belief\n6. p.2 \u201cand follow the same target objective\u201d \u2013 if BDQN is truly Bayesian it should find a posterior distribution over weights, whereas in DDQN there is no such concept as a posterior distribution over weights, therefore, this statement does not sound right\n7. p.2 \u201cThis can be considered as a surrogate for sample complexity and regret. Indeed, no single measure of performance provides a complete picture of an algorithm, and we present detailed experiments in Section 4\u201d \u2013 maybe too many details for introduction (plus missing full stop at the end)\n8. p.2 \u201cThis is the cost of inverting a 512 \u00d7 512 matrix every 100,000 time steps, which is negligible.\u201d \u2013 doesn\u2019t this depend on some parameter choices? Now the claim looks like it is true unconditionally. Also too many details for introduction\n9. p.2 \u201cOn the other hand, more sophisticated Bayesian RL techniques are significantly more expensive and have not lead to large gains over DQN and DDQN.\u201d \u2013 it would be better to justify the claim with some reference\n10. Previous work presented in Introduction is a bit confusing. If the authors want to focus only on Thompson Sampling approaches, then it is unclear, why they mentioned OFU methods. If they mention OFU methods, then it is unclear why other exploration methods are not covered (in Introduction). It is better to either move OFU methods to Related Work completely, or to give a taste of other methods (for example, from Related Work) in Introduction as well\n11. p.3 \u201cConsider an MDP M as a tuple <X , A, P, P0, R, \u03b3>, with state space X , action space A, the transition kernel P, accompanied with reward function of R, and discount factor 0 \u2264 \u03b3 < 1.\u201d \u2013 P_0 is not defined\n12. p.4 \u201cA common assumption in DNN is that the feature representation is suitable for linear classification or regression (same assumption in DDQN), therefore, building a linear model on the features is a suitable choice.\u201d \u2013 the statement is more confusing than explaining. Maybe it is better to state that the last fully connected layer, representing linear relationship, in DQN is replaced with BLR in the proposed model\n13. p.5 In eq. 
(3) it is better to carry definition of $\\bar{w}_a$ outside the Gaussian distribution, as it is done for $\\Xi_a$\n14. p.5 The text between eq. (3) and (4) seems to be important for the model description and yet it is very unclear: how $a_{TS}$ is used? \u201cwe draw $w_a$ follow $a_{TS}$\u201d \u2013 do the authors mean \u201cfollowing\u201d (though it is still unclear with \u201cfollowing\u201d)? What does notation $[W^T \\phi^{\\theta} (x_{\\tau})]_{a_{\\tau}}$ denote? Which time steps do the authors mean?\n15. p.5 The paragraph under eq. (4) is also very confusing. \u201cto the mean of the posterior A.6.\u201d \u2013 reference to the appendix without proper verbal reference. Cov in Algorithm 1 is undefined, is it equal to $\\Xi$? Notation in step 8 in Algorithm 1 is too complicated.\n16. Algorithm 1 gives a vague idea about the proposed algorithm, but the text should be revised, the current version is very unclear and confusing\n17. pp.5-6 The text of the authors' attempts to reproduce the results of others' work (from \"We also aimed to implement...\" to \"during the course of learning and exploration\") should be formalised\n18. p. 6 \"We report the number of samples\" - which samples? W? from the buffer replay?\n19. p. 6 missing reference for DDQN+\n20. p. 6 definition of SC+ and references for baselines should be moved from the table caption to the main text of the paper\n21. p. 6 Table 3 is never discussed, appears in a random place of the text, there should be note in its reference that it is in the appendix\n22. p.6 Where is the text for footnotes 3-6?\n23. p.6 Table 2 may be transposed to fit the borders\n24. p.6 (and later) It is unclear why exploration in BDQN is called targeted\n25. p.7 Caption of Figure 3 is not very good\n26. p.7 Too small font size of axis labels and titles in plots in Figure 3 (there is still a room for 1.5 pages, moreover the paper is allowed to go beyond 10 pages due to big figures)\n27. p.7 Figure 3. Why Assault has different from the others y-axis? Why in y-axis (for the others) is \"per episode\" and x-axis is \"number of steps\" (wise versa for Assault)?\n27. Section 5 should go before Experiments\n28. p. 7 \u201cWhere \u03a8 is upper triangular matrix all ones 6.\u201d \u2013 reference 6 should be surrounded by brackets and/or preceded by \"eq.\" and it is unclear what \u201call ones\u201d means especially given than the matrix in eq. (6) does not contain only ones\n29. p. 7 \u201cSimilar to the linear bandit problems,\u201d \u2013 missing citation\n30. p. 7 PSRL appears in the theorem, but is introduced only later in Related work\n31. p. 7 \u201cProof: in Theorem. B\u201d \u2013 proof is given in Appendix B?\n32. p. 8 Theorem discussion, \u201cgrows not faster than linear in the dimension, and \\sqrt(HT)\u201d \u2013 unclear. Is it linear in the product of dimension (of what?) and \\sqrt(HT)?\n33. p.8 \u201cOn lower bound; since for H = 1\u2026\u201d \u2013 what on lower bound?\n34. p.8 \u201cour bound is order optimal in d and T\u201d \u2013 what do the authors mean by this?\n35. p.8 \"while also the state of the art performance bounds are preserved\" - what does it mean?\n36. p.8 \"To combat these shortcomings, \" - which ones?\n37. p.8 \"one is common with our set of 15 games which BDQN outperformS it...\" - what is it?\n38. p.9 \"Due to the computational limitations...\" - it is better to remove this sentence\n39. 
p.9 missing connection in \"where the feature representation is fixed, BDQN is given the feature representation\", or some parts of this sentence should be removed?\n40. p.9 PAC is not introduced\n41. pp.13-14 There is no need to divide Appendices A.2 and A.3. In fact, it is more confusing than helpful with the last paragraph in A.2 repeating, sometimes verbatim, the beginning of the first paragraph in A.3\n42. In the experiments, do the authors pre-train their BDQN with DQN? In this case, it is unfair to say that BDQN learns faster than DDQN if the latter is not pre-trained with DQN as well. Or is pre-training with DQN is used only for hyperparameter tuning?\n43. p.14 \u201cFig. 4 shows that the DDQN with higher learning rates learns as good as BDQN at the very beginning but it can not maintain the rate of improvement and degrade even worse than the original DDQN.\u201d \u2013 it seems that the authors tried two learning rates for DDQN, for the one it is clear that it is set to 0.0025, another one is unclear. The word \u201coriginal\u201d is also unclear in this context. From the legend of Figure 4 it seems that the second choice for the learning rate is 0.00025, but it should be stated in the text more explicitly. The legend label \u201cDDQN-10xlr\u201d is not the best choice either. It is better to specify explicitly the value of the learning rate for both DDQN\n44. p.15 \u201cAs it is mentioned in Alg. 1, to update the posterior distribution, BDQN draws B samples from the replay buffer and needs to compute the feature vector of them.\u201d \u2013 B samples never mentioned in Algorithm 1\n45. p.15 \u201cduring the duration of 100k decision making steps, for the learning procedure,\u201d \u2013 i) \u201cduring \u2026 duration\u201d, ii) what did the authors meant by \u201cdecision making steps\u201d and \u201cthe learning procedure\u201d?, and iii) too many commas\n46. p.15 \u201cwhere $\\tilde{T}^{sample}$, the period that of $\\tilde{W}$ is sampled our of posterior\u201d \u2013 this text does not make sense. Is \u201cour\u201d supposed to be \u201cout\u201d? \u201c\u2026 the number of steps, after which a new $\\tilde{W}$ is sampled from the posterior\u201d?\n47. p.15 \u201c$\\tilde{W}$ is being used just for making Thompson sampling actions\u201d \u2013 could the authors be more specific about the actions here?\n48. p.16 \u201cIn BDQN, as mentioned in Eq. 3, the prior and likelihood are conjugate of each others.\u201d \u2013 it is difficult to imagine that an equation would mention anything and eq. (3) gives just the final formula for the posterior, rather than the prior and likelihood\n49. p.16 The formula after \u201cwe have a closed form posterior distribution of the discounted return, \u201d is unclear\n50. p.17 \u201cwe use \u03c9 instead of \u03c9 to avoid any possible confusion\u201d \u2013 are there any differences between two omegas?\n51. p.17 what is $\\hat{b}_t$?\n\nThere are a lot of minor mistakes and typos also, I will add them as a comment since there is a limit of characters for the review.\n\n\n\n\n\n\n", "rating": "4: Ok but not good enough - rejection", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}, {"title": "-", "review": "This paper proposes a method for more efficient exploration in RL by maintaining uncertainty estimates over the learned Q-value function. 
It is comparable to Double DQN (DDQN) but uses its learned uncertainty estimates with Thompson sampling for exploration, rather than \\epsilon-greedy. Empirically the method is significantly more sample-efficient for Atari agents than DDQN and other baselines.\n\n=====================================\n\nPros:\n\nIntroduction and preliminaries section give useful background context and motivation. I found it easy to follow despite not having much hands-on background in RL.\n\nProposes a novel (to my knowledge) exploration method for RL which intuitively seems like it should work better than \\epsilon-greedy exploration. The method looks simple to implement on top of existing Q-learning based methods and has minimal computational and memory costs.\n\nStrong empirical performance as compared with appropriate baselines -- especially to DDQN(+) where the comparison is direct with the methods only differing in exploration strategy.\n\nGood discussion of practical implementation issues (architecture, hyperparameters, etc.) in Appendix A.\n\n=====================================\n\nCons/questions/suggestions/nitpicks:\n\nAlgorithm 1 line 11: \u201cUpdate W^{target} and Cov\u201d -- how? I see only a brief mention of how W^{target} is updated in the last paragraph of Sec. 3, but it\u2019s not obvious to me how the algorithm is actually implemented from this, and I don\u2019t see any mention of how Cov is updated.\n\nAlgorithm 1: I\u2019d like to know more about how sample-efficiency varies with T^{sample} given that T^{sample}>1 is doing something other than true Thompson sampling. Does the regret bound hold with T^{sample}>1? Also, based on the discussion in Appendix A, approximating the episode length seems to be the goal in choosing a setting of T^{sample} -- so why not just always resample at the beginning of each episode instead of using a constant T^{sample}?\n\nTheorem 1: I don\u2019t understand the Theorem statement (let alone the proof) or what it tells me about the proposed BDQN algorithm. First, as a \u201cnon-RL person\u201d I wasn\u2019t familiar with PSRL, but I see it\u2019s later defined as \u201cposterior-sampling RL\u201d. This should be clarified earlier for readers that aren't familiar with this line of work. But this still doesn\u2019t fully explain what \u201cthe PSRL on \\omega\u201d means. If it means you follow an existing PSRL algorithm to learn \\omega, then how does the theorem relate to the proposed algorithm? I'm sure I'm missing something but the connection was unclear to me.\n\nTheorem 1: there\u2019s a common abuse of big-O notation here that should be fixed for a formal statement -- O(f(n)) by definition is a set corresponding to an upper-bound, so this should probably be written as g(n) \\in O(f(n)) rather than g(n)<=O(f(n)). (Or alternatively, just rewritten without big-O notation.)\n\nTable 2: should be reformatted to make it clear that the rightmost 3 columns are not additional baseline methods (e.g. adding a vertical line would be good enough).\n\nAppendix A.8, \u201cA discussion on safety\u201d: this section should either be much more fleshed out or removed. I didn\u2019t understand the statement at the end at all -- \u201cone can... come up with a criterion for safe RL just by looking at high and low probability events\u201d -- huh? What is even meant by \u201csafe RL\u201d in this context? Nothing is referenced.\n\nOverall, much of the writing seems quite rushed with many typos and grammatical errors throughout. 
This should be cleaned up for a final version. To give a particularly common example, there are many inline references that do not fit in the sentence and distract from the flow -- these should be changed to \\citep.\n\nHow does this compare with \u201cA Distributional Perspective on Reinforcement Learning\u201d (Bellemare et al., ICML 2017) both in terms of the approach and performance? The proposed method seems to at least superficially share motivation with this work (and uses the same Atari benchmark, as far as I can tell) but it is not discussed or compared.\n\n=====================================\n\nOverall, though many parts of the paper could use significant cleanup and clarification, the paper proposes a novel yet relatively simple and intuitive approach with strong empirical performance gains over comparable baselines.", "rating": "6: Marginally above acceptance threshold", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}, {"title": "Lacks novelty, experiments incomplete, results misinterpreted. Clear reject.", "review": "The paper proposes performing Thompson Sampling (TS) using a Bayesian Linear Regressor (BLR) as the action-value function the inputs of which are parameterized as a deterministic neural net. The authors provide a regret bound for the BLR part of their method and provide a comparison against Double Deep Q-Learning (DDQL) on a series of computer games.\n\nStrengths:\n * The paper presents some strong experimental results.\n\nWeaknesses:\n * The novelty of the method falls a little short for a full-scale conference paper. After all, it is only a special case of [3] where the random weights are restricted to the top-most layer and the posterior is naturally calculated in closed form. Note that [3] also reports a proof-of-concept experiment on a Thompson Sampling setting.\n\n * Related to the point above, the paper should have definitely provided a comparison against [3]. It is hard to conclude much from the fact that the proposed method outperforms DDQN, which is by design not meant for sample efficiency and effective exploration. A DDQN with Dropout applied on multiple layers and Thompson Sampling followed as the policy would indeed be both a trivial design and a competitive baseline. Now the authors can argue what they provide on top of this design and how impactful it is.\n\n * If the main concern is sample efficiency, another highly relevant vein of research is model-based reinforcement learning. The paper should have provided a clear differentiation from the state of the art in this field as well.\n\n * Key citations to very closely related prior work are missing, for instance [1,2].\n\n * I have hard time to buy the disclaimers provided for Table 2. What is wrong with reporting results on the evaluation phase? Is that not what actually counts? \n\n * The appendix includes some material, such as critical experimental results, that are prerequisite for a reviewer to make a decision about its faith. To my take, this is not the Appendices are meant for. As the reviewers do not have to read the Appendices at all, all material required for a decision has to be in the body text. Therefore I deem all such essential material as invalid and make my decision without investigating them.\n\nMinor:\n * The paper has excessively many typos and misspellings. This both gives negative signals about its level of maturity and calls for a detailed proofread.\n\n[1] R. 
Dearden et al., Bayesian Q-learning, AAAI, 1998\n\n[2] N. Tziortziotis et al., Linear Bayesian Reinforcement Learning, IJCAI, 2013\n\n[3] Y. Gal, Z. Ghahramani, Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML, 2016", "rating": "2: Strong rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Appealing idea; poor delivery.", "review": "Summary: The paper proposes an approximate Thompson Sampling method for value function learning when using deep function approximation.\n\nResearch Context: Thompson Sampling is an algorithm for sequential decision making under uncertainty that could provide efficient exploration (or lead to near optimal cumulative regret) under some assumptions. The most critical one is the ability to sample from the posterior distribution over problem models given the already collected data. In most cases, this is not feasible, so we need to rely on approximate posteriors, or, informally, on distributions that somehow assign reasonable mass to plausible models. The paper tries to address this.\n\nMain Idea: In particular, the idea here is to simultaneously train a deep Q network while choosing actions based on samples for the linear parameters of the last layer. On one hand, this seems sensible: a distribution over the last layer weights provides an approximate posterior over Q functions, as needed, and a linear model could work after learning an appropriate representation. On the other, this seems doable: there are close-form updates for Bayesian linear regression when using a Gaussian prior and likelihood, as proposed in the paper.\n\nPros:\n- Simple and elegant algorithm.\n- Strong empirical results in standard benchmarks.\n\nCons:\n- The paper is very poorly written; the number of typos is countless, and in general the paper is quite hard to read and to follow.\n- I share the concerns expressed in the first public comment regarding the correctness of the theoretical statements (Theorem 1) or, at least, the proposed proofs. Notation is very hard to parse, and the meaning of some claims is not clear ('the PSRL on w', 'we use w instead of w', 'the estimated \\hat{b_t}', '\\pi_t(x, a) = a = ...'). I'd appreciate a clear proof strategy outline. In addition, it'd be quite useful if the authors could highlight the specific technical contributions of the proposed analysis, and how they rely on and relate to previous analyses (Abbasi-Yadkori et al., De la Pe\u00f1a et al., Osband et al., etc).\n- I think Table 1, Figure 1, and Figure 2 are not particularly useful and could be removed.\n\nQuestions:\n- Last year, there was a paper published in ICLR [1] that proposed basically the same algorithm for contextual bandits. They reported as essential to also learn the noise levels for different actions, while in this work \\sigma_\\epsilon is assumed known, fixed, and common across actions (see paragraph to the left of Figure 2). I'm curious why not learn it for each action using an Inverse-Gamma prior as proposed in [1], or if this was actually something you tried, and what the performance consequences were. In principle, my hunch is it should have a strong impact on the amount of exploration imposed by the algorithm (see Equation 3) over time.\n- A minor comment: the dimension 'd' in Theorem 1 is a *design choice* in the proposed algorithm. 
Of course, Theorem 1 relies on some assumptions that may be harder to satisfy for decreasing values of 'd', but I think some further comment can be useful as some readers may think the theorem is indeed suggesting we should set as small 'd' as possible...\n- More generally, what are the expected practical consequences of the mismatch between the proposed algorithm (representation is learned alongside with linear TS) and the setup in Theorem 1 (representation is fixed or known, and prior and likelihood are not misspecified)?\n\nConclusion:\nWhile definitely a promising direction, the paper requires significant further work, writing improvement, and polishing. At this point, I'm unable to certify that the theoretical contribution is correct.\n\n\nI'm willing to change my score if some of the comments above are properly addressed. Thanks.\n\n\n\n\n\n[1] - Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["B1g98Je8xE", "HJe03g6WxE", "r1xz_N_kgN", "SklccwDh0X", "B1eFHHDnAQ", "H1lXah_90X", "H1e2hKSRpm", "SkgdhF4KT7", "HkxtAs4Ka7", "SkehU_4tp7", "B1xfqv4Kp7", "HJl3HrEYpX", "HJgKVVEtTX", "HkehNOmf3X", "Bye-m-3UiX"], "comment_cdate": [1545105234396, 1544831158149, 1544680554455, 1543432082167, 1543431489407, 1543306426528, 1542506932151, 1542175152357, 1542175697292, 1542174803565, 1542174602134, 1542174020298, 1542173745170, 1540663347827, 1539911960627], "comment_tcdate": [1545105234396, 1544831158149, 1544680554455, 1543432082167, 1543431489407, 1543306426528, 1542506932151, 1542175152357, 1542175697292, 1542174803565, 1542174602134, 1542174020298, 1542173745170, 1540663347827, 1539911960627], "comment_tmdate": [1545105234396, 1544938632758, 1544680554455, 1543432082167, 1543431504799, 1543306426528, 1542506932151, 1542180302268, 1542175697292, 1542174803565, 1542174602134, 1542174020298, 1542173745170, 1540663407330, 1539911960627], "comment_readers": [["ICLR.cc/2019/Conference/Paper699/Authors", "everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Area_Chair1", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper699/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, 
{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "the assumption", "comment": "Dear Akshay\n\nI totally agree with you that the d^{{H+1}/2} dependent regret upper bound does not evidence the sampling efficiency, and I also agree it is not reasonable to just assume the mentioned assumption is satisfied. \n\nI am currently working on the cases that this assumption is rigorously satisfied (e.g. following optimism under what condition this is satisfied). So far, the current stage of the proof, O(d\\sqrt(T)) with the independence assumption, is trivial and moreover the upper bound O(d^{{H+1}/2} \\sqrt(T)) is not tight, therefore the current state of the theoretical contribution is not complete yet. I would like to thank you again for your thoughtful and helpful comment on this paper.\n\nCheers"}, {"title": "update", "comment": "Dear Akshay\n\nThanks again for bringing up this important point. \n\nUpdate:\n\nFor the general case, with a modification in the current proof, I get a regret upper bound of \nd^((H+1)/2)\\sqrt(T). \n\nWith an additional prior of \nsum_{i<=t}||\\phi(x_i,\\pi_t(x_i))||^2_{\\xi_t^-1}=O(1) \nI get d\\sqrt(T).\n\nI'll update the text with this regard. \n\nCheers"}, {"title": "bias", "comment": "Dear Akshay\n\nI really appreciate this comment. I could not find any explanation of how I missed this obvious point. As you can imagine, under this assumption, the problem becomes trivial. Let me relax this assumption ( which results in bias estimators for all the time step except for time step H) and see how the bound changes. Thanks again.\n\nCheers"}, {"title": "Agreed that this is not a special case of Dropout TS", "comment": "I also disagreed with the reviewer's assessment here. I don't agree with the reviewer's choice of relevant literature or demanded baselines. This is why a fourth reviewer was brought in, and this review will be weighted accordingly."}, {"title": "Posterior Sampling RL", "comment": "Dear Ian,\n\nWe would like to thank you for kindly leaving a comment on our AnonReviewer1's thread.\n\nRegarding the analysis of model-free PSRL; BDQN up to some modification reduces to PSRL algorithm if the prior and posterior are Gaussian. As we mentioned in the paper, since maintaining the posterior distribution can be computationally intractable, in our empirical study, we approximate the posterior with a Gaussian distribution. We apologize if it is not clear from the paper. We will emphasize more on this statement.\n\nAs we mentioned before, in model-based PSRL, we generally specify the problem with a prior over MDP models, as well as a likelihood over state transitions and reward processes. As you also agreed, we are given these quantities before interacting with the environment. Consequently, in model-free PSRL, we specify the problem with a prior over Q and likelihood over the return. Similarly, we are given these quantities before interacting with the environment to pursue PSRL\n\nRegarding the discount factor \\gamma; the per episode regret is bounded above with O(1 / (1-gamma) ), but not the overall regret. Following your statement, the regret is upper bounded as O(T / (1-gamma) ) which is linear in T. 
We derived a sublinear regret of \\tild{O}(\\sqrt{T} ||(1,\\gamma,\\gamma^2,...,\\gamma^{H-1})||_2 ).\n\n\nWe would like to appreciate the thoughtful comments on our paper and also thank you for taking the time to leave a comment on our AnonReviewer1 review regarding the drop-out discussion. \n\nSincerely \nAuthors"}, {"title": "This is an interesting line of research, and it would be impactful to get this answer right!", "comment": "Dear Ian, \n\nWe also like to thank you for the time you dedicated and kindly read the revised version of our paper. As you mentioned, upon you and our four reviewers\u2019 thoughtful comments, we improved the clarity of the presentation. We believe that the merit of openreview helps authors to deliver polished and influential research contributions. With this regard, we would be grateful to you if you could leave a comment for our AnonReviewer1 regarding drop-out and its correspondence to Thompson sampling. We already referred our AnonReviewer1 to the discussion in Appendix A of your BootstrapDQN paper. Moreover, we made an additional empirical study on four Atari games to show the deficiency of dropout in providing reasonable exploration and exploitation tradeoff. We would appreciate it if you could take a time and leave a comment in the corresponding thread.\n\nRegarding the confidence set C_t: The confidence C_t is mentioned in section 4.\n\nRegarding the discount factor: Both theorem 1 and theorem 2 hold for any discount factor 0<= \\gamma<=1. We get a tight bound if we replace \\sqrt(H) with a smaller quantity ||1,\\gamma, gamma^2,...,\\gamma^(H-1)||_2. We addressed this in terms of a remark in the latest version. \n\nRegarding the choices of prior/likelihood: We apologize that the choices of prior and likelihood were not clear from the main text. We would like to restate that we do not specify the choices of prior and likelihood. They can be anything as long as they satisfy the set of assumptions on page 6, e.g., sub-Gaussianity. \n\nRegarding the inconsistency: Conditioned on any data history H_t, the posterior over w is identical to the posterior of the optimal parameter. Similar theoretical justification is also deployed in Russo et al. 2014, \u201cLearning to Optimize Via Posterior Sampling\u201d page 10, as well as many of your papers, e.g., \u201c(More) Efficient Reinforcement Learning via Posterior Sampling\u201d lemma 1. \n\nRegarding your statement on \u201cthis is not a model-free algorithm if we know the likelihood\u201d: Theoretically, given a w, the knowledge of the likelihood does not determine a model. These algorithms, also neither require to construct a model nor to store any MDP model parameters (e.g., transition kernel).\n\nIn model-based PSRL, we generally specify the problem with a prior over MDP models, as well as a likelihood over state transitions and reward processes. These quantities are given and known in the model-based PSRL framework. Consequently, in model-free PSRL we specify the problem with a prior over Q and likelihood over the return where similarly these quantities are given and known.\n\nWe agree with you that when the prior and the likelihood functions are arbitrary, then computing the posterior, as well as sampling from it, can be computationally hard. As you know, this is a principled issue with Bayesian methods. It is also an unsolved issue for model-based methods, e.g., in continuous MDPs. 
While we are excited about this line of research, we left the study of relaxing this computation complexity for the future work.\n\nWe would like to thank you again for taking the time to leave thoughtful comments and we appreciate your positive assessment of this line of research. We would be also grateful to you if you could leave a comment on our AnonReviewer1 review regarding the drop-out discussion. \n\nSincerely,\nAuthors\n"}, {"title": "Dropout, as another randomized exploration method", "comment": "Dear reviewer\nWe would like to bring your attention to the final results of the empirical study you kindly suggested. As we mentioned in our previous comment, we implemented the dropout version of the DDQN and compared its performance against BDQN, DDQN, DDQN+, as well as the random policy (the policy which chooses actions uniformly at random). We included the result of this empirical study to out paper. \n\nWe would like to mention that we did not observe much gain beyond the performance of the random policy. Please see the discussion in the related works as well as in the section A.6). In the following we provided a summary of the empirical results on four randomly chosen games;\n\nGame\t\t BDQN\tDDQN\tDDQN+\t DropoutDDQN RandomPolicy\nCrazyClimber 124k 84k 102k 19k 11k\nAtlantis 3.24M 40k 65k 7.7k 13k\nEnduro 1.12k 0.38k 0.32k 0.27k 0\nPong 21 18.8 21 -18 -20.7\n\nAs a conclusion, despite the arguments in the prior works on the deficieny of the dropout methods in providing reasonable randomizations in RL problems, we also empirically observed that dropout results in a performance worse than the performance of a plain epsilon-greedy DDQN and somewhat similar performance as the random policy's on at least four Atari games. \n\n\n\n\n"}, {"title": "The paper presents some strong experimental results.", "comment": "We appreciate the thoughtful and detailed comments by the reviewer. In the following, we addressed the comments raised by the reviewer which helped us to revise the draft and make our paper more accessible.\n\n\nRegarding the Dropout: Drop-out, as another randomized exploration method is proposed by Gal & Ghahramani (2016), but Osband et al. (2016) argue about the deficiency of the estimated uncertainty and hardness in driving suitable exploration and exploitation trade-off from it. Please look at Appendix A in Osband et. al. 2016 https://arxiv.org/pdf/1602.04621.pdf. \nOsband el. Al. 2016 states that \u201cThe authors of Gal & Ghahramani (2016) propose a heteroskedastic variant which can help, but does not address the fundamental issue that for large networks trained to convergence all dropout samples may converge to every single datapoint... even the outliers.\u201d This issue with dropout methods that they result in ensemble of many models but all almost the same is also observed in the adversarial attacks and defense community, e.g. Dhillon et. al. 2018 \u201c the dropout training procedure encourages all possible dropout masks to result in similar mappings.\u201d Furthermore, after the reviewer\u2019s comment, we also implemented DDQN-dropout and ran it on 4 randomly chosen Atari games (among those we ran for less than 50M time steps (please consider that these experiments are expensive)). We observed that the randomization in DDQN-dropout is deficient and results in a performance worse than DDQN on these 4 Atari games (the experiments are half way through and are still running, the statement is based on DDQN performance after seeing half of the data). 
We will add these further study in the final version of the paper.\n\nRegarding the model-based approaches: Model-based approaches are provably sample efficient, but they are mainly not scalable to the high dimensional settings. \n\nRegarding the mentioned papers: We appreciate your suggestions and added both of the mentioned papers to our paper.\n\n\nRegarding the Table2: As it is mentioned in the draft, the reported scores in table 2 are the scores directly reported in their original papers. As discussed in the paper, we are not aware of the detailed implementation and environment choices in this paper since their implementation codes are not publicly available. In order to see why the comparison through table 2 can be problematic, please, for example, look at the reported scores if DDQN in Bootstrapped DQN (Deep Exploration via Bootstrapped DQN ) and compare them with the reported score in the original DDQN paper. You can see that there is a huge gap. For example, some of them are as follows; \nAline(2.9k,4k), Amidar(.7k,2.1k), Assault(5k,7k), Atlantis(65k,770k) where the first set of scores are DDQN scores in the original DDQN paper, and the second set of scores are the scores of DDQN, reported in the Bootstrap DQN paper. As you can see, direct reporting scores is not the best way of comparison and reasoning. Regarding the evaluation phase, we agree with the reviewer that scores in the evaluation phase are important when asymptotic performance is concerned but they are not sufficiently informative when regret and sample complexity are the measures of interest.\n\n\nWe hope that the reviewer would kindly consider our replies, especially about the Dropout methods, and take them into account when assessing the final scores.\n\n\n"}, {"title": "Updated draft-factored Appendix", "comment": "Dear Ian,\nI would like to inform you that I uploaded the revised version of the draft as promised. Based on your and the reviewers' great comments, I significantly improved the Appendix and with high probability, I think it is now much more clear and factored. I would be grateful to have your insightful feedbacks again. \n"}, {"title": "Simple and elegant algorithm + Strong empirical results in standard benchmarks.", "comment": "We would like to thank the reviewer for the helpful comments. We appreciate the comments and believe that they significantly improve the clarity of our paper. \n\n\nAt first, we would like to apologize for the typos. We addressed them based on the extensive review by AnonRev2.\n\nRegarding the proof: Based on Ian Osband\u2019s and AnonRev2 comments, we polished the proof and believe it is now in a good shape. We would like to bring the reviewer\u2019s attention to the revised version of the Appendix where we further improved the clarity of the expressions and derivations.\n\n\nRegarding the noise model: As the reviewer mentioned, the approach in the mentioned proceeding paper [1] is similar to BDQN (BDQN was publicly available as a workshop paper before [1] appearance). As the reviewer also mentioned, it is an interesting approach to estimate the noise level which could be helpful in practice. We would like to bring the reviewer\u2019s attention to our Lemma 1 and Eq3 where we show that the noise level is just a constant scale of the confidence interval which vanishes as O(1/sqrt(t)). Therefore, the noise level does not have a critical and deriving effect when the confidence interval is small. 
To be more accurate, it is also worth noting that one can deploy related Bernstein-based approaches as in \u201cMinimax Regret Bounds for Reinforcement Learning\u201d for the noise estimation, but this approach results in the more complicated algorithm as well as an additional fixed but intolerably big term (cubic in the dimension) in the regret. \n\n\nRegarding the design choice \u201cd\u201d: We apologize for the lack of clear clarification that \u201cd\u201d is a design choice in the theorem and cannot be set to a small value unless the assumption holds. We restated this statement in the revised draft and made it clear.\n\n\nRegarding the connection between the linear TS and theorem 1: If the prior and posterior are both Gaussian, similar to the linear Bandits case studied in Russo2014 \u201cLearning to optimize via posterior sampling\u201d, the linear TS is equivalent to the PSRL algorithm mentioned in Theorem 1.\n\n\nWe appreciate the reviewer\u2019s comments and believe they help to improve the current state of the paper. We would be grateful if the reviewer could look at the new uploaded draft.\n"}, {"title": "Significant improvement in the draft.", "comment": "We would like to thank the reviewer for taking the time to leave a clear, precise, and thoughtful review. We appreciate the reviewer\u2019s comments and believe that they significantly improved the clarity of our paper. In the following, we describe how we addressed them.\n\n\n1) We agree with the reviewer that the use of sample for both experience and Thompson sampling is confusing. It was not revealed to us until the reviewer mentioned that. We appreciate this comment and addressed it in the new draft.\n\n2) We restated it in the abstract\n3) We changed the statement\n4) The reviewer is totally right. We fixed the explanation.\n5) Fixed\n6) We apologize for the confusion. In high level, we motivate the fact that both BDQN and DDQN approach a similar linear problem (same target value) but one via Bayesian linear regression other via linear regression.\n\n7) Full stop added. We appreciate the comment.\n8) We elaborated more on this statement and expressed that it is an additional computation cost of BDQN over DQN for Atari games. \n\n9) That is a great point. We added a reference. \n10) As the reviewer knows, most of theoretical advances and analyses in the literature are mainly dedicated to OFU based approaches. So far, most of the guarantees for Thompson sampling methods are not as tight as OFU for many problems. In this part of our paper, we try to motivate Thompson sampling as a method which works better than OFU in practice even though the guarantees aren\u2019t as tight. In addition, we want to motivate that TS based methods are also computation more favorable than OFU. Furthermore, later in the theoretical study, we also prove the regret bound for OFU, but as the reviewer can see, the OFU based methods in the linear models, as usual, consist of solving NP-hard problems due to their inner loop optimization steps.\n\n11) Thanks for pointing it out, we added it to the draft.\n12) We rephrase it as you suggested. \n13) That is a great point. Fixed\n14) The definition of a_TS is restated in the new draft. W is the matrix fo most recent sampled weight vectors..\n15) We addressed that after the submission. \n16) We added a more detailed explanation. \n17) We restated these. \n18) Addressed\n19) Fixed\n20) Addressed\n21) Addressed\n22) The footnotes unexpectedly disappeared. 
We appreciate your precise comment and addressed it.\n23) Fixed\n24) Addressed\n25) Great point, addressed\n26) Yes, we will address this \n27) That is a great point, it is a typo, we addressed it.\n27) Changed\n28) Fixed \n29) Cited\n30) We provided the detailed algorithm. \n31) Fixed\n32) Restated\n33) Restated\n34) It means the T and d dependency in upper bound matches the T and d dependency in the lower bound. Please let us know if it is required to rephrase it.\n\n35) Fixed\n36) Addressed\n37) It means that one of the games among those 5 games in levine et al. 2017 is common with our set of 15 games.\n\n38) Removed\n39) Addressed\n40) Addressed\n41) Fixed\n42) The pretrained DQN is just used to ease the hyperparameter tuning. BDQN itself start from scratch.\n43) Fixed\n44) Fixed\n45) Fixed\n46) Fixed\n47) Restated\n48) We had the definition of the prior and likelihood in our previous submission where one of our reviewers mentioned that it is obvious and no need to express the prior and likelihood for Bayesian linear regression.\n\n49) Fixed\n50) Fixed\n51) Fixed\n\nWe would like to thank the reviewer again for taking the time to leave the thoughtful and precise review. We applied the rest of the comments directly on to the revised draft. These comments significantly helped us to improve our paper.\n"}, {"title": "Strong empirical performance as compared with appropriate baselines - More clear description for the main Algorithm.", "comment": "\nWe would like to thank the reviewer for taking the time to leave a thoughtful review. In the following, we addressed the comments raised by the reviewer which helped us to revise the draft and make it more accessible.\n\nRegarding the posterior updates: We apologize for the lack of clarity. The Cov is computed by applying Bayesian Linear regression on a subset of randomly sampled tuples in the experience replay buffer. We expressed it in a more detail in Eq.3 of the new draft. Regarding the updates of target models: Similiar to DQN paper, we update the target feature network every T^{target} and set its parameters to the feature network parameter. We also update the linear part of the target model, i.e., w^{target}, every T^{Bayes target} and set the weights to the mean of the posterior distribution.\n\nRegarding the Thompson Sampling in Algorithm 1: As the reviewer also pointed out, Thompson sampling in multi-arm bandits can be more efficient if we sample from the posterior distribution at each time step, i.e., T^{sample}=1. But as it has been proven in our Theorem1 as well as Osband et al. 13, and Osband et al. 14, the true Thompson sampling can have T^{sample}>1. Moreover, as it is expressed in Appendix A.5, as long as T^{sample} is in the same order of the horizon length, the choice of T^{sample} does not critically affect the performance. As the reviewer also mentioned, sampling from the posterior at the beginning of each episode could marginally enhance the performance. While sampling at the beginning of episodes does not dramatically affect the BDQN performance, it provides additional information to the algorithm about the games settings which might cause a controversy in the fair comparison against other algorithms.\n\nWe apologize for the lack of a concrete definition of PSRL, and we agree with the reviewer that it should have been clearly defined. We added a clear definition of PSRL as well as a new algorithm block (now Alg2). 
The theoretical contribution in this paper suggests that if an RL agent follows the posterior sampling over weights w for exploration and exploitation trade-off, the agent\u2019s regret is upper bounded by \\tilde{O(d\\sqrt(T))}. Similar to the approach in Russo et al. 14 for linear bandits, if the prior and the likelihood are conjugates of each other and Gaussian, then BDQN is equivalent to PSRL. \n\u201cA side point: It is an interesting observation that in the proof of our theorems, we construct self-normalized processes which result in a Gaussian approximation of confidence. The Gaussian approximation of confidence also has been deployed for linear bandits in \u201cLinear Thompson Sampling Revisited\u201d. Therefore, the choice of Gaussian is well motivated.\u201d\n\nWe appreciate the comment by the reviewer on the O notation. We fixed it in the new draft.\n\nRegarding the Table 2: we added the vertical lines to both tables, the table 2 and the table 3.\n\nRegarding the safety discussion in A.8: We apologize if the statement was not clear in the discussion on safety. We added a detailed explanation in addition to a new figure for the proof-of-concept to clarify our statement. Generally, in the study of safety in RL, a RL agent avoids taking actions with high probability of low return. If the chance of receiving a low return under a specific action is high, then that action is not a favorable action. In appendix A.8 we show how the approaches studied in this paper also approximate the distribution over the return which can further be used for the safe exploration. \n\nTypos: Thanks for pointing it out. Also thanks to the reviewer 2, we addressed the typos in the new draft.\n\n\nRegarding the Distributional RL: The approach in Bellmare et al. 2017 approximates the distribution of the return (\\sum_t \\gamma r_t|x,a) rather the distribution (or uncertainty) over the expected return Q(x,a)=E[\\sum_t \\gamma r_t|x,a]. It is worth reminding that the mean of the return distribution is the Q function. Conceptually, approximating the distribution of the return is a redundant effort if our goal is to approximate the Q function. The approach in Bellmare et al. 2017 proposes first to deploy a deep neural network and approximate the return distribution, then apply a simple bin-based discretization technique to compute the mean of the approximated distribution, which is again the approximated Q function. Interestingly, Bellmare et al. 2017 empirically show that this approach results in a better approximation of the Q function. This approach is a variant of the Q-learning algorithm.\n"}, {"title": "General reply to reviewers", "comment": "Dear reviewers\nWe would like to thank the reviewers for taking the time to leave thoughtful reviews. Given these feedbacks, we have significantly improved the draft and hope the reviewers will take this into account when assessing the final scores. We appreciate the reviewers for the time and effort they dedicated to our paper. Please find individual replies to each of the reviews in the respective threads. Based on the reviewers' reviews and the comment by Ian Osband we revised the draft and uploaded the new version.\n"}, {"title": "Clarifications on the regret analysis.", "comment": "Dear Ian\nThank you for your interest in this work, and I appreciate your comments. They were helpful to improve the representation in the appendix. 
\n\nRegarding the structure of analysis, I followed the flow in Abbasi-Yadkori et al 2010 to make the theoretical contribution more transparent. Upon your feedback, I changed the presentation in the appendix and refactored pieces such that the appendix is more accessible. The Lemmas and their proofs are factored out from the main body of the theorem proof. More explanation in the components and more detailed comments on derivation are also provided. \n\na) PSRL-RLSVI: I believe the main confusion is due to the fact that I was not clear enough in stating that the theoretical analysis is for PSRL where the agent knows the exact prior and likelihood, and I apologize for the confusion. If the prior and likelihood are Gaussian, then BDQN (for the fixed representation up to some modification) is equivalent to PSRL; otherwise, BDQN approximates the posterior distribution with a Gaussian distribution, and the bound does not hold. I also added the PSRL algorithm block into the main text. In short, at the beginning of each episode, the algorithm draws a sample w_t from the w posterior distribution and follows w_t\u2019s policy. The prior and posterior need not to be Gaussian, rather known. \n\nb) Mapping from MDP to w: We consider a class of MDPs for which the optimal Qs are linear. By definition, given an underlying MDP, there exists a w^* (up to some conditions). Clearly, as you also mentioned, the mapping from MDP to w, in general, cannot be defined as a bijective mapping. Therefore, I would avoid saying w specifies the model and instead the other way, as it is also mentioned in the paper. In order to prevent any further misunderstanding, I explained it in more detail and also changed a few lines to clarify it the most. \nMoreover, in order to prove the theorem, there is no need to bring the model in the proof picture. I explained it through the model to ease the understanding of the derivation. I admit that I could explain the derivation in a better and clearer way. You can see that the same derivation can be done by adding and subtraction through Eq 6, the linear model. In order to prevent any confusion and directly carry the message, I wrote the derivation without bringing the MDP model into the proof picture in the new draft. Thanks for this comment, the current derivation without MDP model is now more transparent.\n\nc) High probability bound: When we use frequentist argument for a bound, we usually get high probability bound for either Bayesian or frequentist, e.g. \u201cMcAllester 1998\nSome PAC-Bayesian Theorems\u201d\nAs you know, when one substitutes \\delta with 1/T (or sometimes 1/T^2) we get log(T) instead of log(1/\\delta) in the bound as well as additional positive constant T/T Loss_max in the final bound. For example, your paper Osband et al 2013 and Russo et al 2014 follow the same argument. But the bound is not \u201cany time\u201danymore. In order to simplify the theorem, I set \\delta = 1/T to match the claim in Osband et al 2013. \n\nd) [General discount factor set] I apologize for the confusion. In the main text I talk about a discount factor of 1 (undiscounted), but in the appendix, I define the discount factor \\gamma to be in a closed set of [0, 1]. Please note that the upper bound on the discount factor is 1 and 1 is in the set, i.e., it contains \\gamma=1. So it should feel more general. For simplicity, I first derived the regret bound for \\gamma=1 then showed it is extendable to any \\gamma. 
I elaborated more in the new draft on how to extend the current analyses to any 0\\geq\\gamma\\geq 1. \n\n\nThank you for pointing out the typo in the in Lemma 2, I fixed that. \n\n\nThe new draft is re-organized and is much more accessible. I\u2019ll upload it to openreview when the website is open again. It would be great if you could look at it and point out the part which requires more clarification in the new draft. It would be again helpful to have your feedback on it. They are helping to improve the accessibility of the proof.\n\nSincerely yours\nAuthors\n\n"}, {"title": "A post-submission typo in the second paragraph of the introduction", "comment": "Dear reviewers and readers\n\nWe, unfortunately, noticed we made a typo post-submission in the second paragraph of the introduction section. In particular, this typo has appeared in the starting sentence, \n\"An alternative to optimism-under-uncertainty is Thompson Sampling (TS), a general sampling and randomizathttps://www.overleaf.com/1332425641pdxdghynmdhyion approach (in both frequentist and Bayesian settings) (Thompson, 1933).\" \nwhich should be replaced with \n\"An alternative to optimism-under-uncertainty is Thompson Sampling (TS), a general sampling and randomization approach (in both frequentist and Bayesian settings) (Thompson, 1933).\"\n\nWe have already addressed this issue in our draft and apologize for any inconvenience this causes. \n\nSincerely yours\nAuthors"}], "comment_replyto": ["HyxANvhHeE", "r1xz_N_kgN", "rkx5QOb_kE", "BJeXXaM30Q", "HJgYZgX30X", "HJezVrgmCX", "SkgdhF4KT7", "ByecO2R6j7", "HkehNOmf3X", "rygc6qQ82m", "B1gWLyCt27", "Hkgl5de63Q", "B1e7hs05Km", "rye7xYTnim", "B1e7hs05Km"], "comment_url": ["https://openreview.net/forum?id=B1e7hs05Km&noteId=B1g98Je8xE", "https://openreview.net/forum?id=B1e7hs05Km&noteId=HJe03g6WxE", "https://openreview.net/forum?id=B1e7hs05Km&noteId=r1xz_N_kgN", "https://openreview.net/forum?id=B1e7hs05Km&noteId=SklccwDh0X", "https://openreview.net/forum?id=B1e7hs05Km&noteId=B1eFHHDnAQ", "https://openreview.net/forum?id=B1e7hs05Km&noteId=H1lXah_90X", "https://openreview.net/forum?id=B1e7hs05Km&noteId=H1e2hKSRpm", "https://openreview.net/forum?id=B1e7hs05Km&noteId=SkgdhF4KT7", "https://openreview.net/forum?id=B1e7hs05Km&noteId=HkxtAs4Ka7", "https://openreview.net/forum?id=B1e7hs05Km&noteId=SkehU_4tp7", "https://openreview.net/forum?id=B1e7hs05Km&noteId=B1xfqv4Kp7", "https://openreview.net/forum?id=B1e7hs05Km&noteId=HJl3HrEYpX", "https://openreview.net/forum?id=B1e7hs05Km&noteId=HJgKVVEtTX", "https://openreview.net/forum?id=B1e7hs05Km&noteId=HkehNOmf3X", "https://openreview.net/forum?id=B1e7hs05Km&noteId=Bye-m-3UiX"], "meta_review_cdate": 1544722422099, "meta_review_tcdate": 1544722422099, "meta_review_tmdate": 1545354509686, "meta_review_ddate ": null, "meta_review_title": "A neat idea with impressive results but has technical flaws and issues with clarity", "meta_review_metareview": "There was a significant amount of discussion on this paper, both from the reviewers and from unsolicited feedback. This is a good sign as it demonstrates interest in the work. Improving exploration in Deep Q-learning through Thompson sampling using uncertainty from the model seems sensible and the empirical results on Atari seem quite impressive. However, the reviewers and others argued that there were technical flaws in the work, particularly in the proofs. Also, reviewers noted that clarity of the paper was a significant issue, even more so than a previous submission. 
\n\nOne reviewer noted that the authors had significantly improved the paper throughout the discussion phase. However, ultimately all reviewers agreed that the paper was not quite ready for acceptance. It seems that the paper could still use some significant editing and careful exposition and justification of the technical content.\n\nNote, one of the reviews was disregarded due to incorrectness and a fourth reviewer was brought in.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper699/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1e7hs05Km&noteId=BJe0xdMggE"], "decision": "Reject"}
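For readers following the discussion of Eq. 3 and Algorithm 1 in the reviews and author responses above: the closed-form Bayesian linear regression update behind BDQN's Thompson sampling step can be written in a few lines. The sketch below is illustrative only; it assumes a zero-mean Gaussian prior with variance sigma_prior^2, a known noise scale sigma_eps, and per-action feature/target arrays gathered from the replay buffer. Function and variable names are placeholders, not the authors' code.

import numpy as np

def blr_posterior(Phi, y, sigma_eps=1.0, sigma_prior=1.0):
    # Closed-form Gaussian posterior over the last-layer weights of one action.
    # Phi: (n, d) features phi_theta(x_i) of transitions where this action was
    # taken; y: (n,) regression targets (the usual DDQN-style bootstrapped targets).
    d = Phi.shape[1]
    precision = Phi.T @ Phi / sigma_eps**2 + np.eye(d) / sigma_prior**2
    cov = np.linalg.inv(precision)            # posterior covariance (Xi_a)
    mean = cov @ (Phi.T @ y) / sigma_eps**2   # posterior mean (w_bar_a)
    return mean, cov

def thompson_action(phi_x, post_means, post_covs, rng):
    # Draw one weight vector per action from its Gaussian posterior and act
    # greedily with respect to the sampled linear Q-values.
    sampled_q = [rng.multivariate_normal(m, c) @ phi_x
                 for m, c in zip(post_means, post_covs)]
    return int(np.argmax(sampled_q))

In this reading of the algorithm, the per-action (mean, cov) would be refit from replay-buffer features on the paper's posterior-update schedule and fresh weights redrawn every T^{sample} steps, while the deeper layers continue to be trained by back-propagation as in standard DQN, so only a d x d matrix (d = 512 for the architecture described) ever needs to be inverted.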