diff --git "a/conferences_annotated/token_level/review_sen_per_line.tsv" "b/conferences_annotated/token_level/review_sen_per_line.tsv"
new file mode 100644
--- /dev/null
+++ "b/conferences_annotated/token_level/review_sen_per_line.tsv"
@@ -0,0 +1,1402 @@
+sentence_id sentence labels topic
+graph20_25_2_0 The submission presents evaluation of BendyPass, a prototype based on Bend Passwords design [33], with visually impaired people. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality
+graph20_25_2_1 The prototype is a simplified version of Bend Passwords [33] geared towards users who are visually impaired. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality
+graph20_25_2_2 The evaluation consisted of two sessions (taking place one week apart) in which participants first created their passwords and then used them to sign in. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality
+graph20_25_2_3 The experiment compared BendyPass with standard PIN security feature on touchscreen devices. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality
+graph20_25_2_4 The results show that although it took longer for participants to create their passwords with BendyPass, they were able to recall and enter them quicker with BendyPass than with PIN. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality
+graph20_25_2_5 This submission contributes new knowledge about how users who are visually impaired can enter passwords ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality
+graph20_25_2_6 The main strength of the paper is the experimental user study design with users who are visually impaired ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality
+graph20_25_2_7 It is particularly important to evaluate technology with target stakeholders ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality
+graph20_25_2_8 The paper is well written : the work is motivated well, the related work is mostly comprehensive, and the design and evaluation sections are clear and have enough detail for others to attempt to reproduce/replicate the study ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality
+graph20_25_2_9 However, there are two main weaknesses: 1) the submission narrowly focuses on bend passwords, and 2) the evaluation compares BendyPass against only one baseline ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality
+graph20_25_2_10 The paper never justifies why Bend Passwords [33] is the best design to adapt for users who are visually impaired ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_25_2_11 There are many other potential designs out there and the paper does not fully explore the potential design space before picking Bend Passwords [33]. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_12 For example, an equally feasible alternative is a design that uses a small physical numerical keyboard that users can carry with them and enter passwords even from their pockets (the haptic feedback that such a keyboard would enable would allow such interaction). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_13 Such alternative design is similar to BendyPass along many dimensions (e.g., users need to carry an additional device, but offers a more familiar interface). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_14 Other designs exist (e.g., work by Das et al. (2017) is just one example. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_15 Thus, the paper should better position the proposed design/prototype within this design space ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_25_2_16 This brings up another issue: the PIN baseline is the current de facto standard, but other baselines (e.g., physical PIN from the previous paragraph) would position the work better and help justify use of BendyPass very different and unfamiliar interaction modality ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_25_2_17 Also, entering PIN on touchscreen devices is notoriously difficult for people who are visually impaired, so it is no wonder that BendyPass outperforms it ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_25_2_18 Thus, ideally the evaluation would compare other ways that participants can enter PIN passwords ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_25_2_19 In summary, this is an interesting paper that will contribute to the GI community. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_20 Thus, I look forward to seeing this paper as part of the program. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_21 REFERENCES Sauvik Das, Gierad Laput, Chris Harrison, and Jason I. Hong. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_23 Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_24 In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI 17). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_25_2_25 Association for Computing Machinery, New York, NY, USA, 37643774. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_26_3_0 Thank you for submitting a revised version of this submission, and addressing concerns raised in the previous round of reviews. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_26_3_1 I reviewed the previous submission as R2. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_26_3_2 pseudo-url The submitted modifications show a marked improvement in the exposition of the work. ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_26_3_3 In particular, clarifications around the motivation behind the path tracing task, and additional related work that have utilized path tracing to determine endpoints (e.g., [17], [18]) and to mark or detect features along a path (e.g., [66]) were helpful in positioning the contributions of this work in relation to prior work ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_26_3_4 I am satisfied with the changes in the modified manuscript , and changing am my recommendation to accept. ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_26_3_5 However, I noted that there are several typos throughout the text , and I recommend a thorough editing pass for the camera ready ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_26_3_6 For example, page 3: HoloLense -> Hololens. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_0 Through four studies, this paper proposes to lift a theoretical limitation in the application range of the Dual Gaussian Distribution Model, namely that it could also work when touch acquisition occurs from a touchscreen to that same touchscreen. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_1 This paper is well written and shows good experiment design and consistent analyses ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_29_3_2 However I found the theoretical argument to use the DGDM in screen-to-screen pointing quite hard to follow , even though it is the main point of this article. ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_3 I also have a number of concerns that I would like to see addressed in a revision ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_4 BLAMING AGE Honestly, I found it quite a weak argument to put the lack of generalization of the approach on age (p. 10 ). ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non'] paper quality +graph20_29_3_5 Age difference is one among many possible explanations, but one in which this paper rushes in nevertheless, at the expense of any other ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_6 The paper doesn't even acknowledge that this lack of success could simply be due to a lower external validity than the authors hoped for ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_7 "As the authors state themselves p. 9, ""A common way to check external validity is to apply obtained parameters to data from different participants.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_8 Checking can also come up negative, and that is ok. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_9 These results remain valid , even if the proposed approach is not as context-independent as hoped ['pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_10 "Perhaps worse, the paper immediately jumps from this patched-together explanation, straight to calling it a ""novel finding"", and then to suggesting design guidelines from it, as if it was now a proven fact" ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_11 I think this part needs to be drastically shortened or even removed, in favor of a more realistic discussion about generalization---and possible lack thereof ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_12 "UNLIMITING"" I found it quite hard to understand the point of Bi et al. for rejecting screen-to-screen pointing, at least the way it is explained in this paper" ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_13 "That, in turn, makes it quite difficult to understand the counter-argument developed in this paper---and especially since ""The evidence comes from a study by Bi et al ."" (p. 4), which makes one wonder why Bi et al. put that ""limitation"" up in the first place" ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_14 One example, in the last paragraph before EXPERIMENTS (p. 4), a point is made that goes like this: - a lack of effect might be due to A values that are too close to each other, - even if A should in fact have an effect according to some model (Eq. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_15 12), - and for some reason that makes it ok to consider that screen-to-screen pointing is compatible with Bi et al.'s model (which does not consider A). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_16 DESIGN APPLICATIONS I am not sure that the possible applications of this model are well described or argued for in this paper ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_17 The described examples feel rather artificial ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_18 In the example given in p. 1 (choosing between 5 or 7-mm circular icons), it is unclear why the designer would need a model, or to know by how much a 7-mm icon would improve accuracy ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_19 It seems that this sort of design issues can be solved using threshold values under which users simply cannot accurately acquire a target ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_20 I assume that strong design guidelines already exist for this? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_21 Similar argument about the second and third paragraphs in p. 9 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_22 "The level of detail argued here seems quite artificial , e.g. ""If designers want a hyperlink to have a 77% success rate""." ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_23 I doubt many designers would consider a clickable, 2.4-mm high font or icon on a touch screen in any case. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_24 I might be wrong. ['non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_25 "by reducing the time and cost of conducting user studies, our model will let them focus on other important tasks such as visual design and backend system development, which will indirectly contribute to implementing better, novel UIs.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_26 "p. 
2) That seems quite a stretched ""contribution"", at least in the absence of actual data about how long designers do spend on testing width values today" ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_27 AMOUNT OF ERROR Throughout the paper, prediction errors (additive) up to 10% are described as small, and that is surprising (5% in Exp 1, 10% in Exp 2, 7% in Exp 3, 10% in Exp 4). ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_28 To the best of my understanding, these are not percentages of prediction error (e.g. going from 50 to 55 is a 10% increase), which would be more ok. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_29 These are differences between values that are already expressed in percents. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_30 In my experience, many pointing studies have error rates ranging from 0 to, say, 15%, perhaps more when the tasks or input devices make it particularly difficult. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_31 2-mm targets on a touch device could definitely count as difficult. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_32 However, that still makes a 10% prediction error quite high in my book, and worthy of contextualization. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_33 Perhaps I misunderstood something. ['non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_34 the error rate difference was |29 38| = 9%. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_35 "Similarly, their 2D tasks showed only small differences in error rate, up to 2% at most.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_36 "First, for a metric that can often be between 0 and 15%, 2 and 9% are not ""similar"" values" ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_37 Second, 29% and 38% error seems alarmingly high ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_29_3_38 CLARITY Removing tap points that are further than a fixed distance away from the target center will likely affect W levels differently. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_39 I imagine that more of these errors occurred in the W=10mm condition. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_40 This would be good to report , either way, even though only a small number of trials was removed overall. ['con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_29_3_41 Fig. 12 should also show the actual success rates measured in these studies. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_0 This paper presents QCue, a tool to assist mind-mapping through suggested context related to existing nodes and through question that expand on less developed branches, including two studies, a detailed description of the algorithm design, and rater evaluation of their results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_1 The first study explores how users respond to new node ideas suggested by the tool and whether that creates more detailed maps. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_2 The second study expands on those findings to balance the depth and breadth of mind maps creation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_3 Both studies compare the new mind mapping tool to digital options without computer assistance. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_4 They find that QCue produces more balanced and detailed mind maps and that some mind mapping tasks may be better suited to this type of computer intervention than others. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_35_1_5 Overall, this paper is an interesting exploration of a novel area of computer supported brainstorming ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_35_1_6 The two studies are well-described and designed studies ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_35_1_7 The level of detail in the algorithm description is a particular strength, giving a clear picture of how it works and why those choices were made ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_35_1_8 One small point that could be clarified is why a between subjects design was chosen over a counterbalanced within subjects ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_35_1_9 Finally, the discussion would benefit from some more general discussion, before the limitations, on the overall findings and what they mean for mind mapping and similar applications moving forward ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_35_1_10 The results are individually compelling , but what does it mean all together ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_35_1_11 This research is well-written and a good contribution to the area of brainstorming , and it would be interesting to get more of a complete sense of the results ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_0 This paper presents a projection system to help unexperienced people to draw latte art on a cappuccino. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_1 There is a user study comparing participants performance with the system, and with watching explanatory videos only. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_2 The results suggest that participants perform better with the system. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_3 This is overall an interesting idea of interactive system supporting skill acquisition ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_36_1_4 The system remains simple ['pro', 'pro', 'pro', 'pro'] paper quality +graph20_36_1_5 This will not be a revolution , but it might be of interest ['con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_36_1_6 To begin with, there is little details about the design rationale ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_7 What are the design choices ['con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_8 The system does not seem to follow a particular rationale ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_9 The fact that participants complained about the lack of information about syrup pouring reveals that this is more a trial and error approach than an informed design procedure ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_10 There is no clue about scalability neither ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_11 To which extent the system supports other patterns ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_12 For example between the hears and the leaf the syrup is either a series of dots or a continuous line. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_13 This inevitably has an effect on syrup pouring. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_14 Are there other patters with features not presented in these three ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_15 Looking at table 1 makes me think these instructions are quite clear on how to make these 3 patterns. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_16 I wish there was a condition with these schematics only ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_17 But it also makes me think about the actual difficulty of performing such art (I never tried myself). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_18 I expected more discussion on this point in the paper ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_19 It would have been a good start for a design rationale ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_20 The experiment procedure give little details about participants background ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_21 How did authors ensure homogeneity of the groups ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_22 Last, I would like to talk about the results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_23 First of all I am unsure a pixel comparison metric is fair ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_24 The projection method inevitably show the precise spot for pouring syrup. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_25 But in the other condition, participants could have perform just as well, with a slight rotation or translation ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_26 This might have affected the metric, with no real impact on the perceived result. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_27 The discussion mentions participants who felt the drawing were similar while the metric showed they were not ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_28 What is the objective : people's perception or a metric? ['con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_36_1_29 Also, how many times could participants practice ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_30 The results presented in appendix do not seem so different , and I think the result will be even more similar with a little practice ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_36_1_31 In summary, the idea is interesting , but the design rationale is unclear, and it is unclear the results justify using this system ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_0 The paper explores the possibilities of reviewing and visualising patient-generated data from a range of stakeholders consisting mainly of healthcare providers and patients. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_1 The authors utilise a range of methods in order to better understand the attitude and perspective of both participants to provide relevant and appropriate design insights for developing tools to support the visualisation of data collected during a clinical visit. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_2 First, the authors attempted to identify a gap in the literature concerning how visualisation designs can support the review and analysis of user-generated data. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_3 What is missing is a clear articulation of the research problem and question within the literature provided ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_4 It read like some form of a haphazard account of few studies that point to the relevance of tracking and visualising patient data in order to inform better health decisions, and ultimately a better lifestyle ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_5 Although sections 2 attempts to situate the research question into the context of varied perspectives , a better justification of the stake for the field would have been made clearer had it being the section doesn't read as if its an analysis of prior data, and not of related works ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_39_2_6 Second, I particularly appreciate the authors' use of different methods (focus group, interviews, and observation) but fail to see an understanding of the needed sensitivity towards participants with some form of a chronic condition ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_7 Its well known that 'chronic conditions might take a different form and thus interpreted within a particular context; this makes the contribution of the paper marginal, as one would expect a clear articulation of how the method is chosen to fit into the context of the wider literature on similar issues and ultimately the nature of the study participants ['non', 'non', 'non', 'non', 'non', 'non', 
'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_8 We need more detail to determine whether what the data suggest reflect the subjective perspective of the different users that participated in the study ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_9 Thirdly, from the discussion of the findings, quotes appear unpacked ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con'] paper quality +graph20_39_2_10 How representative is it , whats the bigger picture , can it be generalised to other not known scenarios ['con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_11 The analysis of the patient's interview provided a bigger picture of the different perspectives, and which makes the different factors more relational and understandable. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_12 Overall, the analysis lacks clarity, rigour and situated in the literature ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_13 With a few grammatical typos, it reads as a thread of different perspective, with little grounding in HCI and related field ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_14 Lastly, in HCI, there is a movement towards ideas about participatory design, user-centred design, value-sensitive design and so on. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_15 A utilisation of these perspectives in framing the research ideas would have done more good to the paper than proposing a new design space for visualisation of user-generated data ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_16 From the guidelines outlined in section 9, it is hard to pinpoint new learning that the paper provides to the visualisation of subsequent design practices apart from restating well-known design insights ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_17 There is the question of how the data and the proposed guidelines might bring about some implications for design (Dourish, 2006) and practice. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_18 Although the issues of implication for design has been misunderstood and widely misrepresented, what the proposed design guideline sought to point to might be regarded as some form of outlining implications for a design practice that is minimal and non-representative ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_2_19 This makes the paper weak, lacking impactful significance , and thus leaning would not argue strongly towards acceptance. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_2_20 I would encourage the authors to situate the research questions into the broader literature and determine whether they fit into some of the well-established methods informing the designing of health-related technologies . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +graph20_39_3_0 This paper describes the exploration of designing data visualizations of daily medical records by patients, and what kinds of visualizations may assist providers in best keeping track with an patients medical status. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_1 The authors perform three phases: An interview with providers to assess their needs, sessions with patients to gather their unique medical history and develop several visualizations for their data, and going back to providers with these visualizations to gather their ideas of how well these visualizations would assist them. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_2 The authors then suggest some design guidelines at the end for developing usable patient data visualizations. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_3 I enjoyed the paper ['pro', 'pro', 'pro', 'pro'] paper quality +graph20_39_3_4 It is a qualitatively-driven paper , but I believe it provides much insight into what providers would like in patient visualizes, and takes into account how patients already record their information ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_39_3_5 The writing is clear and the paper is easy to read ['pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_39_3_6 There are a few comments I have about the paper that I describe below. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_7 The description of each patient drags on a little long , and much of it does not become useful after in the later sections , since particular medical history is not referenced in later sections. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_8 While identifying the uniqueness of each patients medical conditions and how/why they record information is important , I think this could be greatly shortened to the most pertinent points to demonstrate the differences ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_3_9 I would have also liked to see some of the images of the visualizations for myself ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_3_10 Another concern I have is about the disparity between the emphasis on how each patients medical history (and in turn, visualization) is unique, and then the proposal of general design guidelines for creating patient visualizations ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_3_11 It seemed that the initial statement was that general guidelines were not useful because of the uniqueness at each patient. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_39_3_12 I would have liked a little more discussion on the limitations of the authors proposed guidelines at the end and how did or did not mitigate this issue ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_39_3_13 I think these changes/clarifications can be made easily, and therefore I would argue for the acceptance of this paper pending these changes. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_43_1_0 "This paper presents two variations of the standard Fitts' law study, to understand the effect of (1) a situation where targets initially appear with a given size (called the ""visual width"" in the paper) but are revealed to have a larger clickable size revealed once the cursor gets close (called the ""motor width"") or vice versa; and (2) different gaps between targets arranged side-by-side." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_43_1_1 Models are fit which account for these differences, on both new data gathered from 12 participants, and data sets gathered from several past studies. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_43_1_2 Overall, I found the design of the study to be sound , as is the data analysis and modeling methodology. 
['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_43_1_3 I also think that the overall motivation of understanding whether interfaces with distinct visual and motor widths (to use the paper's terms) is interesting ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_43_1_4 Despite the above, I am not very enthusiastic about this paper ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_5 While I appreciate the overall motivation , I'm not sure if a Fitts' law study is the right approach for going about understanding the effects of these kinds of interfaces ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_6 Or, put in a different way, I'm not sure if the study results are all that valuable for designers (given that it's looking at 1D pointing), or whether this type of interface is common enough that it's useful to have a new Fitts' law formula to account for it ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_7 The situation in which motor width differs from visual width seems fairly niche overall , and the examples cited in the introduction where visual width is greather than motor width seems like a situation that will almost always be due to poor interface implementation, rather than a conscious design decision ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_8 "In addition to the above concerns about the contribution of the paper, the term ""motor size"" is already used in Blanch et al .'s CHI 2004 work to refer to the situation where the control-display gain is manipulated to create objects with a higher or lower size in motor space as compared to their visual space on screen, work which is not cited in this paper" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_9 It seems awkward to use such a similar term here, when C-D manipulation is not the focus 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_10 Finally, I found the study results to be difficult to interpret , as many of the results subsections are ANOVA output with little interpretation and commentary to help the reader understand what was found ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_43_1_11 Based on the above, I feel the paper is marginally below the acceptance threshold. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_0 The paper proposes a new visualization scheme that combines the properties of scatterplots and parallel coordinates plots (PCPs): the Cluster-Flow Parallel Coordinates Plot (CF-PCP). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_1 The visualization represents clusters of data points in multivariate data by duplicating axes from the canonical PCP visualization to represent 2D subspaces of the multivariate data. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_2 This approach preserves the readability of correlational patterns from the original PCP while making cluster assignments more obvious than alternatives relying on edge bundling and on just the use of line color. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_3 The implementation of the proposed visualization requires tackling several interesting aspects including a scheme to connect lines between duplicated axes by drawing Hermite spline segments that preserve the line slopes at the axes and a layout optimization based on an A* algorithm to compute the shortest path ordering of duplicated axes. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_4 The results are demonstrated on several example datasets and contrasted against visualizations using traditional PCP and scatterplots. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_45_2_5 This is a nice paper that I believe proposes and novel and useful visualization scheme ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_45_2_6 However, there is one key weakness which prevents me from being more positive with respect to acceptance : an evaluation of the proposed visualization in practical use through a user study is absent ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_45_2_7 The benefits of the visualization are only demonstrated through qualitative results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_45_2_8 The paper would have been significantly stronger if the expected benefits were measured in a practical scenario ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_1 is a tool for authoring object component behaviour within VR. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_2 With this, users can select part of a VR object, assign an animation behaviour, and preview it. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_3 The tool is a very useful and novel contribution , although I have some questions about the validity of the use case scenario ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_4 "The system requires that the virtual objects are implemented in a way that they do not only present an outside facade but also contain primitives of its components not displayed on the outside (i.e., ""internal faces"")." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_5 This is briefly addressed in the limitations , but I would have found some discussion of this aspect very helpful, especially earlier when introducing the research motivation ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_6 "How likely are designers of 3D objects to include such ""internal faces""; is this common?" 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_7 The paper further assessed the tool in an exploratory study looking at usability and induced workload, with promising results ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_53_2_8 This consisted of a small user study (N=16) featuring qualitative and quantitative measures. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_9 The latter assessed usability (SUS) and workload (NASA TLX) and custom miscellaneous items. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_10 Some issues in the study reporting: - What was the scale range for the prior experience questions ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_11 "The quantitative data is described as ""qualitative"" for some reason, even when referring to barplots in Figure 9." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_12 "Finally we saw a high rating for the perception of realism and feelings of immersion in the environment (Q10) ( = 5.88, = 0.78).""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_13 Q10 only refers to realism - where is the immersion aspect coming from here? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_14 For some reason, the actual qualitative aspects of the study are then reported as a subsection in the discussion (6.3 - Comment Observations). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_15 I strongly recommend that this be moved to a subsection of the previous section , i.e., the Results section. 
['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_16 The actual discussion of the results unfortunately is very limited (especially because large parts of it consist of qualitative reporting), and are mostly a summary, rather than a contextualization of the results within existing work, or statements on implications of the results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_17 The paper does discuss limitations , but I think that this section should also address the fact that the study was largely preliminary / exploratory in nature ; there was no comparison condition, nor a discussion of what a baseline condition might look like for this context ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_53_2_18 Despite these weaknesses with regards to the study reporting and discussion, the paper is interesting and showcases good and novel work and I think the GI community would benefit from its presentation (albeit with some changes as suggested above). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_53_2_19 "General minor issues: - ""users authoring process"" -> ""users' authoring process""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_0 The authors describe the design and implementation of a shape-based brushing technique targeted at selecting a particular type of data - trajectories. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_1 "These are notoriously difficult to select directly due to issues of occlusion and the ""hairball"" effect when there are many trajectories intertwines, as is the case with eye tracking, network, or flight trails data." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_2 The authors do an excellent job of describing the problem and grounding the approach in previous work ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_56_1_3 The approach is interesting and the use cases described demonstrate the technique well ['pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_56_1_4 However, the paper is weakened by several writing and organizational aspects, and by an odd off-hand report of user feedback ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_5 The basics of the technique are well-described : the user draws a shape that the system then selects matches for, based on two similarity metrics (one calculated by Pearson's coefficient and the other by a PCA algorithm). ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_6 As these two metrics deliver different candidates, the resulting set of trajectories is provided to the user in a set of small multiples illustrating the selected trajectories and sorted by similarity; the user can refine selections, although it was not clear how ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_7 There appears to be a set of small multiples for each of the two metrics. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_8 One main weakness of the paper is manifested here: I found the description of the bins, and how they are calculated, quite confusing ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_9 I had to re read the paper back and forward to finally tease out what I think is the way it works ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_10 Overall, the writing and the organization of the paper suffered from similar issues ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_11 A similar problem occurred with a critical aspect of the brushing technique : direction. 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non'] paper quality +graph20_56_1_12 The authors state directionality is a critical advantage of their brushing technique, but never actually stipulate how direction is specified in the original share definition ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_13 I assuming - as one would consider the obvious choice - that directionality is taken from the direction of the sketched brush at the time the user draws it. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_14 But this is not clear ['non', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_15 IN fact, the whole way the user draws the shape is poorly described ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_16 The nice video provided was helpful in showing this technique ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_56_1_17 However, the video alludes to something not mentioned in the paper about directionality : only the Pearson algorithm identifies direction, and even from the video it was not clear how the user selected it ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_18 These critical areas of confusion around how the process actually unfolds from start to finish should have been more clearly described ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_19 I found it odd that at the authors retained both metrics, delivering different results, without trying some blended version that might reduce complexity for the user ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_20 One would expect that trying some combination would be an obvious step , especially given the unclear feedback from the expert review. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_21 The last point leads me to what I see as *the* major weakness of the paper. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_22 Having reviewed this approach with experts, the authors state that the experts did not get it, and so they choose to describe the system with a use-case method. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_23 In fact, this reads as if the feedback from the experts was so bad that they did not want to describe it. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_24 Why dont they include the feedback ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_25 Surely they found out useful information. ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_26 It sounds like a classic case of theres nothing wrong with our system, just change the user ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_56_1_27 Because of that last point, I am somewhat on the fence about this paper, but am willing to consider that it might be acceptable. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_56_1_28 Id like to see an inclusion of the user review. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_0 This submission reports on the creation of a system to help medical residents and their reviewers to assess their learning using an information visualization dashboard, designed for and with them in a participatory process, deployed in their setting, and evaluated with them through a longitudinal study. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_1 Quality The methodology employed for conducting this research sources methods from diverse fields and is relevant ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_2 Clarity The presentation is very clear, with pertinent textual and visual explanations ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_3 Originality The review of related work is varied across relative disciplines and well positioned ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_4 Signifiance The system has been designed and developed and evaluated so that it ended up being useful to domain experts (medical residents and their reviewers) ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_5 I advocate for accepting this submission. 
['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_6 ABSTRACT Abstract provides information that is ideally expected : one sentence of context, summary of contribution, explanation of system and methodology. ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_7 "I would suggest to use active voice instead of passive to clarify who contributed what (""The system was developed"", ""...was installed"")" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_8 INTRODUCTION The motivation and context is sound, with references on how information visualization and dashboards support learning analytics or educational data visualization ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_9 The proposed methodology of design and development relies on well established practices: eliciting requirements through focus groups, designing using action design research framework, implementation through agile development, evaluating the system through uncontrolled longitudinal studies and feedback sessions ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_10 Obtained results are supported with clear metrics ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_11 RELATED WORK The related work is well balanced with a review on visualization dashboards and visualization in medical training with references from diverse related research communities ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_12 "One reason for this gap seems to be the lack of collaboration among the developers, end-users and visualization experts.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_13 The passive voice of the sentence does not help to identify who posited this reason : the authors of the submission or Vieira et al. [36]? 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_14 "Also, before initiating collaborations, I would say that all parties must first be aware of each others contributions, so I would rephrase the reason as a ""lack of communication"" among them" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_15 APPLICATION BACKGROUND This section conveniently introduces domain-specific terms and thus contributes to make the paper standalone in understanding the context ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_16 Requirement analysis was conducted through focus groups including active participation of domain experts (including involving them in sketching their desired features for data presentation) ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_17 Data characterization is assorted with visibly clear understanding and explanation of the domain ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_18 Q1 can be reformulated with plural to avoid gender bias (so that this is harmonized with similar efforts along the paper) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_19 VISUALIZATION DESIGN The rationale for visualization design is clearly explained and illustrated ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_20 The choice for visualizing rotation schedules using an interval chart rather than a more space-consuming Gantt chart widespread in time/project management is smart ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_21 The decisions on color scales adjustments to highlight under-performance while shadowing over-performance on EPA count per rotation is well motivated by contextual needs ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_22 Figure 4: I would suggest to split the figure into 2 rows (3.5 and 3.6) and annotate columns in black font over white paper background, instead of white font over blue application background : with a low zoom level on my PDF reader, I had first confused these annotations with potential widgets in the application ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 
'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_23 "For further inspiration on visualization for comparing (resident) profiles, I'd suggest to browse other works by Plaisant et al. in addition to [29] : pseudo-url pseudo-url IMPLEMENTATION DETAILS The implementation details report on constraints that may be too project-specific (with occurrences of ""project"" or ""the University"") and would gain to be generalized" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_24 "Congratulations for opensourcing the code to potentially help other institutions with medical programs (""across Canada"", or beyond?)." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_25 The responsive design choice is great for multiple device access with various form factors ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +graph20_61_2_26 Rendering in SVG with d3 might pose issues regarding accessibility, where efforts for compliance are left at the discretion of application developers rather than library developers ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +graph20_61_2_27 See pseudo-url USER EVALUATION AND FEEDBACK The user evaluation and feedback proposes analysis of user logs that informed changes in metrics for measuring improvement in learning program once their system was adopted by residents and reviewers; and their feedback. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_28 "I would suggest the following references to inform analysis of user logs : - H. Guo, S. R. Gomez, C. Ziemkiewicz and D. H. Laidlaw, ""A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights,"" in IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 51-60, 31 Jan." 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_29 2016. doi: 10.1109/TVCG.2015.2467613 - Papers from the IEEE VIS'16 Workshop: Logging Interactive Visualizations & Visualizing Interaction Logs pseudo-url DESIGN CHOICES AND INSIGHTS GAINED I found the design considerations to be mostly obvious and known to designers and developers of user interfaces and information visualization. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_30 LIMITATIONS AND FUTURE WORK The limitations are mainly focused on the specificity of project requirements to one University in Canada, the small sample size of participants to evaluations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_31 SUPPLEMENTARY VIDEO The video introduces the application domain and showcases diverse tasks supported by the tool presented in the submission. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +graph20_61_2_32 Audio quality of the voice over could be improved with a proper microphone and recording settings. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1049_1_0 This work proposes a variant of the column network based on the injection of human guidance. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1049_1_1 The method does not make major changes to the network structure, but by modifying the calculations in the network. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1049_1_2 Human knowledge is embodied in a defined rule formula. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1049_1_3 The method is flexible and different entities correspond to different rules ['pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1049_1_4 However, the form of knowledge is limited and simple ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1049_1_5 Experiments have shown that the convergence speed and results are improved, but not significant ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1049_1_6 "Minor Example 2: ""A"" -> ""AI""" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_0 This paper discusses State Representation Learning for RL from camera images. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1091_1_1 Specifically, it proposes to use a state representation consisting of 2 (or 3) parts that are trained separately on different aspects of the relevant state: reward prediction, image reconstruction and (inverse) model learning. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1091_1_2 The paper is easy to read, and seems technically sound ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1091_1_3 However, the conclusions do not directly follow from the results, so should be made more precise ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_4 The contribution is minor, and the reasoning behind it could be better motivated ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_5 The most important point of critique is that the conclusion that the split representation is the best is at best premature ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_6 The presented results indicate that SRL is useful (Table 1), and that auto-encoding alone is often not enough ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1091_1_7 Other than that, the different approaches tested all work well in different tasks ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1091_1_8 The discussion of the results reflects this, but the introduction and conclusion suggest otherwise ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_9 The same problem also occurs for the conclusion about the robustness of SRL approaches ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_10 In the main text, no results are presented that warrant such a conclusion ['con', 
'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_11 The appendix includes some tests in this direction , but conclusions should not be based on material that is only available in the appendix ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_12 Furthermore, even the tests in the appendix are not comprehensive enough to to warrant the conclusion as written ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_13 "The second point is the motivation of the split approach: it seems in direct contradiction with the ""disentangled"" and ""compact"" demands the authors pose" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_14 Because the parts of the state that are needed for multiple different prediction tasks (reconstruction, inverse model, etc.) need to be in the final state representation multiple times. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1091_1_15 Due to the shared feature extractor, the contradictory objectives (and hence the need for tuning of the weights in the cost function) are still a potential problem ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_16 Minor points: - The choice for these tasks is not motivated well ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_17 Please indicate why these tasks are chosen. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1091_1_18 It seems the robot arm task is very similar to the navigation task, due to robot arm's end effector being position controlled directly ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_19 Why is it worthwhile to study this task separately ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_20 The GTC metric is not very well established (yet) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_21 Please provide some extra information on how it is calculated. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1091_1_22 This should also include some discussion on why this metric allows judging sufficiency and disentangledness ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_23 How would rotating the measurement frame of the ground-truth influence the results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1091_1_24 Why are the robotics priors not in Table 1? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_0 This work is an extension to the work of Sukbaatar et al. (2016) with two main differences: 1) Selective communication: agents are able to decide whether they want to communicate. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_1 2) Individualized reward: Agents receive individual rewards; therefore, agents are aware of their contribution towards the goal. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_2 These two new extensions enable their model to work in either cooperative or a mix of competitive and competitive/collaborative settings. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_3 The authors also claim these two extensions enable their model to converge faster and better. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_4 The paper is well written, easy to follow, and everything has been explained quite well ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1291_3_5 The experiments are competent in the sense that the authors ran their model in four different environments (predator and prey, traffic junction, StarCraft explore, and StarCraft combat). ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_6 The comparison between their model with three baselines was extensive ; they reported the mean and variance over different runs. 
['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_7 "I have some concerns regarding their method and the experiments which are brought up in the following: Method: In a non-fully-cooperative environment, sharing hidden state entirely as the only option for communicate is not very reasonable ; I think something like sending a message is a better option and more realistic (e.g., something like the work of Mordatch & Abbeel, 2017) Experiment: The experiment ""StarCraft explore"" is similar to predator-prey; therefore, instead of explaining StarCraft explore, I would like to see how the model works in StarCraft combat" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_8 Right now, the authors explain a bit about the model performance in Starcraft combat, but I found the explanation confusing ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_9 Authors provide 3 baselines: 1) no communication, but IR 2) no communication, no IR 3) global communication, no IR (commNet) I think having a baseline that has global communication with IR can show the effect of selective communication better. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1291_3_10 There are some questions in the experiment section that have not been addressed very well ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_11 For example: Is there any difference between the results of table 1, if we look at the cooperative setup ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_12 Does their model outperform a model which has global communication with IR ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_13 Why do IRIC and IC work worst in the medium in comparison to hard in TJ in table1 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1291_3_14 Why is CommNet work worse than IRIC and IC in table 2 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1333_1_0 This paper proposes a new set of heuristics for learning a NN for generalising a set of NNs trained for more specific tasks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1333_1_1 This particular recipe might be reasonable , but the semi-formal flavour is distracting ['con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1333_1_2 The issue of model selection (clearly the main issue here) is not addressed ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1333_1_3 A quite severe issue with this report is that the authors don't report relevant learning results from before (+-) 2009, and empirical comparisons are only given w.r.t. other recent heuristics ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1333_1_4 This makes it for me not possible to advice publication as is. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1399_1_0 In my opinion this paper is generally of good quality and clarity, modest originality and significance ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1399_1_1 Strengths: - The experiments are very thorough ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1399_1_2 Hyperparameters were honestly optimized ['pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1399_1_3 The method does show some modest improvements in the experiments provided by the authors ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1399_1_4 The analysis of the results is quite insightful ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_1399_1_5 Weaknesses: - The experiments are done on CIFAR-10, CIFAR-100 and subsets of CIFAR-100. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1399_1_6 These were good data sets a few years ago and still are good data sets to test the code and sanity of the idea, but concluding anything strong based on the results obtained with them is not a good idea ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_7 The authors claim the formalization of the problem to be one of their contributions. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1399_1_8 It is difficult for me to accept it ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1399_1_9 The formalization that the authors proposed is basically the definition of curriculum learning ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_10 There is no novelty about this ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_11 The proposed method introduces a lot of complexity for very small gains ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_12 While these results are scientifically interesting , I don't expect it to be of practical use ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_13 The results in Figure 3 are very far from the state of the art ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_14 I realize that they were obtained with a simple network, however, showing improvements in this regime is not that convincing ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_15 Even the results with the VGG network are very far from the best available models ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_16 I suggest checking the papers citing Bengio et al. (2009) to find lots of closely related papers. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_1399_1_17 In summary, it is not a bad paper , but the experimental results are not sufficient to conclude that much ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_1399_1_18 Experiments with ImageNet or some other large data set would be advisable to increase significance of this work . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr19_242_2_0 This paper tested a very simple idea: when we do large batch training, instead of sampling more training data for each minibatch, we use data augmentation techniques to generate training data from a small minibatch. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_1 The authors claim the proposed method has better generalization performance. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_2 I think it is an interesting idea , but the current draft does not provide sufficient support ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_4 The proposed method is very simple ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_5 In this case, I would expect the authors provide more intuitive explanations ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_6 It looks to me the better generalization comes from more complicated data augmentation, not from the proposed large batch training ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_8 It is unclear to me what is the benefit of the proposed method ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_9 Even provided more computing resources, the proposed method is not faster than small batch training ['non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_10 The improvement on test errors does not look significant ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_11 If given more computing resources, and under same timing constraint, we have many other methods to improve performance. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_12 For example, a simple thing to do is t0 separately train networks with standard setting and then ensemble trained networks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_13 Or apply distributed knowledge distillation like in (Anil 2018 Large scale distributed neural network training through online distillation) 3. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_14 The experiments are not strong ['con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_15 The largest batch considered is 64*32, which is relatively small ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_16 In figure 1 (b), the results of M=4,8,16,32 are very similar, and it looks unstable ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_17 It is unclear what is the default batchsize for Imagenet ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_18 In Table 1, the proposed method tuned M as a hyperparameter. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_19 The baselines are fairly weak , the authors did not compare with any other method ['con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_20 I would expect at least the following baselines : i) use normal large batch training and complicated data augmentation, train the model for same number of epochs ii) use normal large batch training and complicated data augmentation, train the model for same number of iterations ii) use normal large batch training and complicated data augmentation, scale the learning rate up as in Goyal et al. 2017 4. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_21 For theorem 1, it is hard to say how much the theoretical analysis based on linear approximation near global minimizer would help understand the behavior of SGD. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_22 I fail to understand the the authors augmentation ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_23 Following the authors logic, normal large batch training decrease the variability of _k and which converges to flat minima. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_24 It contradicts with the authors other explanation ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_26 In section 4.2, I fail to understand why the proposed method can affect the norm of gradient ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_28 Related works: Smith et al. 2018 Don't Decay the Learning Rate, Increase the Batch Size. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_29 after rebuttal ==================== I appreciate the authors' response, but I do not think the rebuttal addressed my concerns ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_30 I will keep my score and argue for the rejection of this paper ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_31 My main concern is that the benefit of this method is unclear ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_32 The main baseline that has been compared is the standard small-batch training. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_33 However, the proposed method use a N times larger batch and same number of iterations, and hence N times more computation resources. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_34 Moreover, the proposed method also use N times more augmented samples. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_35 Like the authors said, they did not propose new data augmentation method, and their contribution is how to combine data augmentation with large-batch training. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_36 However, I am not convinced by the experiments that the good performance is from the proposed method, not from the N times more augmented samples ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_37 I have suggested the authors to compare with stronger baselines to demonstrate the benefits. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_38 However, the authors quote a previous paper that use different data augmentation and (potentially) other experimental settings. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_242_2_39 The proposed method looks unstable ['con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_40 Moreover, instead of showing the consistent benefits of large batch, the authors tune the batchsize as a hyperparameter for different experiments ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_41 Regarding the theoretical part, I still do not follow the authors' explanation ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_242_2_42 I think it could at least be improved for clarity . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr19_261_3_0 This paper presents CoDraw, a grounded and goal-driven dialogue environment for collaborative drawing. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_1 The authors argue convincingly that an interactive and grounded evaluation environment helps us better measure how well NLG/NLU agents actually understand and use their language rather than evaluating against arbitrary ground-truth examples of what humans say, we can evaluate the objective end-to-end performance of a system in a well-specified nonlinguistic task. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_2 They collect a novel dataset in this grounded and goal-driven communication paradigm, define a success metric for the collaborative drawing task, and present models for maximizing that metric. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_3 This is a very interesting task and the dataset/models are a very useful contribution to the community ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_261_3_4 I have just a few comments below: ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_6 Im not sure how impressed I should be by these results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_7 The humanhuman similarity score is pretty far above those of the best models , even though MTurkers are not optimized (and likely not as motivated as an NN) to solve this task. 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_8 You might be able to convince me more if you had a stronger baseline e.g. a bag-of-words Drawer model which works off of the average of the word embeddings in a scripted Teller input ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_9 Have you tried baselines like these? ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_11 Please provide variance measures on your results (within model configuration, across scene examples). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_12 Are the machinemachine pairs consistently performing well together ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_13 Are the humans ['con', 'con', 'con'] paper quality +iclr19_261_3_14 Depending on those variance numbers you might also consider doing a statistical test to argue that the auxiliary loss function and and RL fine-tuning offer certain improvement over the Scene2seq base model ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_16 Framing: there is a lot of work in collaborative / multi-agent dialogue models which you have missed see refs below to start ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_17 You should link to this literature (mostly in NLP) and contrast your task/model with theirs ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_261_3_18 References Vogel & Jurafsky (2010). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_19 Learning to follow navigational directions. ['non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_21 Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_23 Unified pragmatic models for generating and following instructions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_25 Speaker-follower models for vision-and-language navigation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_261_3_27 The red one! ['non', 'non', 'non', 'non'] paper quality +iclr19_261_3_28 On learning to refer to things based on their discriminative properties. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_0 Overview: The authors aim at finding and investigating criteria that allow to determine whether a deep (convolutional) model overfits the training data without using a hold-out data set. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_1 Instead of using a hold-out set they propose to randomly flip the labels of certain amounts of training data and inspect the corresponding 'accuracy vs. randomization curves. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_2 They propose three potential criteria based on the curves for determining when a model overfits and use those to determine the smallest l1-regularization parameter value that does not overfit. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_3 I have several issues with this work ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_4 Foremost, the presented criteria are actually not real criteria (expect maybe C1) but rather general guidelines to visually inspect 'accuracy over randomization curves ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_5 The criteria remain very vague and seem be to applicable mainly to the evaluated data set (e.g. what defines a steep decrease?). 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_6 Because of that, the experimental evaluation remains vague as well, as the criteria are tested on one data set by visual inspection ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_7 Additionally, only one type of regularization was assumed, namely l1-regularization, though other types are arguably more common in the deep (convolutional) learning literature ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_8 Overall, I think this paper is not fit for publication, because the contributions of the paper seem very vague and are neither thoroughly defined nor tested ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_9 Detailed remarks: General: A proper definition or at least a somewhat better notion of overfitting would have benefitted the paper ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_10 In the current version, you seem to define overfitting on-the-fly while defining your criteria. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_11 You mention complexity of data and model several times in the paper but never define what you mean by that ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_12 Detailed: Page 3, last paragraph: Why did you not use bias terms in your model? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_13 Page 4, Assumption. ['non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_14 What do you mean by the data being independent ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_15 Independent and identically distributed? ['non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_16 As in that case correlation in the data can be destroyed by the introduction of randomness making the data easier to learn. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_17 "What do you mean by ""easier to learn""?" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_19 Better training error? ['non', 'non', 'non', 'non'] paper quality +iclr19_304_3_20 I dont understand the assumptions ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_21 You state that the regularization parameter should decrease complexity of the model. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_22 Is that an assumption? ['non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_23 And how do you use that later ['non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_24 "What does ""similar scale mean" ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_25 Page 4, Monotony. ['non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_26 You state two assumptions or claims, 'the accuracy curve is strictly monotonically decreasing for increasing randomness and 'we also expect that accuracy drops if the regularization of the model is increased, and then state that 'This shows that the accuracy is strictly monotonically decreasing as a function of randomness and regularization. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_27 Although you didnt show anything but only state assumptions or claims (which may be reasonable but are not backed up here) ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_28 I actually dont understand the purpose of this paragraph ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_29 Section 3.3 is confusing to me ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_30 What you actually do here is you present 3 different general criteria that could potentially detect overfitting on label-randomized training sets. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_31 But you state it as if those measures are actually correct, which you didnt show yet ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_32 My main concern here, besides the motivations that I did not fully understand (s.b. 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_33 is the lack of measurable criteria ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_34 While for criterion 1 you define overfitting as 'above the diagonal line and underfitting as below the line, which is at least measurable depending on sample density of the randomization, such criteria are missing for C2 and C3 ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_35 Instead, you present vague of sharp drops and two modes but do not present rigorous definitions ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_36 You present a number for C2 in Section 5, but that is only applicable to the present data set (i.e. assuming that training accuracy is 1). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_37 Criterion 2 (b) is not clear ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_38 "I neither understand ""As the accuracy curve is also monotone decreasing with increasing regularization we will also detect the convexity by a steep drop in accuracy as depicted by the marked point in the Figure 1(b)"" nor do I understand ""accuracy over regularization curve (plotted in log-log space) is constant ""?" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non'] paper quality +iclr19_304_3_39 Does that mean that you assume that whenever the training accuracy drops lower than that of the model without regularization, it starts to underfit? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_304_3_40 Due to the lack of numerical measures , the experimental evaluation necessarily remains vague by showing some graphs that show that all criteria are roughly met by regularization parameter on the cifar data set ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_304_3_41 In my view, this evaluation of the (vague) criteria is not fit for showing their possible merit . 
['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr19_495_1_0 This paper generalizes basic policy gradient methods by replacing the original Gaussian or Gaussian mixture policy with a normalizing flow policy, which is defined by a sequence of invertible transformations from a base policy. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_495_1_1 Although the concept of normalizing flow is simple, and it has been applied to other models such as VAE , there seems no work on applying it for policy optimization ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_495_1_2 Thus I think this method is itself interesting ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_495_1_3 However, I find the paper written in a way assuming readers very familiar with related concept and algorithms in reinforcement learning ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_4 Thus although one can get the general idea on how the method works , it might be difficult to get a deeper understanding on some details ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_5 For example, normalizing flows are defined in Section 4, and then it is directly claimed that normalizing flows can be applied to policy optimization, without giving details on how it is actually applied, e.g., what is the objective function ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_6 and why one needs to compute gradients of the entropy (Section 4.1)? 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_495_1_7 Also, in the experiments, it is said that one can combing normalizing flows with TRPO without describing the details ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_8 I can't get how exactly normalizing flows + TRPO works ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_9 The experiments also talk about 2D bandit problem, and again, without any descriptions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_10 BTW, in the Section 4.3, what does [-1, 1]^2 mean ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_11 I have seen {-1, 1}^2, but not [-1, 1]^2). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_495_1_12 It seems that the authors only use the basic normalizing flow structures studied in Rezende&Mohamed (2015) and Dinh et al (2016) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_13 However, there are more powerful variants of normalizing flows such as the Multiplicative Normalizing Flows or the Glow ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_14 I wonder how good the results are if these more advanced versions are used. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_495_1_15 Maybe they can uniformly outperform Gaussian policy? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_495_1_16 Update: I feel the idea of this paper is straightforward, and the contribution is incremental ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_495_1_17 To improve the paper, stronger experiments need to be performed . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr19_601_3_0 Authors propose a decoder arquitecture model named Subscale Pixel Network. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_601_3_1 It is meant to generate overall images as image slice sequences with memory and computation economy by using a Multidimensional Upscaling method. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_601_3_2 The paper is fairly well written and structured, and it seems technically sound ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_601_3_3 Experiments are convincing ['pro', 'pro', 'pro'] paper quality +iclr19_601_3_4 Some minor issues: Figure 2 is not referenced anywhere in the main text ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_601_3_5 Figure 5 is referenced in the main text after figure 6 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_601_3_6 Even if intuitively understandable, all parameters in equations should be explicitly described (e.g., h,w,H,W in eq.1) ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_0 This paper proposes the deep reinforcement learning with ensembles of Q-functions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_1 Its main idea is updating multiple Q-functions, instead of one, with independently sampled experience replay memory, then take the action selected by the ensemble. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_2 Experimental results demonstrate that the proposed method can achieve better performance than non-ensemble one under the same training steps, and the decision space can also be stabilized. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_3 This paper is well-written ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_659_2_4 The main ideas and claims are clearly expressed ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_659_2_5 Using ensembles of Q-function can naturally reduce the variance of decisions, so it can speed up the training procedure for certain tasks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_6 This idea is simple and works well ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_659_2_7 The main contribution is it provides a way to reduce the number of interactions with the environment. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_8 My main concern about the paper is the time cost. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_9 Since the method requires updating multiple Q-functions, it may cost much more time for each RL time step, so Im not sure whether the ensemble method can outperform the non-ensemble one within the same time period ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_659_2_10 This problem is important for practical usage ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_659_2_11 However, the authors didnt show these results in the paper ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_659_2_12 Minor things: + The main idea is described too sketchily ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_659_2_13 I think more examples, such as in section 8.1, should be put in the main text ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_0 The paper proposes a modular approach to the problem of mapping instructions to robot actions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_1 The first of two modules is responsible for learning a goal embedding of a given instruction using a learned distance function. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_2 The second module is responsible for mapping goals from this embedding space to control policies. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_3 Such a modular approach has the advantage that the instruction-to-goal and goal-to-policy mappings can be trained separately and, in principle, allow for swapping in different modules. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_4 The paper evaluates the method in various simulated domains and compares against RL and IL baselines. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_5 STRENGTHS + Decoupling instruction-to-action mapping by introducing goals as a learned intermediate representation has advantages, particularly for goal-directed instructions ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_866_1_6 Notably, these together with the ability to train the components separately will generally increase the efficiency of learning ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_866_1_7 WEAKNESSES - The algorithmic contribution is relatively minor, while the technical merits of the approach are questionable ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_8 The goal-policy mapping approach would presumably restrict the robot to goals experienced during training, preventing generalization to new goals ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_9 This is in contrast to semantic parsing and symbol grounding models, which exploit the compositionality of language to generalize to new instructions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_10 The trajectory encoder operates differently for goal-oriented vs. trajectory-oriented instructions, however it is not clear how a given instruction is identified as being goal- vs. 
trajectory-oriented ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_11 While there are advantages to training the modules separately , there is a risk that they are reasoning over different portions of the goal space ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_12 A contrastive loss would seemingly be more appropriate for learning the instruction-goal distance function ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_13 The goal search process relies on a number of user-defined parameters - The nature of the instructions used for experimental evaluations is unclear ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_14 Are they free-form instructions ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_15 How many are there ['con', 'con', 'con', 'con'] paper quality +iclr19_866_1_16 Where do they come from ['con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_17 How different are the familiar and unfamiliar instructions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_18 Similarly, what is the nature of the different action spaces ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_19 The domains considered for experimental evaluation are particularly simple ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_20 It would be better to evaluate on one of the few common benchmarks for robot language understanding, e.g., the SAIL corpus, which considers trajectory-oriented instructions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_21 The paper provides insufficient details regarding the RL and IL baselines, making it impossible to judge their merits ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_22 The paper initially states that this distance function is computed from learned embeddings of human demonstrations, however these are presumably instructions rather than demonstrations ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_23 I wouldn't consider the results reported in Section 4.5 to be ablative studies ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_24 The paper incorrectly references Mei et al. 2016 when stating that methods require a large amount of human supervision (data annotation) and/or linguistic knowledge. 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_25 In fact Mei et al. 2016 requires no human annotation or linguistic knowledge. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_866_1_26 "Relevant to the discussion of learning from demonstration for language understanding is the following paper by Duvallet et al. Duvalet, Kollar, and Stentz, ""Imitation learning for natural language direction following through unknown environments,"" ICRA 2014 - The paper is overly verbose and redundant in places" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_866_1_27 There are several grammatical errors - The captions for Figures 3 and 4 are copied from Figure 1 ['con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_938_3_0 Summary Authors present a decentralized policy, centralized value function approach (MAAC) to multi-agent learning. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_1 They used an attention mechanism over agent policies as an input to a central value function. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_2 Authors compare their approach with COMA (discrete actions and counterfactual (semi-centralized) baseline) and MADDPG (also uses centralized value function and continuous actions) MAAC is evaluated on two 2d cooperative environments, Treasure Collection and Rover Tower. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_3 MAAC outperforms baselines on TC, but not on RT. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_4 Furthermore, the different baselines perform differently: there is no method that consistently performs well. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_5 Pro - MAAC is a simple combination of attention and a centralized value function approach ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_938_3_6 Con - MAAC still requires all observations and actions of all other agents as an input to the value function, which makes this approach not scalable to settings with many agents ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_938_3_7 The centralized nature is also semantically improbable , as the observations might be high-dimensional in nature, so exchanging these between agents becomes impractical with complex problems. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_938_3_8 MAAC does not consistently outperform baselines , and it is not clear how the stated explanations about the difference in performance apply to other problems ['con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_938_3_9 Authors do not visualize the attention (as is common in previous work involving attention in e.g., NLP) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_938_3_10 It is unclear how the model actually operates and uses attention during execution ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_938_3_11 Reproducibility - It seems straightforward to implement this method, but I encourage open-sourcing the authors' implementation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_0 Summary This paper proposes an evolutionary-based method for the multi-objective neural architecture search, where the proposed method aims at minimizing two objectives: an error metric and the number of FLOPS. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_1 The proposed method consists of an exploration step and an exploitation step. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_2 In the exploration step, architectures are sampled by using genetic operators such as the crossover and the mutation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_3 In the exploitation step, architectures are generated by a Bayesian Network. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_4 The proposed method is evaluated on object classification and object alignment tasks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_5 Pros - The performance of the proposed method is better than the existing multi-objective architecture search methods in the object classification task ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_997_3_6 The effect of each proposed technique is appropriately evaluated ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr19_997_3_7 Cons - The contribution of the proposed method is not clear to me ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_997_3_8 The proposed method is compared with the existing multi-objective methods in terms of classification accuracy, but if we focus on that point, the performance (i.e., error rate and FLOPs) of the proposed method is almost the same as those of the random search judging from Table 4 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_997_3_9 It would be better to compare the proposed method to the existing multi-objective methods in terms of classification accuracy and other objectives ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_997_3_10 This paper argues that the choice of the number of parameters is sub-optimal and ineffective in terms of computational complexity. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_11 Please provide more details about this point. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr19_997_3_12 For example, what is the drawbacks of the number of parameters, what is the advantages of FLOPs for multi-objective optimization ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_997_3_13 Please elaborate on the procedure and settings of the Bayesian network used in this paper ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr19_997_3_14 It would be better to provide discussions of recent neural architecture search methods solving the single-objective problem . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr20_1042_2_0 This paper tackles the problem of catastrophic forgetting when data is organized in a large number of batches of data (tasks) that are sequentially made available. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_1 To avoid catastrophic forgetting, the authors learn a VAE that generates the training data (both inputs and labels) and retrain it using samples from the new task combined with samples generated from the VAE trained in the previous tasks (generative replay). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_2 In this way, there's no need to store all past data and even the first learned batch keeps being refreshed and should not be forgotten. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_3 I like that this paper uses a single global probabilistic model instead of separate discriminative and generative ones ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1042_2_4 Unfortunately, there are several things that left me unconvinced about this paper: 1) Presentation of the paper - Variables x, y, z are introduced and talked about without explanation ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_5 The graphical model or factorization assumptions are not even mentioned until after the loss has been defined ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_6 A normal flow is to first describe the model and what the involved variables mean, and then talk about what the loss for learning it should be, not the other way around. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_7 "Text contradicting the equation : ""In order to balance the individual loss terms, we normalize according to dimensions and weight the KL divergence with a constant of 0.1""." ['con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_8 But equation (2) shows a loss with no weighting. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_9 I'm assuming the text is correct, but then a beta should be added to the equation in front of the KL divergence. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_10 Tables and figures are inconveniently far from where they are referenced in the text ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_11 2) Theoretical inconsistencies Although the system might work overall, two things seem to be technically incorrect : - The decoder and classifier are expected to approximate the distribution of training data according to the authors (for valid generative replay). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_12 This is not true in a beta-VAE ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_13 The weighting of the KL that the authors introduce is going to bias the learned generator towards the high probability regions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_14 This is not a sound mechanism to achieve an as-faithful-as-possible (limited by the expressiveness of the encoder-decoder architectures) approximation to the training data ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_15 A Weibull distribution is used to model the same data, again, in a different way. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_16 I.e., there are two different probabilistic models modeling the same data in inconsistent ways and one or the other is used depending on the part of the system ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_17 As an example, q(z) could be arbitrarily multimodal as far as the encoder is concerned, but the Weibull seems to force one mode per class. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1042_2_18 But regardless of this, both models are inconsistent .) 
['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'non', 'non'] paper quality +iclr20_1042_2_19 Similarly, the proposed rejection sampling scheme of OCDVAE is not consistent with the theory of VAEs and it's a post-hoc tweak that is not theoretically expected to provide a pdf of data with lower KL divergence to the true data pdf ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1042_2_20 3) Experiments Finally, the experimental results do not look very compelling , it seems to be overall worse than the baselines in the two image datasets and slightly better in the audio dataset, so it's unclear that this approach is superior ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_0 This paper proposes studying adversarial examples from the perspective of Bayes-optimal classifiers. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_1 They construct a pair of synthetic but somewhat realistic datasetsin one case, the Bayes-optimal classifier is *not* robust, demonstrating that the Bayes-optimal classifier may not be robust for real-world datasets. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_2 In the other case, the Bayes-optimal classifier is robust, but neural networks fail to learn the robust decision boundary. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_3 This demonstrates that even when the Bayes-optimal classifier is robust, we may need to explicitly regularize/incentivize neural networks to learn the correct decision boundary. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_4 The contribution of the two datasets (the symmetric and asymetric CelebA) is, in my opinion, an extremely important contribution in studying adversarial robustness and on their own these datasets warrant further study ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1493_2_5 Previously, all studies of this sort had to be done with small-scale classifiers and simplistic datasets such as Gaussians. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_6 The paper also definitively proves that there are realistic datasets where the Bayes-optimal classifier is non-robust, which goes against quite a bit of conventional wisdom in the field and opens up many new paths for research ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1493_2_7 However, there are a few (in my opinion) critical concerns that currently bar me from strongly recommending acceptance of the paper ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_8 I outline these below. ['non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_10 Prior work: the paper seems to ignore a plethora of prior work around studying adversarial robustness and understanding its roots ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_11 For example, a few very closely related works are as follows: - Adversarial examples are not Bugs, they are Features (pseudo-url): Ilyas et al (2019) demonstrate that adversarial perturbations are not in meaningless directions with respect to the data distribution, and in fact a classifier can be recovered from a labeled dataset of adversarial examples. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_12 While not in conflict with this work, it does closely relate and discuss many of the same issues discussed in this work, so relating them would be fruitful ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_13 A Discussion of Adversarial Examples are not Bugs they are Features (pseudo-url): Nakkiran (2019) actually constructs a dataset (called adversarial squares) where the Bayes-optimal classifier is robust but neural networks learn a non-robust classifier due to label noise and overfitting. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_14 Interestingly, they also construct a dataset where they Bayes-optimal classifier is robust and neural networks *do* learn a robust classifier (adversarial squares sans label noise). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_15 While I think the datasets presented in this work are much more interesting and certainly more realistic , this work should be put in context ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_16 Excessive Invariance causes Adversarial Vulnerability (pseudo-url): Jacobsen et al offers an explanation for adversarial examples based on the fact that NNs are not sensitive to many task-relevant changes in inputs, which seems to tie in nicely to the discussion in this paper, as under the presented setup the Bayes-optimal classifier will certainly exploit (and be somewhat sensitive) to such changes. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_17 Adversarially robust generalization requires more data (pseudo-url): Schmidt et al show a setup where many more samples are required for adversarial robustness than for standard classification error. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_18 And it seems to have very relevant connections to your work. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_19 In general this list is not comprehensive either: there are many relevant connections to the robustness-accuracy tradeoff (pseudo-url, pseudo-url), and other works. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_21 Discussion/interpretation of the results: - Sufficient vs necessary: While the experimental design and results are both of very high quality , I am slightly confused about the interpretation of the results ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_22 First, if my understanding of the paper is correct, the experiments show that (a) the Bayes-optimal classifier can be non-robust in real-world settings, and (b) even when the Bayes-optimal classifier is robust, NNs can learn a non-robust decision boundary. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_23 "In particular, (b) indicates that it may be *necessary* to design regularization methods that steer NNs towards the correct decision boundaryit says nothing about whether these regularization methods will be *sufficient* , which the paper seems to suggest, e.g. in the abstract ""our results suggest that adversarial vulnerability is not an unavoidable consequence of machine learning in high dimensions, and may often be a result of suboptimal training methods used in current practice.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_24 In fact, if real-world datasets end up being like the asymmetric dataset, then the results of this paper would actually indicate the *opposite* of the above statement ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_25 It is unclear on what basis one can say that real-world datasets are more like the symmetric case or the asymmetric case ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_26 I believe a more measured conclusion (perhaps that we *need* more regularization methods, but even then we may not be able to get perfect robustness and accuracy) would better fit the strong results presented in the paper ['con', 
'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_27 CNN vs Linear SVM: I am confused about why we would expect a CNN to be able to learn the Bayes-optimal decision boundary but not the Linear SVM ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_28 The paper justifies the adversarial vulnerability of the Linear SVM by arguing that the Bayes-optimal classifier is not in the Linear SVM hypothesis class, which makes sense ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1493_2_29 The RBF SVM, for small enough bandwidth can express any function and is convex, so no argument needs to be made about its ability to find the Bayes-optimal classifier. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_30 "For CNNs, however, it is unclear if the Bayes-optimal classifier lies in the hypothesis class (there are ""universal approximation"" arguments but these usually require arbitrarily wide networks and are non-constructive)couldn't it be that the CNNs used here is in the same boat as the Linear SVM (i.e. the Bayes-optimal decision boundary is not expressible by the CNN?)" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_32 Experimental setup: - One somewhat concerning (but perhaps unavoidable) thing about the experimental setup is that all the considered datasets are not perfectly linearly separable , i.e. the Bayes-optimal classifier has non-zero test error in expectation, and moreover the data variance is full-rank in the embedded space. 
['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_33 This is in stark contrast to real datasets, where there seem to be many different ways to perfectly separate say, dogs from cats, and the variance of the data seems to be very heavily concentrated in a small subset of directions ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_34 I am concerned that these properties are what drive the Bayes-optimal classifier for the symmetric dataset to be robust (concretely, if 0.01 * Identity was not added to the covariance matrix of the symmetric model and the covariance was left to be low-rank, then any classifier which was Bayes-optimal along the positive-variance directions would be Bayes-optimal, and could behave arbitrarily poorly along the zero-variance directions, still being vulnerable). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_35 This concern does not make the contribution of the symmetric dataset less valuable , but a discussion of such caveats would help further elucidate the similarities and differences of this setup from real datasets ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_36 It is unclear if what is lacking from the NN is explicit regularization, or just more data. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_37 In particular, with such low-variance directions, at standard dataset sizes the distributions generated here are most likely statistically indistinguishable from their robust/non-robust counterparts (you can see hints of this in the fact that the CNN gets . 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_38 While completely alleviating this concern may once again be quite difficult/impossible , it could be significantly alleviated by generating training samples dynamically (at every iteration) instead of generating a dataset in one shot and training on it ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_1493_2_39 It would be very interesting to see whether these results differ at all from the one-shot approach here. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_41 A suggestion rather than a concern and not impacting my current score: but it would be very interesting to see what happens for robustly trained classifiers on the symmetric and asymmetric datasets. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_42 Overall, this paper is a very promising step in studying adversarial robustness , but concerns about discussion of prior work, discussion of experimental setup, and conclusions drawn, currently bar me from recommending acceptance ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1493_2_43 I would be more than happy to significantly improve my score if these concerns can be addressed in the revision and corresponding rebuttal. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_0 The paper introduces CATER: a synthetically generated dataset for video understanding tasks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_1 "The dataset is an extension of CLEVR using simple motions of primitive 3D objects to produce videos of primitive actions (e.g. pick and place a cube), compositional actions (e.g. ""cone is rotated during the sliding of the sphere""), and finally a 3D object localization tasks (i.e. where is the ""snitch"" object at the end of the video)." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_2 The construction of the dataset focuses on demonstrating that compositional action classification and long-term temporal reasoning for action understanding and localization in videos are largely unsolved problems, and that frame aggregation-based methods on real video data in prior work datasets, have found relative success not because the tasks are easy but because of dataset bias issues. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_3 A variety of models from recent work are evaluated on the three proposed tasks, demonstrating the validity of the above motivation for the construction of the dataset. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_4 "The primitive action classification task is ""solved"" by nearly all methods and only serves for debugging purposes." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_5 The compositional action classification task is harder and shows that incorporating LSTMs for temporal reasoning leads to non-trivial performance improvements over frame averaging. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_6 Finally, the localization task is challenging, especially when camera motion is introduced, with much space for improvement left for future work. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_7 I am positive with respect to acceptance of this paper ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_8 It is a well-argued, thoughtful dataset contribution that sets up a reasonable video understanding dataset ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1724_2_9 The authors recognize that since the dataset is synthetically generated it is not necessarily predictive of how methods would perform with real-world data, but still it can serve a useful and complementary role similar to the one CLEVR has served in image understanding ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_1724_2_10 I have a few minor comments / questions / editing notes that would be good to address: - The random baseline isn't described in the main text , it would be good to briefly mention it (this will also help to clarify why the value is particularly high for tasks 1 and 2) - The grid resolution ablation results presented in the supplement are actually quite important -- they demonstrate that with a small increase in granularity of the grid the traditional tracking methods begin to be the best performers. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_1724_2_11 As this direction (of increased resolution to make the problem less artificial) is likely to be important, a brief discussion of this finding from the main paper text would be appropriate - p3 resiliance -> resilience - p4 objects is moved -> object is moved - p6 actions itself -> actions themselves; builds upon -> build upon - p7 looses all -> loses all; suited our -> suited to our; render's camera parameters -> render camera parameters; to solve it -> to solve the problem - p8 (Xiong, b;a) and (Xiong, b) -> these references are missing the year; models needs to -> models need to - p9 phenomenon -> phenomena; the the videos -> the videos; these observation -> these observations; of next -> of the next; in real world -> in the real world ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 
'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_0 This paper proposes A*MCTS, which combines A* and MCTS with policy and value networks to prioritize the next state to be explored. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_1 It further establishes the sample complexity to determine optimal actions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_2 Experimental results validate the theoretical analysis and demonstrate the effectiveness of A*MCTS over benchmark MCTS algorithms with value and policy networks ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_2046_2_3 Pros: This paper presents the first study of tree search for optimal actions in the presence of pretrained value and policy networks ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_2046_2_4 And it combines A* search with MCTS to improve the performance over the traditional MCTS approaches based on UCT or PUCT tree policies ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_2046_2_5 Experimental results show that the proposed algorithm outperform the MCTS algorithms ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_2046_2_6 Cons: However, there are several issues that should be addressed including the presentation of the paper : The algorithm seeks to combine A* search with MCTS (combined with policy and value networks), and is shown to outperform the baseline MCTS method. 
['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_7 However, it does not clearly explain the key insights of why it could perform better ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_8 For example, what kind of additional benefit will it bring when integrating the priority queue into the MCTS algorithms ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_9 How could it improve over the traditional tree policy (e.g., UCT) for the selection step in MCTS ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_10 These discussions are critical to understand the merit of the proposed algorithms. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_11 In addition, more experimental analysis should also be presented to support why such a combination is the key contribution to the performance gain ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_12 Many design choices for the algorithms are not clearly explained ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_13 For example, in line 8 of Algorithm 2, why only the top 3 child nodes are added to the queue ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_14 The complexity bound in Theorem 1 is hard to understand ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_15 It does not give the explicit relations of the sample complexity with respect to different quantities in the algorithms ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_16 In particular, the probability in the second term of Theorem 1 is hard to parse ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_17 The authors need to give more discussion and explanation about it ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_18 This is also the case for Theorems 2-4. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_19 The authors give some concrete examples in Section 6.2 for these bounds ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_2046_2_20 However, it would be better to have some discussion earlier right after these theorems are presented ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_21 The experimental results are carried out under the very simplified settings for both the proposed algorithm and the baseline MCTS ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_22 In fact, it is performed under the exact assumption where the theoretical analysis is done for the A*MCTS. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_23 This may bring some advantage for the proposed algorithm. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_24 It is not clear whether such assumptions hold for practical problems ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2046_2_25 More convincing experimental comparison should be done under real environment such as Atari games (by using the simulator as the environment model as shown in [Guo et al 2014] Deep learning for real-time atari game play using offline monte-carlo tree search planning). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_26 Other comments: It is assumed that the noise of value and policy network is zero at the leaf node. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_27 In practice, this is not true because even at the leaf node the value could still be estimated by an inaccurate value network (e.g., AlphaGo or AlphaZero). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_28 How would this affect the results? ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_29 In fact, the proof of the theorems could be moved to appendices. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2046_2_30 In the first paragraph of Section 6.2, there is a typo : V*=V_{l*}=\eta should be V*-V_{l*}=\eta ? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_0 This paper aims at solving geometric bin packing (2D or 3D) problems using a deep reinforcement learning framework. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_1 Namely, the framework is based on the actor-critic paradigm, and uses a conditional query learning model for performing composite actions (selections, rotations) in geometric bin packing. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_2 Experiments are performed on several instances of 2D-BPP and 3D-BPP, Overall, bin packing problems are challenging tasks for DRL, and I would encourage the authors to pursue this research topic. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_3 Unfortunately, I believe that the current manuscript is at a too early stage for being accepted at ICLR , due to the following reasons: (a) The paper is littered with spelling/grammar mistakes (just take the second sentence: With the developing -> development). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_4 For the next versions of the manuscript, I would recommend using a spell/grammar checker. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_5 b) In the related work section, very little is said about Bin Packing Problems ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_6 There are various classes of BPPs, and it would be relevant to briefly present them. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_7 Moreover, BPPs have been extensively studied in theoretical computer science, with various approximation results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_8 Again, a brief discussion about those results would be relevant ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_9 Notably, several classes of geometric bin packing problems admit polynomial-time approximation algorithms (for extended surveys about this topic, see e.g. Arindam Khans Ph.D. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_10 thesis 2015; Christensen et. al. Computer Science Review 2017). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_11 c) According to the problem formulation and the experiments, it seems that the authors are studying a restricted subclass of 2D/3D bin packing problems: there is only one bin, so (it seems that) the authors are dealing with geometric knapsack problems (with rotations). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_12 Note that the 2D Knapsack problem with rotations admits a 3/2 + \epsilon - approximation algorithm (Galvez et. al., FOCS 2017). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_13 A. Khan has also found approximation algorithms for the 3D Knapsack problem with rotations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_14 So, even if those results do not preclude the use of sophisticated DRL techniques for solving geometric knapsack problems, it would be legitimate to empirically compare these techniques with the polytime asymptotic approximation algorithms already found in the literature. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_15 d) The problem formulation is very unclear ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_16 Namely, the state representation is ambiguous: pseudo-formula is obviously not a boolean variable, but a boolean vector (where each component is associated with an item) ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_17 Nothing is said about actions and transitions and rewards (we have to read the AC framework in order to get a clue of these components). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_18 We dont know if it is an episodic MDP (which is usually the case in DRL approaches to combinatorial optimization tasks). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_19 Also, it seems that the MDP is specified for a single instance of 3D-BPP. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_20 But this looks wrong since it should include the distribution of all instances of 3D-BPP. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2094_1_21 e) The Actor-Critic framework, coupled with a conditional query learning algorithm, is unfortunately unintelligible due to the fact that many notations are left unspecified ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_22 For example, in Eq (1) what are the dimensions K and V ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_23 In Eq (2) what is d_i ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con'] paper quality +iclr20_2094_1_24 In the algorithm what is n_{gae} ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_25 Also in the algorithm, what are l_i, w_i and h_i ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_26 Etc. (f) Even if the aforementioned issues are fixed, it seems that the framework is using many hyper-parameters (\gamma, \beta, \alpha_t, etc.) which are left unspecified ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2094_1_27 Under such circumstances, it is quite impossible to reproduce experiments . ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr20_2157_3_0 The paper presents expected gradients which is a method which looks at a difference from a baseline defined by the training data. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2157_3_1 The structure of the paper is strange because it discusses attribution priors but then they are not used for the method ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_2 The paper should have a single focus ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_3 Attribution priors as you formalize it in section 2 (which seems like the core contribution of the paper) was introduced in 2017 pseudo-url where they use a mask on a saliency map to regularize the representation learned. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2157_3_5 I think a few papers to have a look at are a survey article about graph based biasing pseudo-url as well as methods for using graph convolutions with biases based on graphs: pseudo-url and pseudo-url . 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_2157_3_6 Some of these should serve as baselines ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_7 It is not clear which model is used in Figure 2 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_8 It is also not clear from the literature if these models are really working so I think these results should be presented in a more detail ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_9 As I understand it, real improvements in predicting clinical variables has not been shown to be reproducible so this would be a significant claim of this paper ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_10 "It is not clear if the paper is presenting ""expected gradients"" or existing attribution priors" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_11 Most of the experiments revolve around existing attribution prior methods ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_12 So with that the paper positions itself not as a survey but as a method paper but lacks evidence that the method expected gradients performs better ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_13 I am also not clear on where the image attribution prior comes from for the image task ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_14 Where is this extra information ['con', 'con', 'con', 'con', 'con'] paper quality +iclr20_2157_3_15 Is it just smoothing ? ['con', 'con', 'con', 'con', 'non'] paper quality +iclr20_305_3_1 Summary The authors apply MARL to principal-agent / mechanism design problems where selfish agents need to be incentivized to coordinate towards a leader's (collective) goal. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_2 The leader is modeled as a semi-MDP with event-based policy gradients and modules to model/predict followers' actions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_3 "The leader sends messages to followers, an ""event"" is a pair (timestep, message of leader to a follower)." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_4 A `termination' menas that an agent should stop executing the previous selected action; the leader signals as such to the agent. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_5 With this modeling step, the authors formulate an event-based policy gradient, which considers models for which goal to send to followers and when. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_6 The authors compare this approach on 4 environments with M3RL, which also solves (extensions of) principal-agent problems. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_8 Decision (accept or reject) with one or two key reasons for this choice. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_11 Supporting arguments The approach seems sound and conceptually related to a multi-agent generalization of STRAW pseudo-url, where a planner predicts / commits to an action-plan for a single agent ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_305_3_13 Additional feedback with the aim to improve the paper. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_305_3_14 Make it clear that these points are here to help, and not necessarily part of your decision assessment. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_0 This paper presents a black-box style learning algorithm for Markov Random Fields (MRF). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_1 The approach doubles down on the variational approach with variational approximations for both the positive phase and negative phase of the log likelihood objective function. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_2 For the negative phase, the authors use two separate variational approximations, one of which involves the modeling of the latent variable prior under the approximating distribution, The approach is novel , as far as I know, though not particularly so, and I view this as one of the weak point of the paper ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_3 That said, it does seems like a fairly creative combination of existing approaches ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_4 As others have found in the past, a variational approximation to the partition function contribution to the loss function (i.e. the negative phase) results in the loss of the variational lower bound on log likelihood and the connection between the resulting approximation and the log likelihood becomes unclear. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_5 To deal with this issue, the authors argue (in Lemma 1) that the gradient of their approximate objective is at least in the same direction as the ELBO (lower bound) objective. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_6 The result is fairly obvious , but the conditions for validity have interesting consequences for the training algorithm, as it relates the approximation error to the norm of the gradient of the ELBO loss ['con', 'con', 'con', 'con', 'con', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_526_3_7 I have a minor issue with the discussion (in the last paragraph of sec. 
3.2) stating that the theoretical statement of the proposed objective relies on a much weaker assumption than the nonparametric assumption made in the theoretical justification of GANs ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_8 While I agree with the statement as such , the GAN development makes a stronger statement about the nature of the learning trajectory ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_9 Specifically, it states that the generator is minimizing a Jenson-Shannon divergence which has a fixed point at the true data density. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_10 In the current development, Theorem 1 only states that the optimization process will converge to the stationary points of the approximate ELBO objective (L1 in the paper's notation). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_11 Clarity: I found the paper to be very well written with a clear exposition of the material and sound development of the technical details ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_526_3_12 Relevance and Significance: This paper is highly relevant to the ICLR community and -- to the extent that one believes that training and inference in MRFs is important -- also significant ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_526_3_13 One this last point, it seems ironic to me that the proposed strategy for training the MRF is through the use of three separate directed graphical models (an encoder q(h | x), a decoder and a VAE to model the approximate prior over the latents h). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_14 In most modeling situations, one would simply impose the directed graphical model directly and skip the formalization in terms of an MRF. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_15 I would appreciate a more forceful motivation of the relevance of MRFs rather than just stating it as a important model with applications ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_16 What is unique about the MRF formalism that -- for practical applications -- could not be effectively captured in a directed graphical model ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_17 I note that I am aware of the theoretical representation differences between directed and undirected models, I am wondering how these differences actually matter in practical applications at scale ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_18 Experiments: The authors show the empirical advantages offered by the proposed method over the existing literature ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_526_3_19 I was surprised not to see how this model performs on the binarized MNIST dataset, and would like to see that result as well as CIFAR likelihood ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_20 MNIST, in particular, is a well studied dataset that many readers will be able to easily interpret. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_21 Its absence seems like a serious omission ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_22 "What is meant by ""RBM loss"" in Fig. 2(d), I do not see this defined" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_23 I am somewhat alarmed at the use of 100 updates of the joint model q(v,h) (K1 = 100) for every update of the other parameters ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_24 For larger scale domains, I fear this could become an important obstacle to effective model training ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_25 The comparison to PCD-1 in Fig. 
3 seems a bit unfair in that the learning curve ends at 8000 iterations, while PCD-1 continues to improve NLL ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_26 I would like to see this curve extended until we start to see signs of overfitting ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_27 Perhaps PCD-1 results in performance that is far better than AdVIL ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_28 I would also like to see a comparison to CD-k, which often outperforms PCD-k ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_29 While I understand the stance taken by the authors that these methods leverage the tractability of the conditional distributions, these strategies are sufficiently general to be considered widely applicable and a true competitor for AdVIL ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_30 With respect to Deep Boltzmann Machine (DBM), I would prefer to see quantitative comparisons against published results ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_31 Here again, MNIST would be a useful dataset ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_32 It seems as though, in the application of AdVIL to the DBM, the authors are exploiting the structure of the model in how they define their sampling procedure ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_33 Is that the case? ['non', 'non', 'non', 'non', 'non'] paper quality +iclr20_526_3_34 More detail for this application of AdVIL would be nice ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_35 Also, I would like to see the test estimated NLL (via AIS) learning curves for VCD and AdVIL ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_526_3_36 Given the comparison to PCD in the RBM setting, I am somewhat surprised that AdVIL is so competitive with VCD in the case of the DBM . ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non'] paper quality +iclr20_57_3_0 This paper is aimed at tackling a general issue in NLP: Hard-negative training data (negative but very similar to positive) can easily confuse standard NLP model. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_57_3_1 To solve this problem, the authors first applied distant supervision technique to harvest hard-negative training examples and then transform the original task to a multi-task learning problem by splitting the original labels to positive, hard-negative, and easy-negative examples. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_57_3_2 The authors consider using 3 different objective functions: L1, the original cross entropy loss; L2, capturing the shared features in positive and hard-negative examples as regularizer of L1 by introducing a new label z; L3, a three-class classification objective using softmax. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_57_3_3 This authors evaluted their approach on two tasks: Text Classification and Sequence Labeling. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_57_3_4 This implementation showed improvement of performance on both tasks. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_57_3_5 Strenghts: + the paper proposes a reasonable way to try to improve accuracy by identifying hard-negative examples + the paper is well written , but it would benefit from another round of proofreading for grammar and clarity Weaknesses: - performance of the proposed method highly depends on labels of hard-negative examples ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_57_3_6 The paper lacks insight about a principled way to label such examples, the costs associated with such labeling, and impacts of the labeling quality on accuracy ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_57_3_7 The experiments are not making a convincing case that similar improvements could be obtained on a larger class of problems ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_57_3_8 The objective function L3 is not well justified ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_57_3_9 It would be important to see if the proposed method is also beneficial with the state of the art neural networks on the two applications ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_57_3_10 Table 3 (text classification result) does not list baselines ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_0 While this paper has some interesting experiments ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_720_2_1 I am quite confused about what exactly the author are claiming is the core contribution of their work ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_2 To me the proposed approach does not seem particularly novel and the idea that hierarchy can be useful for multi-task learning is also not new ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_3 While it is possible that I am missing something, I have tried going through the paper a few times and the contribution is not immediately obvious ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_4 The two improvements in section 3.2 seem quite low level and are only applicable to this particular approach to hierarchical RL ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 
'con', 'con', 'con'] paper quality +iclr20_720_2_5 Additionally, it is very much not clear why someone, for example, would select the approach of this paper in comparison to popular paradigms like Option-Critic and Feudal Networks ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_6 "The authors mention that Feudal approaches ""employ different rewards for different levels of the hierarchy rather than optimizing a single objective for the entire model as we do.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_720_2_7 Why reward decomposition at the lower levels is a problem instead of a feature isn't totally clear, but this criticism does not apply to Option-Critic models ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_8 "For Option-Critic models the authors claim that ""Rather than the additional inductive bias of temporal abstraction, we focus on the investigation of composition as type of hierarchy in the context of single and multitask learning while demonstrating the strength of hierarchical composition to lie in domains with strong variation in the objectives such as in multitask domains.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_720_2_9 First of all, I should point out that [1] looked at applying Option-Critic in a many task setting and found both that there was an advantage to hierarchy and an advantage to added depth of hierarchy. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_720_2_10 Additionally, it is well known that Option-Critic approaches (when unregularized) tend to learn options that terminate every step [2]. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_720_2_11 So, if you generically apply Option-Critic, it would in fact be possible to disentangle the inductive bias of hierarchy from the inductive bias of temporal abstraction by using options that always terminate. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_720_2_12 In comparison to past frameworks, the approach of this paper seems less theoretically motivated ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_13 It certainly does not seem justified to me to just assume this framework and disregard past successful approaches even as a comparison ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_14 While the experiments show the value of hierarchy , they do not show the value of this particular method of creating hierarchy ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_15 The feeling I get is that the authors are trying to make their experiments less about what they are proposing in this paper and more about empirical insights about the nature of hierarchy overall ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_16 If this is the case, I feel like the empirical results are not novel enough to create value for the community and too tied to a particular approach to hierarchy which does not align with much of the past work on HRL ['non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_720_2_20 "2] ""When Waiting is not an Option: Learning Options with a Deliberation Cost"" Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_0 The authors propose a method for learning models for discrete events happening in continuous time by modelling the process as a temporal point process. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_1 Instead of learning the conditional intensity for the point process, as is usually the case, the authors instead propose an elegant method based on Normalizing Flows to directly learn the probability distribution of the next time step ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_727_1_2 "To further increase the expressive power of the normalizing flow, they propose using a VAE to learn the underlying input to the ""Flow Module""." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_3 They show by means of extensive experiments on real as well as synthetic data that their approach is able to attain and often surpass state of the art predictive models which rely on parametric modelling of the intensity function ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_727_1_4 The writers have put their contributions in context well and the presentation of the paper itself is very clear ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_727_1_5 "Though the final proof is in the pudding, and the addition of the VAE to model the base distribution yields promising results , the only justification for it in the paper is to create a more ""expressive"" model" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_727_1_6 There are multiple ways of increasing the expressiveness of the underlying distribution: moving from RNNs to GRU or LSTMs, increasing the hierarchical depth of the recurrence by stacking the layers, increasing the size of the hidden state, more layers before the output layer, etc. 
A convincing justification behind using a VAE for the task seems to be missing ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_727_1_7 Also, using the VAE for a predictive task is a little unusual ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_727_1_8 Another, relatively small point which the authors glance over is the matter of efficient training ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_727_1_9 The Neural Hawkes model suffers from slow training because of the inclusion of a sampling step in the likelihood calculation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_10 I believe that since the model proposed by the authors allows easy back-propagation, their model ought to be easy and fast to train as well. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_11 Including the training time for the baselines, as well as the method proposed by the authors, will help settle the point. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_727_1_12 Minor point: - The extension of the method to Marked Temporal Point Processes in the Evaluation section seems out of place, esp. ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_727_1_13 after setting up the expectation that the marks will not be modelled initially, up till footnote 2 on page 7 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_76_2_0 In order to rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, this paper introduces the Teacher-Student framework for kernels. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_76_2_1 In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_76_2_2 Theresults quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on sample numbers. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_76_2_3 The paper is well written , tghe major issue of this paper is the lack of comparison with other previous methods ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_76_2_4 Therefore, the efficacy of the proposed model can not be well demontrated ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_855_3_0 This paper presents an emprical study of how a properly tuned implementation of a model-free RL method can achieve data-efficiency similar to a state-of-the-art model-based method for the Atari domain. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_1 The paper defines r as ratio of network updates to environment interactions to describe model-free and model-based methods, and hypothesizes that model-based methods are more data-efficient because of a higher ratio r. To test this hypothesis, the authors take Rainbow DQN (model-free) and modify it to increase its ratio r to be closer to that SiMPLe (model-based). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_2 Using the modified verison of Rainbow (OTRainbow), the authors replicate an experimental comparison with SiMPLe (Kaiser et al, 2019), showing that Rainbow DQN can be a harder baseline to beat than previously reported (Figure 1). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_3 This paper raises an important point about empirical claims without properly tuned baselines, when comparing model-based to model-free methods, identifying the amount of computation as a hyperparameter to tune for fairer comparisons. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_4 I recommend this paper to be accepted only if the following issues are addressed. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_5 The first is the presentation of the empirical results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_6 In Figure 1, OTRainbow is compared against the reported results in (Kaiser et al, 2019), along with other baselines, when limiting the experience to 100k interactions. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_7 Then, in Figure 2, human normalized scores are reported for varying amounts of experience for the variants of Rainbow, and compared against SiMPLe with 100k interactions, with the claim that the authors couldn't run the method for longer experiences. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_8 Unless a comparison can be made with the same amounts of experience, I don't see how Figure 2 can be interpreted objectively ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_855_3_9 In any case, the results in Figure 1 and the appendix are useful for showing that the baselines used in prior works were not as strong as they could be ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_855_3_10 The second has to do with the interpretation of the results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_11 The paper chooses a single method class of model-based methods to do this comparison, namely dyna-style algorithms that use the model to generate new data. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_12 But models can also be used for value function estimation (Model Based Value Expansion) and reducing gradient variance(using pathwise derivatives). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_13 The paper is written as if the conclusions could be extended to model-based methods in general. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_14 Can we get the same conclusions on a different domain where other model-based methods have been successful; e.g. continuous control tasks? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_855_3_15 A way to improve the paper would be to make it clear from the beginning that these results are about Dyna-style algorithms in the Atari domain . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr20_880_2_0 This paper is extremely interesting and quite surprising ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_880_2_1 In fact, the major claim is that using a cascade of linear layers instead of a single layer can lead to better performance in deep neural networks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_2 As the title reports, expanding layers seems to be the key to obtain extremely interesting results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_3 Moreover, the proposed approach is extremely simple and it is well explained in Section 2 with equations (1) and (2). 
['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_4 This paper can have a tremendous impact in the research in deep networks if results are well explained ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_880_2_5 However, in its present form, it is hard to understand why the claim is correct ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_6 In fact, the model presented in the paper has a major obscure point ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_7 Equation (1) and (2) are extremely clear ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_880_2_8 Without non-linear functions, equations (1) and (2) describe a classical matrix factorization like Principal Component Analysis. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_9 Now, if internal matrices have more dimensions of the rank of the original matrix, the product of the internal matrices is exactly the original matrix. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_10 Whereas, if internal matrices have a number of dimensions lower than the rank of the original matrix, these matrices act as filters on features or feature combination. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_11 Since the authors are using inner matrices with a number of dimensions higher than the number of dimensions of the original matrix, there is no approximation and, then, no selection of features or feature combinations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_12 Hence, without non-linear functions, where is the added value of the method ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_13 How the proposed method can have better results. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_14 There are some possibilities, which have not been explored : 1) the performance improvement derives from the approximation induced by the representation of float or double in the matrices. 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_15 The approximation act as the non-linear layers among linear layers. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_16 2) the real improvement seems to be given by the initialization which has been obtained by using the non-linear counterpart of the expansion; to investigate whether this is the case, the model should be compared with a compact model where the initialization is obtained by using the linear product of the non-linear counterpart of the expanded network ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_17 If this does not lead to the same improvement, there should be a value in the expansion ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_18 3) the small improvement of the expanded network can be given by the different initialization. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_19 In fact, each composing matrix is initialized randomly. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_20 The product of a series of randomly initialized matrices can lead to a matrix that is initialized with a different distribution where, eventually, components are not i.i.d.. To show that this is not relevant, the authors should organize an experiment where the original matrix (in the small network) is initialized with the dot product of the composing matrices ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_21 The training should be done by using the small network ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_880_2_22 If results are significantly different, then the authors can reject the hypothesis. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_880_2_23 If the authors can reject (1), (2) and (3), they should find a plausible explaination why performance improves in their experiments ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_934_1_0 This paper proposed a dual graph representation method to learn the representation of nodes in a graph. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_934_1_1 In particular, it learns the embedding of paired nodes simultaneously for multiple times, and use the mean values as the final representation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_934_1_2 The experimental result demonstrates some improvement over existing methods. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_934_1_3 Overall, the idea is presented clearly and the writing is well structured ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +iclr20_934_1_4 But the novelty is limited ['non', 'con', 'con', 'con', 'con'] paper quality +iclr20_934_1_6 The proposed method is very similar with the unsupervised GraphSAGE , which also optimizes Eq.(7). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_934_1_7 The difference is that the proposed method learns a multi-channel representation and uses the attention technique to aggregate the multi-channel representation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +iclr20_934_1_8 Thus, the novelty is incremental ['non', 'non', 'con', 'con', 'con', 'con'] paper quality +iclr20_934_1_10 Since the proposed method uses the multi-channel representation, how to set the number of channels pseudo-formula ? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +iclr20_934_1_11 How does this parameter affect the performance ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +iclr20_934_1_13 Some unsupervised network embedding baseline methods, such as DeepWalk and Node2Vec, should be included into the experiment section . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_13_2_0 This paper presents a method for the instrument recognition task from laparoscopic images, using two generators and two discriminators to generate images which are then presented to the network to classify surgical gestures. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_1 The method introduces a self attention mechanism using weakly supervised labels, thereby avoiding the need to use more exhaustive annotations such as segmentations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_2 This is an important advantage for leveraging hundreds of recorded cases without having available segmentations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_3 Overall a clearly written paper, with nice visual results ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_13_2_4 Mainly an incremental paper, proposing a combination of well established GAN-based networks to accomplish a classification task ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_13_2_5 The different loss functions are all based on previously proposed approaches and exploited in this case for this dual background/foreground problem. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_6 The presented evaluation is limited , with training done on only 8 datasets, which in this particular case is a limitation due to the importance of presenting the networks with different backgrounds from various surgical sites and perspectives during surgery. ['con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_7 Indeed the critical factor is not to capture the instrument's appearance but rather model how variable the anatomical environment is. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_8 A more complete evaluation with different surgical scenarios would be needed to demonstrate this feature. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_9 Quantitative assessment is fairly limited, and yielding underwhelming results compared to individual networks (ex. CycleGAN). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_13_2_10 It would be interesting to have the author's point of you on the less than optimal results, and how they plan to improve it. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_0 The authors present a deep learning method for fundus image analysis based on a fully convolutional neural network architecture trained with an adversarial loss. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_1 The method allows to detect a series of relevant anatomical/pathological structures in fundus pictures (such as the retinal vessels, the optic disc, hemorrhages, microaneurysms and soft/hard exudates). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_2 This is important when processing these images, where anatomical and pathological structures usually share similar visual properties and lead to false positive detections (e.g. red lesions and vessels, or bright lesions and the optic disc). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_3 The adversarial loss allows to leverage complementary data sets that do not have all the regions of interest segmented. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_4 Thus, it is not necessary to have all the classes annotated in all the images but to have the labels at least in some of them. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_5 The contribution is original in the sense that complementing data sets is a really challenging task, difficult to address with current available solutions ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_14_2_6 The strategy proposed to tackle this issue is not novel as adversarial losses have been used before for image segmentations. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_7 However, it is the first time that it is applied for complementing data sets and have some interesting modifications that certainly ensures novelty in the proposal ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_14_2_8 The paper is well written and organized, with minor details to address in this matter (see CONS). 
['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_9 The clear contribution of the article is, in my opinion, the ability to exploit complementary information from different data sets. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_10 Taking this into account, I would suggest the authors to incorporate at least one paragraph in Related works (Section 2) describing the current existing approaches to do that. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_11 It is not clear from the explanation in Section 3.1 how the authors deal with the differences in resolution between DRIVE and IDRID data sets ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_12 "It would be interesting to know that aspect, as it is crucial to allow the network to learn to ""transfer"" its own ability for detecting a new region from one data set to another." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_13 The segmentation architecture does not use batch normalization. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_14 Is there a reason for not using it? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_15 The vessel segmentation performance is evaluated on the DRIVE data set. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_16 Despite the fact that this set has been the standard for evaluating blood vessel segmentation algorithms since 2004, the resolution of the images is extremelly different from the current ones ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_17 There are other existing data sets such as HRF (pseudo-url), CHASEDB1 (pseudo-url) and DR HAGIS (pseudo-url) with higher resolution images that are more representative of current imaging devices. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_18 I would suggest to incorporate results on at least one of these data sets to better understand the behavior of the algorithm on these images. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_19 The area under the ROC curve is not a proper metric for evaluating a vessel segmentation algorithm due to the class imbalance between the TP and TN classes (vessels vs. background ratio is around 12% in fundus pictures) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_20 I would suggest to include the F1-score and the area under the Precision/Recall curve, instead , which have been used already in other studies (see [1] and [2], for example, or Orlando et al. 2017 in the submitted draft). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_21 The method in [2] should be included in the comparison of vessel segmentation algorithms ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_22 To the best of my knowledge, it has the highest performance in the DRIVE data set compared to several other techniques. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_23 It would also be interesting to analyze the differences in a qualitative way , as in Fig. 3 (b). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_24 The authors of [2] provided a website with all the results on the DRIVE database (pseudo-url), so their segmentations could be taken from there. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_25 The results for vessel segmentation in IDRID images do not look as accurate as those in the DRIVE data set ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_26 However, since IDRID does not have vessel annotations, it is not possible to quantify the performance there. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_27 It would be interesting to simulate such an experiment by taking an additional data set with vessel annotations (e.g., some of those that I suggested before, HRF, CHASEDB1 or DR HAGIS) and evaluate the performance there, without using any of their images for training ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_28 That would be equivalent to assume that the new data set(s) does (do) not contain the annotations, and will allow to quantify the performance there. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_29 Since the HRF data set contains images from normal, glaucomatous and diabetic retinopathy patients, I would suggest to use that one. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_30 A similar experiment can be made using other data sets with red/bright lesions (e.g. e-ophtha, pseudo-url) or optic disc annotations (e.g. REFUGE database, pseudo-url). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_31 I think this is a key experiment, really necessary to validate if the method is performing well or not ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_32 I would certainly accept the paper is this experiment were included and the results were convincing. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_33 It is not clear if the values for the existing methods in Table 2 correspond to the winning teams of the IDRID challenge ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_34 Please, clarify that point in the text. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_35 The abstract should be improved ['con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_36 The first 10 lines contains too much wording for a statement that should be much easier to explain ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_14_2_37 I would suggest reorganizing these first line by following something like: (i) Despite the fact that there are several available data sets of fundus pictures, none of them contains labels for all the structures of interest for retinal image analysis, either anatomical or pathological. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_38 ii) Learning to leverage the information of complementary data sets is a challenging task. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_39 "iii) Explanation of the method... [1] Zhao, Yitian, et al. ""Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_42 Imaging 34.9 (2015): 1797-1807. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_43 "2] Maninis, Kevis-Kokitsi, et al. ""Deep retinal image understanding.""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_44 International Conference on Medical Image Computing and Computer-Assisted Intervention. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_14_2_45 Springer, Cham, 2016. 
['non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_25_3_0 The paper is well written and describes an interesting and relatively novel approach to solving multi-class classification in a clinical domain where overlap between classes is frequently a possibility ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_25_3_1 The approach is clearly explained and the results presented are sufficient to give merit to the idea ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_25_3_2 The authors could spend a little more effort on explaining the intuition behind conditional versus unconditional labels and the advantages of each ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_25_3_3 Only a single (large) dataset is used, while there are many publicly available datasets that could be included for additional experiments ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_25_3_4 No public implementation of the method is provided, which would be a nice extra ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_36_2_0 The paper is well-written, and easy to read and understand ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_36_2_1 The authors consider the problem of nuclei detection, and propose to decompose the task into three subtasks, trying to predict the confidence map, localization map and a weight map. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_36_2_2 I think the effort of disentangling a complicated task into simpler ones makes sense , and the experiments have shown promising results ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_36_2_3 In my view, the proposed methods are not completely novel , I think the authors are suggested to cite them, just name a few. ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_36_2_4 "Predicting the confidence map with fully convolutional networks was initially done by : ""Microscopy Cell Counting with Fully Convolutional Regression Networks"", W. Xie, J.A." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_36_2_5 Noble, A. Zisserman, In MICCAI 2015 Workshop. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_36_2_6 "The proposed localisation map is actually the result of distance transform, and has been initially used in : ""Counting in The Wild"", C. Arteta, V. Lempitsky, A. Zisserman, In ECCV 2016." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_0 This paper attempt to do nuclei segmentation in a weakly supervised fashion, using point annotations. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_1 The paper is very well written and easy to follow ; figure 1 does an excellent job at summarizing the method ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_40_3_2 The idea is to generate two labels maps from the points: a Voronoi partitioning for the first one, and a clustering between foreground, background and neutral classes for the second. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_3 Those maps are used for training with a partial cross-entropy. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_4 The trained network is then fine tuned with a direct CRF loss, as in Tang et al. Evaluation is performed on two datasets in several configurations (with and without CRF loss, and variation on the labels used) ; showing the effects of the different parts of the method. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_5 The best combination (both labels + CRF) are close or on par with full supervision. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_6 The authors also compare the annotation time between points, bounding boxes and full supervision, which really highlight the impact of their method (x10 speedup) ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_40_3_7 Few questions: - Since the method is quite simple and elegant , I expect it could be adapted to other tasks. ['non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_8 Do you have any ideas in mind ? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_9 "How resilient is the method to ""forgotten"" nuclei ; i.e. 
nucleus without a point in the labels ?" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_40_3_10 Could it be extended to work with only a fraction of the nuclei annotated ? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_40_3_11 Is using a pre-trained network really helping ? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_40_3_12 Since there is so much dissimilarity between ImageNet and the target domains, I expect it to be mostly a glorified edge detector. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_13 It is improving the final performances, speeding up convergence, both ? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_14 Minor improvements for the camera ready version, in no particular order: Tang et al. 2018 was actually published at ECCV 2018, the bibliographic entry should be updated. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_15 Section 2.3 should make the differences (if any) with Tang et al. explicit ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_40_3_16 Those three papers should be included in the state-of-the-art section : - Constrained convolutional neural networks for weakly supervised segmentation, Pathak et al., ICCV 2015 - DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks, Rajchl et al., TMI, 2016 - Constrained-CNN losses for weakly supervised segmentation, Kervadec et al., MIDL 2018 Since the AJI and object-level Dice are not standard and introduced in other papers, it would be easier to put their formulation back in the paper, so the reader does not have to go look for it. ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_40_3_17 Replacing (a), (b), ... by Image, ground truth, ... in figures 2, 3, and 4 would improve readability. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_41_1_0 To investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_41_1_1 To perform image-to-image translation from multi-modal CT perfusion maps to di usion weighted MR outputs To make use of generated MR data inputs to perform ischemic stroke lesion segmentation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_41_1_2 There is no detail on qualitatively visual comparison of generated MR to ground truth ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_41_1_3 The authors had better compare segmentation result between CTP with orginal MRI and CTP with CGAN MRI ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_41_1_4 The gain using CGAN MRI looks marginal, which would be better to apply ablation study . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_49_1_0 This paper presents a clustering method using deep autoencoder for aortic value shape clustering. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_1 It is the first work to identify aortic value prosthesis types using a general representation learning technique. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_2 This work has a remarkable clinical value ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_49_1_3 Clustering of aortic value prosthesis shapes has a high contribution to personalized medicine ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_49_1_4 The entire workflow is quite clear and complete ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_49_1_5 The introduction part is a little misleading for me ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_6 The authors emphasize that the objective is to cluster the geometric shape of leaflets, and it is hard to represent the shapes in high-dimensional space (last paragraph of introduction). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_7 I'm concerned that this would make the readers misunderstand the data are shape-models (point cloud dataset) before the description of dataset in Sec. 2. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_8 One major concern is whether the results are reliable : 1. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non'] paper quality +midl19_49_1_9 The experiments shown in Table 1 compare several different network settings. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_10 This kind of vertical comparison is insufficient to support the claims made in the study ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_11 Please compare to other representation learning methods such as sparse coding (e.g. spherical K-means, dictionary learning), dimension reduction (e.g. PCA, t-sne). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_13 This study did not give a gold-standard for shape clustering (though it could be difficult). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_14 The experiments measure the recon accuracy. ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_15 However, recon accuracy highly depends on decoder network. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_16 It is not convincing to claim that the clustering is correct since even a noise can be decoded into a normal image ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_17 In the last paragraph of the introduction, authors say 'it is hard to define a feasible metric describing the similarity of the valve shape in general.'. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_18 However, authors use Jaccard coef. ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_19 and Hausdorff distance to measure the recon accuracy between original image and reconstructed image. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_20 It is a self-contradictory statement. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_21 other comments: - The authors use 2D images to represent leaflet shapes, I'm concerned whether 2D photograph is precise enough ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_22 3D scanner such as CT, MRI, optical scanner could be more suitable for this work ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_23 Though this is not the issue to be considered in this work. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_49_1_24 The paper is not well organized ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_25 Details of training should be more clearly written ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_26 The hyper-parameters of autoencoder and the recon decoder should be more clearly stated for reproducibility ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_49_1_27 All architectures listed in Table 1 should be stated clearly in experiments section not only in method section . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_51_1_0 The paper presents an approach to aid interpretation of pathology images coming from confocal microscopes (CM images). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_1 The clinical value of CM images has been highlighted in previous work, but although effective towards the goal of detecting the presence of cancer, these images are hard to interpret by humans. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_2 The authors propose to use a cycle-GAN to shift the distribution of CM images towards more standard H&E images which are easier to interpret. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_3 They present an architecture making use of two network, a de-noise/de-speckle network (trained independently on one of the two types of CM images used in this work) followed by a generative network (cycle gan). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_4 The general organization of the paper is sound This paper tackles a problem that is relevant to the whole medical community ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_1_5 It has the potential to improve pathology and cancer diagnosis by making it simpler and quicker The results of this work look visually convincing ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_1_6 Both the de-speckle network and the GAN appear to deliver very good results, at least at first glance ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_1_7 The quantitative results delivered by the de-speckling images, which seem to be computed using simulated realization of random speckle noise, look also convincing ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_1_8 I agree with the authors statement in the end of the paper where they say they could train both GAN and de-speckle network end to end ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_9 I think this joint training might result in even better outcomes. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_10 The study has potential and could have interesting applications in clinical settings ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_1_11 One issue, from a purely organizational standpoint, is the fact that information about previous work is either omitted or scattered around the text ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_12 I understand that the available space is limited and therefore it's difficult to bring in the paper all the information that would be necessary, but the introduction should be extended to include previous work both in terms of DL and medical research ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_13 This paper still represent a niche application of a more general DL technique that has been already used for a large number of similar applications ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_14 The contribution is therefore incremental, building on top of well-known techniques ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_15 "After the publication at MICCAI 2019 of the work ""Distribution Matching Losses Can Hallucinate Features in Medical Image Translation"" and similar other works, it has started becoming apparent that the simple visual similarity between samples generated by a GAN and true samples from a specific distribution doesn't ensure that diagnostic value is kept." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_16 This doesn't mean that cycle-GAN type of techniques are not suited for medical imaging since they might wipe out their diagnostic value, but it means that every study around this topic needs to prove that the diagnostic value is indeed kept! 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_17 Unfortunately the authors didn't report indications in this sense in their paper ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_18 "The main contribution of the paper is scarcely justified by the statement ""...they confirmed that the images were similar to those in routine""" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_19 I feel it would have been extremely interesting to evaluate the performance of those same clinicians (and others) diagnosing cancer using both H&E stained image and CM images of the same patient (or patient distributions) vs a control group. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_1_20 A lengthy study, I agree, but a necessity in light of other recent works highlighting how dangerous is to use GANs for this kind of tasks ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_21 The choice de-speckle network architecture is somewhat not sound, with the multiplicative residual connection near the outputs of the network and the median filtering operation ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_22 Is there some reference for multiplicative residual connections ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_23 How do we know that the network is learning 1/F (inverse of speckle noise) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_24 Can we prove that at least visually ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_25 Is the math right ['con', 'con', 'con', 'con'] paper quality +midl19_51_1_26 It is necessary to prove that the generated images retain their important diagnostic value ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_1_27 It is necessary to run a study to confirm that in a similar way that CM images were confirmed having diagnostic value and could therefore be used instead of H&E stained images ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_0 The authors combine DL and computer vision methods to digitally stain 
confocal microscopy images to generate H&E like images. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_2_1 The aim for this work is to provide an image that is familiar to the pathologists such that it will remove the need for specific training for CM interpretation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_51_2_2 Pros: 1- If this approach is accepted by the community, it could remove the need for additional training to the pathologists ['non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_2_3 This will potentially bring us closer to rapid evaluation of lesions during surgical operation using fast CM ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_2_4 2- Two step approach combining despeckling and generative networks are reasonable for the task ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_51_2_5 3- Qualitative stained image results look promising Cons: 1- Median filter is used after the despeckling network, however it is not clear the added benefit of using median filter in despeckling process ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_6 Error measures presented in Table 1 needs to help readers to identify the benefit of the proposed neural network ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_7 The authors should validate their selection of two step approach (NN + filter) compared to an end-to-end FCN (with an additional loss like TV) for the despeckling network ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_8 2- It is not clear why the histology images were used for denoising network training ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_9 Even though it is mentioned by the authors that these images resemble to noisy RCM, this should be either referenced or shown ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_10 3- Please provide an evidence to support the positive effect of choosing an augmentation of size 512x512 after 50 epochs in Section 3.2. 
['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non'] paper quality +midl19_51_2_11 4- The authors conclude that the despeckling NN is crucial to obtain realistic images, however, the results presented in Figures 8 and 9 do not provide enough information to support this conclusion ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_12 For example, it is not clear what are the non-desirable artifacts, where are the eliminated nuclei and why the network has a harder time to learn ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_13 The authors should provide support to these conclusions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_14 For instance, Figure 9 needs to use the same images presented in Figure 8 to provide enough support for the need of despeckling network ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_15 In addition, images representing eliminated nuclei using noisy RCM images should be presented with their counterpart using despeckling network ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_16 5- Obtaining quantitative comparison results for staining accuracy is not feasible due to the reasons clearly defined by the authors ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_17 It is necessary to provide more qualitative information regarding the staining results in addition to confirmation from two expert pathologists ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_18 Please provide results of the inter-rater reliability of two pathologists using a point scale on the quality of image digital staining ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_19 6- I suggest the authors to use train validation and test split or a cross-validation, since the results presented here are from a validation set without a test set ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_51_2_20 This could potentially add a bias to the results presented here. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_0 This paper proposes to use a CNN architecture to reconstruct MR Fingerprinting parametric maps. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_1 The authors test their algorithm on a dataset of 95 subjects for neuromuscular disease. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_2 They compare their method with two state of the art deep learning methods and illustrate superior performance on NRMSE, PSNR, SSIM and R2 metrics. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_3 Moreover, they have done some ablation studies to show the importance of the receptive field and temporal frames for MRF reconstruction. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_4 I believe the experiments are thorough and well designed to back the claims of the paper ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_52_2_5 The utilized network architecture can be better explained with an emphasis on specific design choices ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_6 1- This paper is well written and the message is clear to the reader ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_52_2_7 2- The extensive tests on a real dataset instead of phantom cases is definitely a strength of the paper ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_52_2_8 3- The description of the network architecture is not clear for the reader ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_9 How does the temporal and spatial blocks work ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_10 They seem to work in different dimensions of the signals. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_11 Even though the authors explain the details in the text I believe an additional illustration in each block (maybe in Appendix) might be helpful to reproduce the method in the paper for further research ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_12 4- How does the specifics of the network architecture influence the performance ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_13 Why do the authors reuse the input of a temporal block to its output and how does this influence the performance ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_14 5- How is the complex component of the signal concatenated into a channel ? 
['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_52_2_15 Does the order of concatenation influence the results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_16 Did the authors considered to utilize complex valued networks for this task ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_17 6- The quantitative results are yielded using multiple segmentation masks due to MR physics related concerns ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_18 Are the results on Table 1 heavily dependent on use of these masks ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_19 Are the results on the entire parametric maps in line with the current results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_20 7- What is the number of parameters required for each method in Table 1? ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_21 The reason for high performance of the proposed method can be explained with the required number of parameters to train the method. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_22 Please elaborate on this. ['non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_23 8-The lack of scalability and the requirement of computational time is highlighted in the introduction and abstract. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_24 However, no quantitative comparisons are provided ['non', 'non', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_52_2_25 I believe the computational time can be added for each method in Table 1. ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_26 Minor suggestions a- Some recent work on using the complex-valued neural networks (Virtue Patrick et al., arxiv), geometry of deep learning (Golbabaee et al., arxiv)and recurrent neural networks (Oksuz et al.,arxiv) for MRF dictionary matching can be mentioned in the literature review with their strengths and weakneses. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_27 b- Please explain (a.u.) ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_52_2_28 term in Fig.2. ['non', 'non', 'non', 'non'] paper quality +midl19_52_2_29 c- Quantitative results can be mentioned in the abstract . 
['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non'] paper quality +midl19_56_3_0 Summary: Authors present AnatomyGen, a CNN-based approach for mapping from low-dimensional anatomical landmark coordinates to a dense voxel representation and back, via separately trained decoder and encoder networks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_1 The decoder network is made possible by a newly proposed architecture that is based on inception-like transpose convolutional blocks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_2 The paper is written clearly ['pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_56_3_3 Methods, materials and validation are of a sufficient quality ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl19_56_3_4 There are certain original aspects in this work (latent en-/decoding, inception-based decoder network, latent space interpolation, generalization to previously unseen shapes etc.), but the work may not be as original as authors suggest, since they may not be aware of a very similar work (see Cons), where some of the discussed concepts have already been proposed and explored ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_5 Authors explicitly that the work is not intended for segmentation, but many previous shape modeling works (including SSMs) were used as regularization in segmentation ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_6 Authors could comment on how their model could be incorporated into (e.g. deep) segmentation approaches, because I do not see an immediate way to do that without requiring the (precise) image-based localization of mandible landmarks in a test volume. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_7 "I would recommend weakening or at least toning down certain ""marketing"" claims like ""3 times finer than the highest resolution ever investigated in the domain of voxel-based shape generation"", or ""the finest resolution ever achieved among voxel-based models in computer graphics""." 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_8 First, it is not fully clear where this number 3 comes from , and second, the quality of the work speaks for itself ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_9 Further, there is always the chance that authors are not aware of every piece of related literature (in all of computer graphics), as it might be the case here. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_10 "Authors claim to introduce many concepts for the first time , such as the ""first demonstration that a deep generative architecture can generate high fidelity complex human anatomies in a [...] voxel space [from low-dimensional latents]""." ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_11 However, I am aware of at least one work where such concepts have been proposed and explored already ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_12 CNN-based shape modeling and latent space discovery and was realized for heart ventricle shapes with an auto-encoder, and integrated into Anatomically Constrained Neural Networks (ACNNs) [1]. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_13 Their voxel resolution is only sligthly smaller than in this work (120x120x40), with a similar latent dimensionality (64D, here: 3*29=87). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_14 "Smooth shape interpolation by traversal of the latent space was also demonstrated, and some of their latents also corresponded to reasonable variations in anatomical shape, without being ""restricted"" to statistical modes of variation as discussed here." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_15 Compared to the proposed work, where latents represent clinically relevant mandible landmarks, an auto-encoder approach as in ACNN is more general: relevant landmarks as in the mandible cannot be identified for arbitrary anatomies , and a separate training of decoder and decoder as proposed here crucially depends on a semantically meaningful latent space with a supervised mapping to the dense representation (e.g. hand-labeled landmarks vs. voxel labelmaps). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_16 In contrast, ACNN auto-encoders train their encoder and decoder in conjunction. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_17 How do authors suggest to apply their approach to anatomies where it is impossible (in terms of feasibility and manual effort) to place a sufficiently large number of unique landmarks on the anatomy (e.g. smooth shapes, such as left ventricle in ACNN)? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_18 "Authors suggest that their solution ""is not constrained by statistical modes of variation"", as e.g. by PCA-based SSM methods." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_56_3_19 "While I agree that the linear latent space assumption of PCA is too simplistic and the global effect of PCA latents on the whole shape often undesirable, the ordering of latents according to ""percent of variance explained"" is actually desirable in terms of interpretability" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_56_3_20 1] Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, et al. Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_0 Transfer learning and dealing with small datasets is an important area of research - The paper proposes a novel method, enabling pretraining on several different tasks instead of only one dataset (e.g. ImageNet) like done most of the times - Results show clear performance increase on small datasets - Proper experiment setup and validation - Clearly written and comprehensible - Code is openly available - Little comparison to other state-of-the-art methods for transfer learning ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_1 Only compared to IMM which is very similar to the proposed T-IMM ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_2 Comparison to (unsupervised) domain adaptation methods would also have been interesting (e.g. gradient reversal (Ganin et al. 2014, Kamnitsas et al. 2016)). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_3 Method only evaluated on one dataset (BRATS). ['con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_4 "Often new methods are manually ""overfitted"" to one dataset." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_5 When used on another dataset they do not show gains anymore. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_6 The medical decathlon (pseudo-url) would have provided easy access to more datasets and tasks. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_7 Minor: - Testing for statistical significance is only shown in the appendix ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_8 "It shows that for ""100%"" T-IMM actually is not significantly better than most of the other initialization strategies" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_9 This should also be shown in table 2 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_10 "The way table 2 is presented at the moment it seems like T-IMM is better than all methods also for ""100%""" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_11 But the higher performance is not significant ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_12 "How is training till ""convergence"" (section 4.3) defined?" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_13 Not 100% clear if the IMM method used in the experiments is the method described in section 3.2 (alpha=1/T) ? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_14 "in section 5: ""Table 2 shows, that both IMM and T-IMM...""." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl19_59_3_15 I guess this should actually be table 4 ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl19_59_3_16 Figure 1 could have been a bit more clear ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_0 Overall, the quality of the paper is fair ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_100_1_1 It is well-written, well-structured and easy to read for someone without knowledge on IVF and ART ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_100_1_2 The method is compared to five embryologists and results clearly shows that learning directly from the clinical outcome outperfoms embryologists by a large margin ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_100_1_3 The main weakness of the paper is in the methods section ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_4 The methodological novelty seems insignificant ['con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_5 Plenty of works combine autoencoders with LSTMs ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_6 I suggest you either argue for the novelty or remove the claim from the paper ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_7 The methods section lacks details for reproducing the work ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_8 These must be provided in a supplement to allow reproducability ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_9 If you want your work applied in clinics, this is much more important than improving the results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_10 In the methods section you describe training an autoencoder on unlabeled data, then training an LSTM using autoencoder embeding and embryologist grades. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_11 As I read it, UBar is the same LSTM just trained on clinical outcomes. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_12 You do not report results for the embryologist trained LSTM , so what do you use this LSTM for? 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_13 If you dont use it, remove it from the section ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_14 "If you do use it, you cannot argue that you learn from ""a small number of labeled samples"" as done in the final paragraph of the paper" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_15 In the discussion you almost exclusively focus on the work by Tran et al and why comparing with that work is unfair ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_16 Instead, you should have made the comparison and highlighted the differences clearly ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_17 What is interesting is not who is better, but how, and how well, the task can be solved ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_18 You argue that including embryologists decisions in the prediction is an easier task. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_19 I am not convinced ['con', 'con', 'con', 'con'] paper quality +midl20_100_1_20 In your case, you train on data that has already been filtered to only include positive decisions by embryologists, otherwise the eggs would not have been implanted. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_21 It is not obvious how to best get around this issue, since the first embryologist screening probably has false negatives, but you need to take it into account ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_22 Your statement about AUCs and training sizes is either obviously correct or obviously wrong, depending on interpretation. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_23 The only way training size can influence AUC is by influencing the training of the model. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_24 It is quite well known that more training data, in general, results in improved performance of networks. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_25 This holds for all the popular performance measures ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_26 Having said that, if the model predictions does not change, then AUC does not change. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_27 Maybe you meant the size of the test set? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_28 In that case, it is the ratio of positive/negative that is relevant. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_100_1_29 Regardless, trying to paint others work negatively by arguments to some general issue with established performance metrics is disingenuous ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_30 If there is an issue with Tran et al you should state it clearly, if not, you should accept their results ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_31 A mior nitpick: You define all abbreviations except for UBar ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_32 It is fine that you give your method a name (although I personally dislike it), but a bit weird not to explain it ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_33 Finally, I would very much have liked to to see a frame from one of the videos ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_100_1_34 I am aware of the page limitation, so maybe MIDL should allow an extra page solely for an image of the raw data. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_0 In this paper, the authors aimed to improve the representations learned by Neural Image Compression (NIC) algorithms when applied to Whole Slide Images (WSI) for pathology analysis. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_1 The authors extended unsupervised NIC to a multi-task supervised system. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_2 A hard-parameters sharing network was presented with a shared, compressed representation branching out in task-specific networks. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_3 The authors evaluated the quality of these representations on multiple tasks, illustrating the added benefit of their multi-task system and the utility of using multiple tasks to supervised the feature extraction. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_4 This is a very well written paper ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_5 The introduction and description of the state of the art, in addition to the main limitations of popular algorithms is very clear and interesting to read ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_6 The experiments are clearly explained and the results are well presented. ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_7 The decision to supervised the feature extraction in a multi-task setting is good and makes sense ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_8 Multi-task learning can extract a shared representation that is generalisable and this is evidenced in the results in the TUPAC16 set. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_108_3_9 Good and convincing results when compared to competing methods * Strong validation * It is a shame that the Kaplan-Meier estimator was not repeated for all baselines to further illustrate the strength of the multi-task features * There are many more TUPAC16 results [pseudo-url. 
['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_108_3_10 pseudo-url] yet the presented method is benchmarked only against 3 ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_108_3_11 It would be helpful to put the results in context with all other methods such as automatic and semi-automatic methods ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_108_3_12 Moreover, is there is a reason you did not validate on all TUPAC16 tasks The is well written paper with a clear description of the state of the art and the reasoning behind the presented method ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_13 The method is well explained and the validation is strong with convincing results versus state of the art methods. ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_108_3_14 The work also raises some interesting points regarding multi-task training for pathology and with further work could be a good paper ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_119_2_0 This paper proposes to add a self-expressiveness regularization term to learn a union of subspaces for image-to-image translation in medical domain. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_119_2_1 It's shown that such self-expressiveness constraint can help to preserve subtle structures during image translation, which is critical for medical tasks, such as plaque detection. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_119_2_2 The motivation and methodology are well explained with proper reference works ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_119_2_3 Improvement on plaque detection is signification ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_119_2_4 Comment: It would nice if the authors could also show some visualisations of the latent space, with comparisons between with and without the constraint ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_119_2_5 This will provide more insights or explanations. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_127_4_0 The authors present the AF-Net, which is a U-net with three adjustments. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_127_4_1 The authors show that the AF-Net is more robust compared to the U-Net and M-Net for AFV measurement. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_127_4_2 "Main problem: The authors mention ""the AF are sonographer dependent, and its accuracy depends on the sonographer's experience" ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_127_4_3 "This paper aims to solve the above problems by..."", but the authors use 2D ultrasound images made by a sonographer, so the system therefore does not solve these problems" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_127_4_4 If a sonographer is able to acquire these images, they are also able to perform these measurements ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_127_4_5 Such a system might speed up this process ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_127_4_6 Note: the abstract is not included in the PDF ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_127_4_7 The authors also do not include a Section with a discussion ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_127_4_8 The boxplot shows that six outliers are resolved by the AF-Net, so it can be debated if that is clinically relevant to reduce (6/435=)1.4% of the errors ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_135_3_0 This paper proposes a pulmonary nodule malignancy classification based on the temporal evolution of 3D CT scans analyzed by 3D CNNs. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_135_3_1 It is an interesting idea and the quality is overall rather good for an abstract paper ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_135_3_2 Some points to address are listed in the following: The early stopping is not clear ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_135_3_3 Specify that it is on the validation set if so, and clarify these points: number of epochs was set to 150, early stopping to 10 epochs Why is this clipping used? 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_135_3_4 It is not clear whether T1 and T2 is available for all cases (mostly) In Table 1, bold results are not always the best, this is very misleading ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_135_3_5 It is strange that the T1, T2 generalize well to the validation set but not to the test ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_135_3_6 Can you comment? ['non', 'non', 'non', 'non'] paper quality +midl20_135_3_7 obtained an F1-score of 0.68 -> 0.686? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_0 The authors propose a framework to utilize one model under different acquisition context scenarios. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_1 A novel dynamic weight prediction model is proposed to learn to predict the kernel weights for each convolution based on different context settings. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_2 Experiments show that the proposed method outperforms the model trained on the context-agnostic setting and acquires similar results to models trained by context-specific settings.1). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_3 The idea of learning convolution weights for different input image quality is novel ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_56_4_5 The method part is well-written and easy to understand ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_56_4_7 It conducts extensive experiments for three different settings and the results demonstrate the effectiveness of the proposed method .1). ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non'] paper quality +midl20_56_4_8 Opposite to the Method part, it's hard to read the abstract and introduction ['non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_56_4_9 Some typo problems lie here ['con', 'con', 'con', 'con', 'con'] paper quality +midl20_56_4_11 It seems that the DWP need to generate a specific weight each time. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_12 The authors do not compare the inference speed of the proposed method with others ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_56_4_14 In Table 3., the result of the proposed method is slightly higher than the CSM. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_15 There can be more discussion here.The authors propose a framework to utilize one model under different acquisition context scenarios. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_56_4_16 The method is novel with extensive experiments ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_56_4_17 Results show the effectiveness of the proposed method ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_56_4_18 But the writing needs to be improved ['non', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_56_4_19 Therefore I recommend the weak accept. ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_70_4_0 This paper presents a multi-label classification framework based on deep convolutional neural networks (CNNs) for diagnosing the presence of 14 common thoracic diseases and observations in X-rays images. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_70_4_1 The novelty of the proposed framework is to take the label structure into account and to learn label dependencies, based on the idea of conditional learning in (Chen et al., 2019) and the lung disease hierarchy of the CheXpert dataset (Irvin and al., 2019). ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_70_4_2 The method is then shown to significantly outperform the state-of-the-art methods of (Irvin and al., 2019; Allaouzi and Ahmed, 2019). 
['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_70_4_3 The paper reads well and the methodology seems to be interesting ['pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_70_4_4 I only regret the fact that this is a short paper , and there is therefore not enough space for a more formal description and discussion of the methodology ['non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_71_1_0 The authors proposed a 4D encoder-decoder CNN with convolutional recurrent gate units to learn multiple sclerosis (MS) lesion activity maps using 3D volumes from 2 time points. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_71_1_1 The proposed architecture connects the encoder and decoder with GRU to incorporate temporal information. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_71_1_2 It's compared to an earlier method which uses a 3D network and time-point concatenation and reports improvement in Dice scores, false positive rates and true positive rate. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_71_1_3 The improvement gained by the proposed method validates the effectiveness of recurrent units, and the most significant gain is from the false positive rates. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_71_1_4 Meanwhile, a few clarifications may be necessary: 1) in term of runtime, does the addition of GRUs take much more training time and memory comparing to the concatenation of 3D volumes? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_71_1_5 2) what is the dimension of input, is it W D or H W D$ ? ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_71_1_6 If it's the latter one, is the convolution done with a 4D filter ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_71_1_7 3) more details about the convGRU may be useful, for example its architecture. 
['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_71_1_8 Overall, the problem the paper tackles is critical, and the proposed network component is effective to some extent ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_71_1_9 The conclusion is more like a validation for the usefulness of the temporal information, while technical novelty may not be very sufficient in this case ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_77_4_0 This paper evaluates 5 different models for motion tracking in 4D OCT. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_1 The models are variants of that proposed in Gessert et al (2019), which is here extended in different ways to perform motion forecasting/prediction using a sequence of OCT volumes, rather than motion estimation between 2 OCT volumes. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_2 On the positive side, the extension of the Gessert model to motion forecasting seems like a useful one ['non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_77_4_3 The methods employed seem reasonable and quantitative evaluation is performed to compare them ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_77_4_4 The discussion of the results reveals findings that may well be of interest to others ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_77_4_5 However, one weakness of the paper was that the details of the experimental setup for data generation were not clear without following up the Gessert et al (2019) reference ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_77_4_6 Was the setup the same as in Gessert et al (2019), i.e. with a robot moving the object and mirrors moving the OCT FOV? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_7 Please modify the paper to make this clear. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_8 Also, can the authors comment on what the accuracy requirement is for motion tracking in OCT? 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_9 Other specific suggestions: Section 2: region of interest (ROI) performing motions does not make sense to me ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_77_4_10 Maybe get rid of performing motions? ['non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_11 Section 2: In description of n-Path-CNN3D, extent should be extend Section 2 , Dataset: For data generation, we consider various smooth curved trajectories with different motion magnitudes this is a bit vague , can you provide more information? ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_12 How were these trajectories formed? ['non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_13 How big were the ROIs? ['non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_77_4_14 Section 3: combing should be combining ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_0 The key idea in the paper is to use functional prior that is completely uncertain about prediction of any class. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_85_3_1 To achieve this , the idea of introducing Dirichlet distribution after neural network is used from Evidential Deep Learning (EDL) paper. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_85_3_2 From table 1, it is clear that ECE is much lower for the proposed method ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_85_3_3 However, I have following concerns: 1. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_85_3_4 It is not clear why calibration is reported and not simple measures of uncertainty like variance or entropy ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_5 Also, I would be convinced that the variance would increase for out of distribution test samples because you used a prior that enforced uncertainty of all labels ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_6 Now, it is difficult to connect use of prior and improvement in ECE. 
['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_8 What is the experimental setup ['con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_9 Did you train on some other dataset and test on skin lesion dataset ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_85_3_11 "Last line of section 1: ""it can distinguish distributional versus data uncertainties""." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_85_3_13 Overall, the idea is fine . ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'non'] paper quality +midl20_90_2_0 In this work, the authors purposed a new deep neural network architecture for detecting injuries/abnormalities in the knee. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_90_2_1 The main contribution of the work was adding a normalization step to the network, and learning the affine transformation parameters during the training. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_90_2_2 The normalization was followed by a BlurPool layer to solve the shift variance. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_90_2_3 The paper is written very well , the implementation details are provided to help reproducing the results ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_90_2_4 The method was tested on two different datasets, which is impressive ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_90_2_5 The results of the model was compared also to the state of the art.From the following sentence, I understand that for each pathology, a different model was trained. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_90_2_6 If this is true, the model is not efficient ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_90_2_7 Contrast normalization yielded the best results for detecting meniscus tears, and layer normalization for detecting the remaining pathologies.The algorithm was explained very well ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_90_2_8 The results are also very nice ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +midl20_90_2_9 However, if different models were trained for predicting each parameter, not only training but also prediction would not be efficient ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_0 The presented paper aims to label and remove irrelevant sequences from laparoscopic videos. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_1 This is done with manual labelling and a ResNet-18. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_2 Motivation is based on anonymisation and data cleansing. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_3 Iterative refinement is claimed to be semi-supervised learning. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_4 Several experiments are proposed and results are presented. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_5 automatic patient data anonymity and data cleansing are important topics - the results look good with a big but (see below) - this is clearly an application paper, testing well known methods in a new scenario. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_6 No effort has been made to fuse the proposed pipeline into a medical-image analysis specific methodological contribution ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_7 Why is for example the output temporally smoothed instead of using spatio-temporal consistency in higher dimensional networks? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_8 Why hasn't the semi-supervised paradigm be explored in more detail instead of only using a few biasing iterations with user input? ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_9 A radical ablation study is clearly missing here ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_10 The task itself would imply that a deep network classifier is potentially an overkill. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_11 Bluntly: surgical parts are predominantly red, non-surgical parts anything and blue/green. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_12 How would a generic linear classifier on the image histograms perform here, or perceptual hashing with a linear classifier on top? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_13 Do we really need a labelled ground truth here ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_14 Can't simple heuristics perform at least as well ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_15 Assessing in-focus will even get rid of blurred frames and frames as discussed in the Appendix. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_16 There will be domain shift problems for the simple methods but same is true for the presented method. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_17 Writing, experimental setup and methodological proposals need to be improved and condensed ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +midl20_96_3_18 I have been working in this field for many years and published papers about these topics. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_19 I am advising regulatory decision makers and do active research in clinical environments. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +midl20_96_3_20 I am advocating open data access and reproducible research. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_0 This is nice work that addresses the credit assignment problem with a meta-learning approach ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_2_2_1 The motivation needs to be a bit clearer ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_2 Is the work trying to address the credit assignment problem in general, or just when applied to online learning tasks ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_3 Either way this is important work, with many interesting future directions ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_2_2_4 The model and implementation make sense as far as I can tell from this brief submission. ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_5 The theoretical results stated are nice to have ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_2_2_6 Section 1 pitches the method as solving the credit assignment problem, citing problems with weight symmetry etc, that apply to many forms of learning. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_7 But the related work in Section 2 then goes on to talk about the efficiency of backprop for solving online learning and few-shot learning tasks. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_8 The efficiency of backprop should be mentioned in the intro if it is something this work is aiming to address ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_9 While much human learning may be more naturally cast as online learning, not all of it is. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_10 There may be much interest in how we learn from so few samples in certain settings, but we also learn some relationships/tasks in a classical associationist manner which is well modeled by 'slow' gradient-descent like learning (e.g. Rescorla Wagner). ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_11 The credit assignment problem exists in these cases also ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_12 So I think the present work needs to be repitched slightly as solving credit assignment in an online/few shot learning setting ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_13 Or discuss how it can be extended to more general learning problems ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_14 The submission is pretty clear ['pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_2_2_15 In understanding the model, it would be useful to more explicitly define the model ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_16 For instance, how is the b at line 63 related to the activation x_i and ReLU at lines 75 and 76? 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_17 There are exiting directions in both AI and neuroscience this work could be take ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_2_2_18 Seeing if these meta-learnt rules line up with previously characterized biological learning rules is particularly interesting ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_19 Define the model more explicitly ['con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_2_2_20 And emphasize that this only solves credit assignment for certain types of learning problems (at the moment) ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_23_1_0 I believe the concept of using predictive coding and unlabeled video data to train convnets is a great idea ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_23_1_1 However, the contribution of the authors does not appear to extend beyond combining existing data sets with existing network architectures ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_23_1_2 The work is lacking a discussion of the most recent work in the similarity of visual processing in convnets to brain data, which incorporate recurrence into convnets (Nayebi et al. 2018, Kubilius et al. 2018 and 2019), thereby potentially allowing for similar behavior as a PredNet. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_3 How would you expect those networks to perform when trained on unlabeled video data? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_4 It would have been useful to put these in context of the results of the algonauts contest, which pitched supervised methods such as Alexnet against user-submitted content. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_5 Does PredNet outperform other user-submitted models? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_6 For this result to be convincing, I would like to see some reasons why the authors think PredNet is outperforming previous models. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_7 For example, is there something different about the feature maps that support this ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_23_1_8 What precisely about predictive coding makes the similarity to brain data expected ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_23_1_9 Results were presented quite clearly , although datasets and methods rely entirely on previously published work, such that digging into previous work on PredNet and the Algonauts project was necessary for a full understanding ['pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_23_1_10 The question of how the visual world is represented in the brain is an essential question in neuroscience as well as for building successful machine learning techniques for artificial vision. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_11 It does not seem like predictive coding is the main thing going on in V1 (Stringer et al., Science 2019), so Id be curious how the authors think that should be taken into account in the future. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_23_1_12 Typo line 24 Moreover, we show that as (we) train the model Typo line 87 Second, the model does not rely on labeled data and learn(s) ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_26_1_0 The proposed model is essentially a constrained/specific parameterisation within the broader class of 'context dependent' models. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_26_1_1 The heavy lifting is seemingly done by well known architectures: default RNN & a feed-forward NN. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_26_1_2 While it does not seemingly add anything conceptual , the exact implementation is arguably new ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_26_1_3 The model description is nice and clear ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_26_1_4 I think a more persuasive bench marking could be done ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_26_1_5 Perhaps compare to reference models [11] or [10] rather than a 'vanilla' RNN , as this amounts to not using any prior information about the task (which, by construction, we 'know' is useful) ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_26_1_6 Also perhaps report results from one of the 2 (mentioned) more complex benchmarks ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_26_1_7 Paper is clear and quite readable ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_26_1_8 The paper takes a crudely 'neuroscience inspired' concept (though, admittedly it could simply be 'task structure' inspired) and builds a simple model from it, which it benchmarks on a appropriately designed simplest-working-example. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_26_1_9 So it fits well with the workshop theme ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_26_1_10 I'd say a fairly 'standard' work for the setting ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_26_1_11 Only real point for improvement is more earnest bench marking/model comparison ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_26_1_12 Authors could also add some context by considering related works in the computational neuroscience literature , e.g. Stroud et al. Nature Neurosciencevolume 21, pages 17741783 (2018) and pseudo-url (though the latter is very recent). 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_29_1_0 While the question of how neural networks may act over concept space is important , I dont think the approach used by the authors correctly adress this question ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_29_1_1 The work of Hill et al. (2019) very clearly addresses these questions by devising tasks that require generalization across domains, showing how training regime is sufficient to overcome the difficulties of these tasks, even in shallow networks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_29_1_2 I dont see how the current work adds more clarity to this research direction ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_29_1_3 The main point relies purely on a visual representation of the top PCs of the penultimate layer of a CNN, which I believe is insufficient ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_29_1_4 The authors should have identified a task where networks trained on MNIST perform poorly, and then propose a different strategy or architecture ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_29_1_5 Overall the writing is relatively clear , but it would have been beneficial to describe the hypotheses more explicitly, e.g. what neural activity would be expected for a place, grid, or concept representation with respect to MNIST ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_29_1_6 The question of how the brain and artificial network can perform relational reasoning is critical in both fields, since many believe that it may be one of the primary ingredients of intelligence. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_29_1_7 Its also critical to understanding the function of the hippocampus and entorhinal cortex in humans. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_3_3_0 This intriguing study proposes to modify the classical Q-learning paradigm by splitting the reward into two streams with different parameters, one for positive rewards and one for negative rewards. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_3_3_1 This model allows for more flexibility in modelling human behaviors in normal and pathological states ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_3_3_2 Although innovative and promising , the work is quite preliminary and would benefit from comparison and validation with real human behavior ['non', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_3 No comparison with human data ['con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_4 The figures are hard to parse because of the very short captions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_5 One needs to go see Appendix C to understand what the model used (SQL) consists in ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_6 The work has promising implications for computational psychiatry , but probably not for RL at this point ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_7 It would be good to compare and fit the proposed models to real human/primate behavior in normal and pathological conditions and make testable predictions ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_3_3_8 Also, it would be very interesting to use these models to predict situations that might trigger maladaptive behaviors, by finding scenarios in which the pathological behavior becomes optimal. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_0 They make modifications to an existing generative model of natural images. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_1 They do not make direct comparisons to previous models or study quantitatively the results of the model with respect to its parameters ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_2 It is difficult to judge whether the new model is important because it has not been evaluated except by eye it does seem to reconstruct an image ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_3 They show images of a single reconstruction but no quantification of reconstruction quality or comparison to previous methods ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_4 In the spirit of insight it would have been very nice to have a quantification of error with respect to parameters (priors on slow identity, fast form). ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_5 If it had been evaluated and its efficacy varied in an interesting way with respect to the parameters of the model this could be a potentially important model to understand why the nervous system trades off between object identity associated features, transformation features, and speed. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_6 The statement that: GANs and VAE features are not typically interpretable. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_7 Seemed broad and was unsupported by any citations and to my knowledge GANs and VAEs have been used specifically to find interpretable features. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_8 Paper was organized, figures clear and readable. ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_32_1_9 Some development of the model could have been left to the references and didn't add much to their contribution (e.g. Taylor approximation to a Lie model) . ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_10 When they say steerable filter I was a little confused, do they just mean the basis vectors learned vary smoothly with respect to some affine transform parameter? 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_11 Their statement of the novelty of their method: (1) allowing each feature to have its own transformation was not clear ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_12 Does this mean previous methods learned the same transformation for all features. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_13 They make an interesting connection to speed of processing that rapid changes better represented by the magnocellular pathway would be associated with transformations and slow parvo with identity ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_32_1_14 It was not clear though where they experimentally varied/tested this prior in their algorithm ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_15 So while an interesting connection they did not make clear where they substantively pursue it ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_16 They draw an analogy between the ventral and dorsal stream of cortex and bilinear models of images. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_17 The main place to improve is to have some quantitative analysis of the quality of their model perhaps MSE of image reconstruction. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_18 Then this evaluation could be used to study impacts of the parameters of their model which could then lead to neural hypotheses. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_19 They have some qualitative evaluation in images of filters but they could explore the parameter space to understand what led to these features. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_20 One of their stated novel contribution was that their filters were convolutional but they do not discuss the potential connection convolutional filters have to transformation of features which seemed like a gap ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_32_1_21 Weight sharing across shifted filters separates out feature and position yet many of their learned transformations are also translations. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_22 Is this an issue of spatial scale? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_32_1_23 This warranted some potentially interesting discussion though admittedly 4 pages isnt a lot of space. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_34_2_0 The surprisingly high power of randomly weighted DCNNs is a point that has popped up a couple of times in recent human fMRI / MEG work. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_34_2_1 The present paper makes the important case that random networks should be included as a matter of course in DCNN modelling projects, and sounds a note of caution about the field's temptation to over-interpret the particular features learned by high-performing trained networks ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_34_2_2 Comprehensive data measurement and modelling pipeline ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_3 Use of the same spatial transformer model with an interchangeable bank of input features is elegant ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_4 Very well written ['pro', 'pro', 'pro'] paper quality +neuroai19_34_2_5 Figures exceptionally detailed and thoroughly labelled ['pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_6 Methods described clearly and in good detail ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_7 Mostly neuroscientific, but addresses the important topic of how models from machine learning can best be used in neuro research ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_8 Generally, great paper ['non', 'non', 'pro', 'pro'] paper quality +neuroai19_34_2_9 Clear presentation of thorough work, exploring an important question ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_34_2_10 Would have been great to include another Imagenet-trained architecture, since different architectures have widely varying macaque brain predictivity, and that of VGG16 is not particularly high (Schrimpf et al., 2018 BrainScore). 
['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_34_2_11 I'm not a big fan of the asterisks in Figures 3A and 3B used to indicate the best layers in various model tests ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_34_2_12 It doesn't provide any additional information to the data lines themselves, and it leads the reader to expect these indicate statistically significant comparisons ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_34_2_13 "Typo page 4 line 158: ""pray"" >> ""prey""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_36_1_0 Premise is that feedback alignment networks are also more robust to adversarial attacks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_36_1_1 "The authors show because the ""gradient"" in the feedback pathway is a rough approximation, it is hard to use this gradient to train an adversarial attack." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_36_1_2 The basic premise is very strange ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_36_1_3 Adversarial attacks are artificial: attacker has access to gradient of the loss function. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_36_1_4 For FA networks, it's unclear why an attacker could not access true gradient, and be forced to use the approximate gradient ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_36_1_5 Overall the technical aspects of this paper seem sound ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_36_1_6 "No trouble understanding the material or writing By focusing on the more biologically plausible ""feedback alignment"" networks, the paper does sit at the intersection of neuro and AI" ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_36_1_7 However at present, adversarial attacks likely have much larger relevance to AI than neuro ['non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_36_1_8 The premise of the work must be clarified ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_36_1_9 As well as whether or how adversarial attacks (as framed) might have relevance to neuroscience ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_0 The paper provides a broadly useful synthesis of key differences between ANN and SNN approaches ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_37_3_1 However, the multiple grandiose statements, and some that are downright misleading left me puzzling what I learned ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_2 Its an opinion piece. ['non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_3 It offers a call to action to do more comp-neuro, in that it could revolutionise AI ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_37_3_4 "The paper opens ""In recent years we have made significant progress identifying computational principles that underlie neural function." 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_5 "While not yet complete, we have sufficient evidence that a synthesis of these ideas could result in an understanding of how neural computation emerges from a combination of innate dynamics and plasticity"" What follows is a useful survey of a selection of ideas , by far not complete" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_6 For example, many of the interactions between myriad excitatory and inhibitory types across brains regions and neuromodulators, of which dopamine is just one of several, is largely unknown ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_7 Arguably ACh and noradrenaline are more important for network states and dynamics, and equally important for plasticity as dopamine. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_8 The dynamics of neuromodulation is largely unknown. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_9 Which leads me to a few concerns ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_10 "It is probable that revolutionary computational systems can be created in this way with only moderate expenditure of resources and effort"" Of course whole fields are working on this problem." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_11 Hardly what I'd call moderate effort ['con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_12 Claims of efficiency of more brain-like approaches compared to AI are disingenuous ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_13 A major draw-back of spiking models is that they are much more costly than ANNs, because of the small time-steps required. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_14 "Sure neuromorphic systems are coming, but not definitely not with moderate expenditure of resources and effort"" While it covers important ground , I think the arguments need more refinement and focus before they can inspire productive discussion" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_15 Its more a series of statements than a cleverly woven argument ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_16 But the individual statements are sometimes seductive ['non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_37_3_17 "For example ... ""A neuron simply sits and listens." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_18 When it hears an incoming pattern of spikes that matches a pattern it knows, it responds with a spike of its own. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_20 Repeat this process recursively tens to trillions of times, and suddenly you have a brain controlling a body in the world or doing something else equally clever. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_21 Our challenge is to understand how this occurs. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_22 "We require a new class of theories that dispose of the simplistic stimulus-driven encode/ transmit/decode doctrine. """ ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_23 "The devil is in the details, the ""how"" of ""suddenly""." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_24 "I feel this statement: ""Our challenge is to understand how this occurs." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_25 "We require a new class of theories that dispose of the simplistic stimulus-driven encode/ transmit/decode doctrine. 
""" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_37_3_26 "Largely contradicts this one ""It is probable that revolutionary computational systems can be created in this way with only moderate expenditure of resources and effort"" I felt the paper could have done more to link with current state-of-the-art AI approaches" ['con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_27 There was an absence of nuance ['con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_37_3_28 While it covers important ground , I think the arguments need more refinement and focus before they can inspire productive discussion ['non', 'pro', 'pro', 'pro', 'pro', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_53_1_0 The authors consider how biologically motivated synaptic eligibility traces can be used for backpropagation-like learning, in particular by approximating local gradient computations in recurrent neural networks. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_53_1_1 This sheds new light on how artificial network algorithms might be implementable by the brain ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_53_1_2 Space is of course limited, but the mathematics presented seem to pass all sanity checks and gives sufficiently rigor to the authors' approach ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_53_1_3 It would have been nice to present a figure showing how e-prop yields eligibility traces resembling STDP, as this is one of the key connections of this work to biology ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_53_1_4 Given its technical details it was reasonably straightforward to follow ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_53_1_5 The authors directly tried to associate biological learning rules with deep network learning rules in AI. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_53_1_6 Gives important new results about how eligibility traces can be used to approximate gradients when adequately combined with a learning signal ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_53_1_7 While eligibility traces have received some attention in neuroscience their relevance to learning has not been thoroughly explored, so this paper makes a welcome contribution that fits well within the workshop goals ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_53_1_8 One part that would have been nice to clarify is the relative role of random feedback vs eligibility traces in successful network performance ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_53_1_9 It also would have been nice to comment on the relationship of this work to unsupervised (e.g. Hebbian-based) learning rules. ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_53_1_10 A final addition that would have made this work more compelling would have been to more thoroughly explore e-prop for computations that unfold on timescales beyond those built-in to the neurons (e.g. membrane or adaptation timescales) and which instead rely on reverberating network activity ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_0 The authors state three high-level improvements they want to make to CNN-based models of neural systems: 1 & 2) Capturing computational mechanisms and extracting conceptual insights. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_54_3_1 "Operationally, I'm not quite sure how these are different, so, to me this goal is roughly ""be explainable"", and progress towards it could be measured e.g. in MDLs." ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_54_3_2 3) Suggest testable hypotheses. 
['non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_54_3_3 I agree these are good goals , and I think some progress is made , but that progress seems somewhat limited in scope ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_4 The technical aspects of the paper seem correct , though I have some higher-level conceptual concerns ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_5 1) If I understand correctly, attribution is computed only for a single OSR stimulus video ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_6 Is the attribution analysis stable for different stimulus frequencies? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_54_3_7 If not, is it really an explanation of the OSR? ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_54_3_8 "2) I agree with a concern raised by reviewer 3: It's difficult to see a 1-layer network as a ""mechanistic explanation"" of a 3-layer network" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_9 The flow/high-level organization of the paper works well ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_54_3_10 Explanations are mostly complete , though some details are missing ['pro', 'pro', 'pro', 'pro', 'non', 'non', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_11 e.g. what was the nonlinearity used in the model CNN ['non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_12 Also, do the CNN layers correspond to cell populations , and if so, why is it reasonable to collapse the time dimension after the first layer ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_54_3_13 I believe this paper is addressing questions that many of the workshop attendees will find interesting ['non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_59_3_0 The question of how networks maintain memory over long timescales is a longstanding and important one, and to my knowledge this question hasn't been thoroughly explored in spiking, trained recurrent neural networks (RNN). 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_1 The importance is tempered by the findings only covering what is to be expected, and not pushing beyond this or describing a path to push beyond this ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_2 The work would benefit from more detailed discussion of the training algorithm that provides some indication that the results aren't unduly sensitive to these details ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_3 In particular, the setting of synaptic decay constants is an important detail in a paper about working memory. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_4 A short discussion of other training algorithms (such as surrogate gradient or surrogate loss methods) and why the given one was chosen instead would have been helpful ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_5 A comparison with Bellec et al. 
2018, which looks at working memory tasks in spiking networks, would also have been appropriate ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_6 The statistical tools are fairly well described and appear to be well-suited for illustrating the phenomena of interest ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_59_3_7 I feel that more tools should have been used to further support or push the results ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_8 For instance, while the heatmaps in Figure 3 provide visual evidence for their claims (except see my comments below), the work could have benefitted from a quantification of this evidence ['non', 'non', 'non', 'non', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_9 For instance, it is hard to see differences between the cue periods in the bottom two heatmaps, but differences may appear in some numerical measure of the average discriminability over these regions ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_10 The technical details are presented clearly on the whole ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_59_3_11 However, I feel that the work lacked clarity when it came to interpretation of the results ['non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_12 "For instance, the claim of ""stronger cue-specific differences across the cue stimulus window"" between fast and slow intrinsic timescale neurons in the RNN model isn't clearly supported by the heatmap in Figure 3 -- the cue-specific differences for the short instrinsic timescale group to me appears to be at least as great as that of the long intrinsic timescale group within the cue stimulus window" ['non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_13 I would be curious to know if making the input weaker or only giving it to a random subset of neurons makes this phenomenon more apparent ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_14 "It seems that one of the main points of the work is that ""longer intrinsic timescales correspond to 
more stable coding"", but I didn't find that this point was made very convincingly" ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_15 "The work would have benefited from a discussion of the implications of longer intrinsic timescale neurons retaining task-relevant information for longer -- in particular, this finding feels a bit ""trivial"" without the case being made for why this should push understanding in the field" ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_16 I think the interesting part may be in quantifying just how much of a difference there is between short and long timescale neurons -- for instance, does task-relevant information in both neuron groups fall off in a way that can be well predicted by their intrinsic time constants ['non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_17 How does this relate to their synaptic time constants ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_18 Does limiting the synaptic time constants limit the intrinsic time constants, and if so by how much ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality +neuroai19_59_3_19 The same type of comments apply to the second part of the results, which demonstrates that a task that doesn't require working memory results in neurons with shorter intrinsic timescales compared to the working memory task. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_20 The authors use an artificial network model to shed light on the biological mechanisms enabling and shaping working memory in the brain. 
['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_21 The paper in the process reveals some (expected) results about how spiking RNNs behave on a working memory task ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_59_3_22 The proof-of-concept work (among others) that this can be done with spiking RNN may inspire more work in this area ['pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro', 'pro'] paper quality +neuroai19_59_3_23 The work is a basic proof-of-concept of results that may not do much to advance understanding since they are what one would expect to see (i.e. the antithesis of their thesis seems very unlikely). ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_24 Looking into the nuances of the explored phenomena may provide new information for the field. ['non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non', 'non'] paper quality +neuroai19_59_3_25 The paper should also seek to connect with more of the recent work being done in spiking recurrent neural networks ['con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con', 'con'] paper quality