,sentence,aspect_term_1,aspect_term_2,aspect_term_3,aspect_term_4,aspect_term_5,aspect_term_6,aspect_category_1,aspect_category_2,aspect_category_3,aspect_category_4,aspect_category_5,aspect_term_1_polarity,aspect_term_2_polarity,aspect_term_3_polarity,aspect_term_4_polarity,aspect_term_5_polarity,aspect_term_6_polarity,aspect_category_1_polarity,aspect_category_2_polarity,aspect_category_3_polarity,aspect_category_4_polarity,aspect_category_5_polarity 2,"Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods.[problem-NEU], [CMP-POS, EMP-POS]",problem,,,,,,CMP,EMP,,,,NEU,,,,,,POS,POS,,, 3,"In general I found this paper clearly written and technically sound.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 4,"I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures.[theoretical results-POS], [EMP-POS]",theoretical results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5,"Contribution: As discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units.[literature review section-NEU, previous results-NEU, paper-NEU], [CMP-NEU]",literature review section,previous results,paper,,,,CMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 6,"The analysis becomes considerably more complicated,[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7,"and the contribution seems to be novel and significant.[contribution-POS], [NOV-POS, IMP-POS]",contribution,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 8,"I am not sure why did the authors mentioned the work on over-parameterization though.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9,"It doesn't seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small).[results-NEU], [EMP-NEG]",results,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 10,"Comments on the Assumptions: - Please explain the motivation behind the standard Gaussian assumption of the input vector x.[motivations-NEU], [EMP-NEU]",motivations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 11,"- Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^top w_2 0 (or the acute angle assumption in Section 6).[motivations-NEU, assumption-NEU, Section-NEU], [EMP-NEU]",motivations,assumption,Section,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 12,"Without extra justifications, it seems that the theoretical result only holds for an artificial problem setting.[theoretical result-NEG], [EMP-NEG]",theoretical result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 13,"While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results.[motivations-NEU, impacts-NEU], [EMP-NEG, IMP-NEU]",motivations,impacts,,,,,EMP,IMP,,,,NEU,NEU,,,,,NEG,NEU,,, 14,"General Comment: The technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs.[technical section-NEG], [SUB-NEG]",technical section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 15,"From the analysis in the main paper, I believe the theoretical contribution is correct and 
sound.[analysis-NEU, theoretical contribution-POS], [EMP-POS]",analysis,theoretical contribution,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 16,"While I appreciate the technical contributions,[technical contributions-POS], [EMP-POS]",technical contributions,,,,,,EMP,,,,,POS,,,,,,POS,,,, 17,"in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples).[motivations-NEU], [SUB-NEG]",motivations,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 18,"Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector,[assumptions-NEU], [SUB-NEU, EMP-NEU]",assumptions,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 19,"and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems. [Section-NEU], [SUB-NEU, EMP-NEU]",Section,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 21,"The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 22,"In equation 2, please check the measure of the mixture.[equation-NEU], [EMP-NEU]",equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 23,"Why not simply use a dB criteria ?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 24,"The comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform.[comments-NEU, novel approach-NEU], [NOV-NEU, CMP-NEU]",comments,novel approach,,,,,NOV,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 25,"The authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested.[discuss-NEU, model-NEU], [EMP-NEU]",discuss,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 26,"Section 3.2.2 shall be placed later on, and clarified.[Section-NEU], [CLA-NEU, PNF-NEU]",Section,,,,,,CLA,PNF,,,,NEU,,,,,,NEU,NEU,,, 27,"Discussion on mixing more than two sounds leads could be completed by associative properties, we think... ? 
[Discussion-NEU], [EMP-NEU]",Discussion,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 31,"I am overall a fan of the general idea of this paper; scaling up to huge inputs is definitely a necessary research direction for QA.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 32,"However, I have some concerns about the specific implementation and model discussed here.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 33,"How much of the proposed approach is specific to getting good results on bAbI (e.g., conditioning the knowledge encoder on only the previous sentence, time stamps in the knowledge tuple, super small RNNs, four simple functions in the n-gram machine, structure tweaking) versus having a general-purpose QA model for natural language?[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 34,"Addressing some of these issues would likely prevent scaling to millions of (real) sentences, as the scalability is reliant on programs being efficiently executed (by simple string matching) against a knowledge storage.[issues-NEU], [SUB-NEG, EMP-NEG]",issues,,,,,,SUB,EMP,,,,NEU,,,,,,NEG,NEG,,, 35,"The paper is missing a clear analysis of NGM's limitations...[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 36,"the examples of knowledge storage from bAbI in the supplementary material are also underwhelming as the model essentially just has to learn to ignore stopwords since the sentences are so simple.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 37,"In its current form, I am borderline but leaning towards rejecting this paper.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 38,"Other questions: - is -gram really the most appropriate term to use for the symbolic representation?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 39,"N-grams are by definition contiguous sequences... The authors may want to consider alternatives.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 41,"The evaluations are only conducted on 5 of the 20 bAbI tasks, so it is hard to draw any conclusions from the results as to the validity of this approach.[evaluations-NEG], [SUB-NEG]",evaluations,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 42,"Can the authors comment on how difficult it will be to add functions to the list in Table 2 to handle the other 15 tasks? Or is NGM strictly for extractive QA?[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 43,"- beam search is performed on each sentence in the input story to obtain knowledge tuples... while the answering time may not change (as shown in Figure 4) as the input story grows, the time to encode the story into knowledge tuples certainly grows, which likely necessitates the tiny RNN sizes used in the paper.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 44,"How long does the encoding time take with 10 million sentences?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 45,"- Need more detail on the programmer architecture, is it identical to the one used in Liang et al., 2017? 
[detail-NEU], [SUB-NEU, EMP-NEU]",detail,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 51,"None of these ideas are new before but I haven't seen them combined in this way before.[ideas-NEU], [NOV-NEG]",ideas,,,,,,NOV,,,,,NEU,,,,,,NEG,,,, 52,"This is a very practical idea, well-explained with a thorough set of experiments across three different tasks.[idea-POS, paper-POS], [EMP-POS]",idea,paper,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 53,"The paper is not surprising[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 54,"but this seems like an effective technique for people who want to build effective systems with whatever data they've got. [technique-POS], [EMP-POS]]",technique,,,,,,EMP,,,,,POS,,,,,,POS,,,, 57,"The exposition of the model architecture could use some additional detail to clarify some steps and possibly fix some minor errors (see below).[model architecture-NEG, detail-NEG], [SUB-NEG]",model architecture,detail,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 58,"I would prefer less material but better explained.[material-NEU], [EMP-NEU]",material,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 60,"The paper could be more focused around a single scientific question: does the PATH function as formulated help?[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 61,"The authors do provide a novel formulation and demonstrate the gains on a variety of concrete problems taken form the literature.[experiments-POS, problems-POS], [NOV-POS]",experiments,problems,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 62,"I also like that they try to design experiments to understand the role of specific parts of the proposed architecture.[experiments-POS, proposed architecture-POS], [EMP-POS]",experiments,proposed architecture,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 63,"The graphs are WAY TOO SMALL to read.[graphs-NEG], [PNF-NEG]",graphs,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 64,"Figure #s are missing off several figures.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 65,"MODEL & ARCHITECTURE The PATH function given a current state s and a goal state s', returns a distribution over the best first action to take to get to the goal P(A).[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 66,"( If the goal state s' was just the next state, then this would just be a dynamics model and this would be model-based learning?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 67,"So I assume there are multiple steps between s and s'?).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 68,"At the beginning of section 2.1, I think the authors suggest the PATH function could be pre-trained independently by sampling a random state in the state space to be the initial state and a second random state to be the goal state and then using an RL algorithm to find a path.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 69,"Presumably, once one had found a path ( (s, a0), (s1, a1), (s2, a2), ..., (sn-1,an-1), s' ) one could then train the PATH policy on the triple (s, s', a0) ?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 70,"This seems like a pretty intense process: solving some representative subset of all possible RL problems for a particular environment ... 
Maybe one choses s and s' so they are not too far away from each other (the experimental section later confirms this distance is > 7.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 71,"Maybe bring this detail forward)?[detail-NEU], [EMP-NEU]",detail,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 72,"The expression Trans'( (s,s), a)' (Trans(s,a), s') was confusing.[expression-NEG], [CLA-NEG]",expression,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 73,"I think the idea here is that the expression Trans'( (s,s) , a )' represents the n-step transition function and 'a' represents the first action?[expression-NEU], [EMP-NEU]",expression,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 74,"The second step is to train the goal function for a specific task.[task-NEU], [EMP-NEU]",task,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 75,"So I gather our policy takes the form of a composed function and the chain rule gives close to their expression in 2.2[expression-NEU], [EMP-NEU]",expression,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 78,"What is confusing is that they define A( s, a, th^p, th^g, th^v ) sum_i gamma^i r_{t+1} + gamma^k V( s_{t+k} ; th^v ) - V( s_t ; th^v )[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 79,"The left side contains th^p and th^g, but the right side does not.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 80,"Should these parameters be take out of the n-step advantage function A?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 81,"The second alternative for training the goal function tau seems confusing.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 82,"I get that tau is going to be constrained by whatever representation PATH function was trained on and that this representation might affect the overall performance - performance.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 83,"I didn't get the contrast with method one.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 84,"How do we treat the output of Tau as an action?[output-NEU], [EMP-NEU]",output,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 85,"Are you thinking of the gradient coming back through PATH as a reward signal?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 86,"More detail here would be helpful.[detail-NEG], [SUB-NEG]",detail,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 87,"EXPERIMENTS: Lavaworld: authors show that pretraining the PATH function on longer 7-11 step policies leads to better performance when given a specific Lava world problem to solve.[performance-POS, problem-POS], [EMP-POS]",performance,problem,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 88,"So the PATH function helps and longer paths are better.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 90,"What is the upper bound on the size of PATH lengths you can train?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 92,"From a scientific point of view, this seems orthogonal to the point of the paper, though is relevant if you were trying to build a system.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 94,"This isn't too surprising.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 95,"Both picking up the passenger (reachability) and dropping them off somewhere are essentially the same task: moving to a point.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 96,"It is interesting that the Task function is able to encode the higher level structure of the TAXI problem's two phases.[Task function-POS], [EMP-POS]",Task function,,,,,,EMP,,,,,POS,,,,,,POS,,,, 97,"Another task you could try is to learn to perform the same task in two different environments.[task-POS], 
[EMP-POS]",task,,,,,,EMP,,,,,POS,,,,,,POS,,,, 98,"Perhaps the TAXI problem, but you have two different taxis that require different actions in order to execute the same path in state space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 99,"This would require a phi(s) function that is trained in a way that doesn't depend on the action a.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 101,"Is this where you artificially return an agent to a state that would normally be hard to reach?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 102,"The authors show that UA results in gains on several of the games.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 103,"The authors also demonstrate that using multiple agents with different policies can be used to collect training examples for the PATH function that improve its utility over training examples collected by a single agent policy.[training examples-NEU], [EMP-NEU]",training examples,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 104,"RELATED WORK: Good contrast to hierarchical learning: we don't have switching regimes here between high-level options[regimes-POS], [CMP-POS]",regimes,,,,,,CMP,,,,,POS,,,,,,POS,,,, 105,"I don't understand why the authors say the PATH function can be viewed as an inverse?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 106,"Oh - now I get it. Because it takes an extended n-step transition and generates an action.[null], [CLA-POS]",null,,,,,,CLA,,,,,,,,,,,POS,,,, 108,"-I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e. changing the title to ... deep LINEAR networks[title-NEG, results-NEU], [EMP-NEU, PNF-NEG]",title,results,,,,,EMP,PNF,,,,NEG,NEU,,,,,NEU,NEG,,, 109,"- Theorems 2.1, 2.2 and the observation (2) are nice![Theorems-POS, observation-POS], [EMP-POS]",Theorems,observation,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 110,"- Theorem 2.2 there is no discussion about the nature of the saddle point is it strict?[Theorem-NEU, discussion-NEG], [SUB-NEG]",Theorem,discussion,,,,,SUB,,,,,NEU,NEG,,,,,NEG,,,, 111,"Does this theorem imply that the global optima can be reached from a random initialization?[theorem-NEU], [EMP-NEU]",theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 112,"Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary.[theorem-NEU, issues-NEU, discussion-NEU], [SUB-NEU]",theorem,issues,discussion,,,,SUB,,,,,NEU,NEU,NEU,,,,NEU,,,, 113,"- I'm a bit puzzled by Theorems 4.1 and 4.2 and why they are useful.[Theorems-NEU], [EMP-NEU]",Theorems,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 114,"Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result? [results-NEG, insights-NEU, problem-NEU], [EMP-NEG]",results,insights,problem,,,,EMP,,,,,NEG,NEU,NEU,,,,NEG,,,, 115,"Further discussion would be helpful. [discussion-NEU], [SUB-NEU]",discussion,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 120,"The performance improvement is expected and validated by experiments.[performance-POS, experiments-POS], [EMP-POS]",performance,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 121,"But I am not sure if the novelty is strong enough for an ICLR paper. 
[novelty-NEU], [APR-NEU, NOV-NEU]",novelty,,,,,,APR,NOV,,,,NEU,,,,,,NEU,NEU,,, 125,"The suggested techniques are nice and show promising results.[techniques-POS, results-POS], [EMP-POS]",techniques,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 126,"But I feel a lot can still be done to justify them, even just one of them.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 127,"For instance, the authors manipulate the objective of G using a new parameter alpha_new and divide heuristically the range of its values.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 128,"But, in the experimental section results are shown only for a single value, alpha_new 0.9 The authors also suggest early stopping but again (as far as I understand) only a single value for the number of iterations was tested.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 129,"The writing of the paper is also very unclear, with several repetitions and many typos e.g.: 'we first introduce you a' 'architexture' 'future work remain to' 'it self' I believe there is a lot of potential in the approach(es) presented in the paper.[writing-NEG, typos-NEG], [CLA-NEG]",writing,typos,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 130,"In my view a much stronger experimental section together with a clearer presentation and discussion could overcome the lack of theoretical discussion.[experimental section-NEU, theoretical discussion-NEG], [EMP-NEU, SUB-NEG]",experimental section,theoretical discussion,,,,,EMP,SUB,,,,NEU,NEG,,,,,NEU,NEG,,, 134,"Using this setup, the authors are able to beat sequence to sequence baselines on problems that are amenable to such an approach.[setup-POS, baselines-NEU, problems-NEU, approach-NEU], [EMP-POS]",setup,baselines,problems,approach,,,EMP,,,,,POS,NEU,NEU,NEU,,,POS,,,, 136,"In all three cases, the proposed solution outperforms the baselines on larger problem instances. 
[proposed solution-POS, baselines-NEU], [EMP-POS]",proposed solution,baselines,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 138,"Quality This is a very clear contribution which elegantly demonstrates the use of extensions of GAN variants in the context of neuroimaging.[contribution-POS], [CLA-POS, IMP-POS]",contribution,,,,,,CLA,IMP,,,,POS,,,,,,POS,POS,,, 139,"Clarity The paper is well-written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 140,"Methods and results are clearly described.[Methods-POS, results-POS], [CLA-POS]",Methods,results,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 141,"The authors state significant improvements in classification using generated data.[improvements-POS], [EMP-POS]",improvements,,,,,,EMP,,,,,POS,,,,,,POS,,,, 142,"These claims should be substantiated with significance tests comparing classification on standard versus augmented datasets.[claims-NEU], [EMP-NEU]",claims,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 143,"Originality This is one of the first uses of GANs in the context of neuroimaging.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 144,"Significance The approach outlined in this paper may spawn a new research direction.[approach-POS], [IMP-POS]",approach,,,,,,IMP,,,,,POS,,,,,,POS,,,, 145,"Pros Well-written and original contribution demonstrating the use of GANs in the context of neuroimaging.[contribution-POS], [CLA-POS, NOV-POS]",contribution,,,,,,CLA,NOV,,,,POS,,,,,,POS,POS,,, 146,"Cons The focus on neuroimaging might be less relevant to the broader AI community.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 149,"This is a significant topic with implications for quantization for computational efficiency, as well as for exploring the space of learning algorithms for deep networks.[topic-POS], [IMP-POS]",topic,,,,,,IMP,,,,,POS,,,,,,POS,,,, 150,"While none of the contributions are especially novel,[contributions-NEG], [NOV-NEG]",contributions,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 151,"the analysis is clear and well-organized, and the authors do a nice job in connecting their analysis to other work.[analysis-POS], [CLA-POS, PNF-NEG, CMP-POS]]",analysis,,,,,,CLA,PNF,CMP,,,POS,,,,,,POS,NEG,POS,, 153,"Overall, the paper is sloppily put together, so it's a little difficult to assess the completeness of the ideas.[paper-NEG, ideas-NEG], [PNF-NEG, CLA-NEG]",paper,ideas,,,,,PNF,CLA,,,,NEG,NEG,,,,,NEG,NEG,,, 154,"The problem being solved is not literally the problem of decreasing the amount of data needed to learn tasks, but a reformulation of the problem that makes it unnecessary to relearn subtasks.[problem-NEG], [EMP-NEG]",problem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 155,"That's a good idea, but problem reformulation is always hard to justify without returning to a higher level of abstraction to justify that there's a deeper problem that remains unchanged.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 156,"The paper doesn't do a great job of making that connection.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 157,"The idea of using task decomposition to create intrinsic rewards seems really interesting,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 158,"but does not appear to be explored in any depth.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 159,"Are there theorems to be had?[theorems-NEU], [EMP-NEU]",theorems,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 160,"Is there a connection to subtasks rewards in earlier HRL papers?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 161,"The lack of completeness (definitions of tasks and robustness) also makes the paper less 
impactful than it could be.[paper-NEG], [IMP-NEG, SUB-NEG]",paper,,,,,,IMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 162,"Detailed comments: learn hierarchical policies -> learns hierarchical policies?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 163,"games Mnih et al. (2015)Silver et al. (2016),: The citations are a mess.[citations-NEG], [CMP-NEG]",citations,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 165,"and is hardly reusable -> and are hardly reusable.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 166,"Skill composition is the idea of constructing new skills with existing skills ( -> Skill composition is the idea of constructing new skills out of existing skills (. to synthesis -> to synthesize.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 167,"set of skills are -> set of skills is. automatons -> automata.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 168,"with low-level controllers can -> with low-level controllers that can.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 169,"the options policy u03c0 o is followed until u03b2(s) > threshold: I don't think that's how options were originally defined... beta is generally defined as a termination probability.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 170,"The translation from TLTL formula FSA to -> The translation from TLTL formula to FSA?[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 171,"four automaton states Qu03c6 {q0, qf , trap}: Is it three or four?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 172,"learn a policy that satisfy -> learn a policy that satisfies.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 173,"HRL, We introduce the FSA augmented MDP -> HRL, we introduce the FSA augmented MDP..[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 174,"multiple options policy separately -> multiple options policies separately?[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 175,"Given flat policies u03c0u03c61 and u03c0u03c62 that satisfies -> Given flat policies u03c0u03c61 and u03c0u03c62 that satisfy .[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 176,"s illustrated in Figure 3 . -> s illustrated in Figure 2 .?[Figure-NEU], [CLA-NEU]",Figure,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 177,", we cam simply -> , we can simply.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 178,"Figure 4 . -> Figure 4.[Figure-NEU], [CLA-NEU]",Figure,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 179,". 
, disagreement emerge -> , disagreements emerge?[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 180,"The paper needs to include SOME definition of robustness, even if it just informal.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 181,"As it stands, it's not even clear if larger values are better or worse.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 182,"(It would seem that *more* robustness is better than less, but the text says that lower values are chosen.)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 183,"with 2 hidden layers each of 64 relu: Missing word?Or maybe a comma?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 184,"to aligns with -> to align with.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 185,"a set of quadratic distance function -> a set of quadratic distance functions.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 186,"satisfies task the specification) -> satisfies the task specification).[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 187,"Figure 4: Tasks 6 and 7 should be defined in the text someplace.[Figure-NEU, Tasks-NEU], [CLA-NEU, SUB-NEU]",Figure,Tasks,,,,,CLA,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 188,"current frame work i -> current framework i.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 189,"and choose to follow -> and chooses to follow.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 190,"this makes -> making.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 191,"each subpolicies -> each subpolicy. [null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 194,"The method uses a learnable character embedding to transform the data, but is an end-to-end approach[method-NEU, approach-NEU], [EMP-NEU]",method,approach,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 195,". The analysis of squared error for the price regression shows a clear advantage of the method over previous models that used hand crafted features.[method-POS, previous models-POS], [CMP-POS, EMP-POS]",method,previous models,,,,,CMP,EMP,,,,POS,POS,,,,,POS,POS,,, 196,"Here are my concerns: 1) As the price shows a high skewness in Fig. 
1, it may make more sense to use relative difference instead of absolute difference of predicted and actual auction price in evaluating/training each model.[Fig-NEU], [EMP-NEU]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 197,"That is, making an error of $100 for a plate that is priced $1000 has a huge difference in meaning to that for a plate priced as $10,000.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 198,"2) The time-series data seems to have a temporal trend which makes retraining beneficial as suggested by authors in section 7.2.[section-POS], [EMP-POS]",section,,,,,,EMP,,,,,POS,,,,,,POS,,,, 199,"If so, the evaluation setting of dividing data into three *random* sets of training, validation, and test, in 5.3 doesn't seem to be the right and most appropriate choice.[setting-NEG], [EMP-NEG]",setting,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 200,"It should however, be divided into sets corresponding to non-overlapping time intervals to avoid the model use of temporal information in making the prediction.[prediction-NEU], [EMP-NEU]]",prediction,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 203,"Experiments are performed on an simple domain which nicely demonstrates its properties, as well as on continuous control problems, where the technique outperforms or is competitive with DDPG.[Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 204,"The paper is very clearly written and easy to read, and its contributions are easy to extract.[paper-POS, contributions-POS], [CLA-POS]",paper,contributions,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 205,"The appendix is quite necessary for the understanding of this paper, as all proofs do not fit in the main paper.[appendix-NEU, paper-NEU], [PNF-NEU]",appendix,paper,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 206,"The inclusion of proof summaries in the main text would strengthen this aspect of the paper.[summaries-NEU, main text-NEU, paper-NEU], [EMP-NEU]",summaries,main text,paper,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 207,"On the negative side, the paper fails to make a strong case for significant impact of this work; the solution to this, of course, is not overselling benefits, but instead having more to say about the approach or finding how to produce much better experimental results than the comparative techniques.[paper-NEG, benefits-NEG, approach-NEG, experimental results-NEG], [SUB-NEU, IMP-NEG]",paper,benefits,approach,experimental results,,,SUB,IMP,,,,NEG,NEG,NEG,NEG,,,NEU,NEG,,, 208,"In other words, the slightly more stable optimization and slightly smaller hyperparameter search for this approach is unlikely to result in a large impact.[approach-NEG], [IMP-NEG]",approach,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 209,"Overall, however, I found the paper interesting, readable, and the technique worth thinking about, so I recommend its acceptance.[paper-POS, technique-POS], [CLA-POS, REC-POS, EMP-POS]",paper,technique,,,,,CLA,REC,EMP,,,POS,POS,,,,,POS,POS,POS,, 214,"The paper seems to have weaknesses pertaining to the approach taken, clarity of presentation and comparison to baselines which mean that the paper does not seem to meet the acceptance threshold for ICLR.[paper-NEG], [APR-NEG, PNF-NEG]",paper,,,,,,APR,PNF,,,,NEG,,,,,,NEG,NEG,,, 216,"**Strengths** I like the high-level motivation of the work, that one needs to understand and establish that language or semantics can help learn better representations for images. 
[motivation-POS], [EMP-POS]",motivation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 217,"I buy the premise and think the work addresses an important issue.[issue-POS], [IMP-POS]",issue,,,,,,IMP,,,,,POS,,,,,,POS,,,, 218,"**Weakness** Approach: * A major limitation of the model seems to be that one needs access to both images and attribute vectors at inference time to compute representations which is a highly restrictive assumption (since inference networks are discriminative).[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 221,"Clarity: * Eqn. 5, LHS can be written more clearly as hat{a}_k.[Eqn-NEG], [CLA-NEG]",Eqn,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 222,"* It would also be good to cite the following related work, which closely ties into the model of Eslami 2016, and is prior work: Efficient inference in occlusion-aware generative models of images, Jonathan Huang, Kevin Murphy.[related work-NEU], [SUB-NEU]",related work,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 225,"This is not natural language, firstly because the language in the dataset is synthetically generated and not ""natural"".[dataset-NEG], [EMP-NEG]",dataset,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 226,"Secondly, the approach parses this ""synthetic"" language into structured tuples which makes it even less natural.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 227,"Also, Page. 3. What does ""partial descriptions"" mean?[Page-NEU], [EMP-NEU]",Page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 228,"* Section 3: It would be good to explicitly draw out the graphical model for the proposed approach and clarify how it differs from prior work (Eslami, 2016).[proposed approach-NEU], [CMP-NEU]",proposed approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 229,"* Sec. 3. 4 mentions that the ""only image"" encoder is used to obtain the representation for the image, but the ""only image"" encoder is expected to capture the ""indescribable component"" from the image, then how is the attribute information from the image captured in this framework?[Sec-NEU], [EMP-NEU]",Sec,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 231,"In general, the writing and presentation of the model seem highly fragmented, and it is not clear what the specifics of the overall model are.[writing-NEG, presentation-NEG, model-NEG], [CLA-NEG, PNF-NEG]",writing,presentation,model,,,,CLA,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 232,"For instance, in the decoder, the paper mentions for the first time that there are variables ""z"", but does not mention in the encoder how the variables ""z"" were obtained in the first place (Sec. 3.1).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 234,"at every timestep which is used in a similar manner to Eqn. 2 in Eslami, 2016.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 235,"Sec. 3.4 ""GEN Image Encoder"" has some typo, it is not clear what the conditioning is within q(z) term.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 237,"This seems like an important baseline to report for the image caption ranking task.[baseline-NEU], [CMP-NEU]",baseline,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 238,"2. Another crucial baseline is to train the Attend, Infer, Repeat model on the ShapeWorld images, and then take the latent state inferred at every step by that model, and use those features instead of the features described in Sec. 
3.4[baseline-NEU], [CMP-NEU]",baseline,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 239,"""Gen Image Encoder"" and repeat the rest of the proposed pipeline.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 240,"Does the proposed approach still show gains over Attend Infer Repeat?[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 241,"3. The results shown in Fig. 7 are surprising -- in general, it does not seem like a regular VAE would do so poorly.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 242,"Are the number of parameters in the proposed approach and the baseline VAE similar? [proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 243,"Are the choices of decoder etc. similar?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 244,"Did the model used for drawing Fig. 7 converge?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 245,"Would be good to provide its training curve.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 246,"Also, it would be good to evaluate the AIR model from Eslami, 2016 on the same simple shapes dataset and show unconditional samples.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 247,"If the claim from the work is true, that model should be just as bad as a regular VAE and would clearly establish that using language is helping get better image samples.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 248,"* Page 2: In general the notion of separating the latent space into content and style, where we have labels for the ""content"" is an old idea that has appeared in the literature and should be cited accordingly.[literature-NEU], [CMP-NEU]",literature,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 260,"The authors compared Dauto with several baseline methods on several datasets and showed improvement.[baseline methods-POS, datasets-POS], [CMP-POS, EMP-POS]",baseline methods,datasets,,,,,CMP,EMP,,,,POS,POS,,,,,POS,POS,,, 261,"The paper is well-organized and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 262,"The probabilistic framework itself is quite straight-forward.[framework-POS], [EMP-POS]",framework,,,,,,EMP,,,,,POS,,,,,,POS,,,, 263,"The paper will be more interesting if the authors are able to extend the discussion on different forms of prior instead of the simple parameter sharing scheme.[paper-NEU, discussion-NEG], [SUB-NEG]",paper,discussion,,,,,SUB,,,,,NEU,NEG,,,,,NEG,,,, 266,"It would be interesting to see if the additional auto-encoder part help address the issue.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 267,"The experiments miss some of the more recent baseline in domain adaptation, such as Adversarial Discriminative Domain Adaptation (Tzeng, Eric, et al. 
2017).[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 268,"It could be more meaningful to organize the pairs in table by target domain instead of source, for example, grouping 9->9, 8->9, 7->9 and 3->9 in the same block.[table-NEU], [PNF-NEG]",table,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 269,"DAuto does seem to offer more boost in domain pairs that are less similar.[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 271,"(1) The topic of this paper seems to have minimal connection with ICRL.[topic-NEG], [APR-NEG]",topic,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 272,"It might be more appropriate for this paper to be reviewed at a control/optimization conference, so that all the technical analysis can be evaluated carefully.[paper-NEU], [APR-NEG]",paper,,,,,,APR,,,,,NEU,,,,,,NEG,,,, 273,"(2) I am not convinced if the main results are novel.[main results-NEG], [NOV-NEG]",main results,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 274,"The convergence of policy gradient does not rely on the convexity of the loss function, which is known in the community of control and dynamic programming.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 275,"The convergence of policy gradient is related to the convergence of actor-critic, which is essentially a form of policy iteration. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 276,"I am not sure if it is a good idea to examine the convergence purely from an optimization perspective.[idea-NEU], [EMP-NEG]",idea,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 277,"(3) The main results of this paper seem technical sound.[main results-POS], [EMP-POS]",main results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 278,"However, the results seem a bit limited because it does not apply to neural-network function approximator. [results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 279,"It does not apply to the more general control problem rather than quadratic cost function, which is quite restricted.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 281,"I strongly suggest that these results be submitted to a more suitable venue. [results-NEU], [APR-NEG]",results,,,,,,APR,,,,,NEU,,,,,,NEG,,,, 288,"The experimental results are very good and give strong support for the proposed normalization.[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 289,"While the main idea is not new to machine learning (or deep learning), to the best of my knowledge it has not been applied on GANs.[main idea-NEG], [NOV-NEG]",main idea,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 290,"The paper is overall well written (though check Comment 3 below), it covers the related work well and it includes an insightful discussion about the importance of high rank models.[paper-POS, related work-POS, discussion-POS, models-POS], [CLA-POS, SUB-POS, CMP-POS]",paper,related work,discussion,models,,,CLA,SUB,CMP,,,POS,POS,POS,POS,,,POS,POS,POS,, 291,"I am recommending acceptance,[null], [REC-POS]",null,,,,,,REC,,,,,,,,,,,POS,,,, 292,"though I anticipate to see a more rounded evaluation of the exact mechanism under which SN improves over the state of the art.[evaluation-NEU], [SUB-NEU]",evaluation,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 294,"Comments: 1. 
One concern about this paper is that it doesn't fully answer the reasons why this normalization works better.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 295,"I found the discussion about rank to be very intuitive,[discussion-POS], [EMP-POS]",discussion,,,,,,EMP,,,,,POS,,,,,,POS,,,, 296,"however this intuition is not fully tested.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 298,"The authors claim that other methods, like (Arjovsky et al. 2017) also suffer from the same rank deficiency.[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 301,"One way to test the rank hypothesis and better explain this method is to run a couple of truncated-SN experiments.[method-NEU, experiments-NEU], [EMP-NEU]",method,experiments,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 302,"What happens if you run your SN but truncate its spectrum after every iteration in order to make it comparable to the rank of WN? Do you get comparable inception scores? Or does SN still win?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 303,"3. Section 4 needs some careful editing for language and grammar. [Section-NEU, grammar-NEG], [CLA-NEG]",Section,grammar,,,,,CLA,,,,,NEU,NEG,,,,,NEG,,,, 310,"Some suggestions / criticisms are given below. 1) The findings seem conceptually similar to the older sparse coding ideas from the visual cortex.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 311,"That connection might be worth discussing because removing the regularizing (i.e., metabolic cost) constraint from your RNNS makes them learn representations that differ from the ones seen in EC.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 312,"The sparse coding models see something similar: without sparsity constraints, the image representations do not resemble those seen in V1, but with sparsity, the learned representations match V1 quite well.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 313,"That the same observation is made in such disparate brain areas (V1, EC) suggests that sparsity / efficiency might be quite universal constraints on the neural code.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 314,"2) The finding that regularizing the RNN makes it more closely match the neural code is also foreshadowed somewhat by the 2015 Nature Neuro paper by Susillo et al.[finding-NEU], [CMP-NEU]",finding,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 319,"3) Why the different initializations for the recurrent weights for the hexagonal vs other environments?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 320,"I'm guessing it's because the RNNs don't work in all environments with the same initialization (i.e., they either don't look like EC, or they don't obtain small errors in the navigation task).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 321,"That seems important to explain more thoroughly than is done in the current text.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 322,"4) What happens with ongoing training?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 324,"With on-going (continous) training, do the RNN neurons' spatial tuning remain stable, or do they continue to drift (so that border cells turn into grid cells turn into irregular cells, or some such)? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 325,"That result could make some predictions for experiment, that would be testable with chronic methods (like Ca2+ imaging) that can record from the same neurons over multiple experimental sessions.[result-NEU, experiment-NEU], [EMP-NEU]",result,experiment,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 326,"5) It would be nice to more quantitatively map out the relation between speed tuning, direction tuning, and spatial tuning (illustrated in Fig. 3).[Fig-NEU], [SUB-NEU]",Fig,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 327,"Specifically, I would quantify the cells' direction tuning using the circular variance methods that people use for studying retinal direction selective neurons.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 328,"And I would quantify speed tuning via something like the slope of the firing rate vs speed curves.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 329,"And quantify spatial tuning somehow (a natural method would be to use the sparsity measures sometimes applied to neural data to quantify how selective the spatial profile is to one or a few specific locations).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 330,"Then make scatter plots of these quantities against each other.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 331,"Basically, I'd love to see the trends for how these types of tuning relate to each other over the whole populations: those trends could then be tested against experimental data (possibly in a future study).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 338,"The reasoning here is that the image feature space may not be semantically organized so that we are not guaranteed that a small perturbation of an image vector will yield image vectors that correspond to semantically similar images (belonging to the same class).[reasoning-NEU], [EMP-NEU]",reasoning,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 346,"They claim that these augmentation types provide orthogonal benefits and can be combined to yield superior results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 347,"Overall I think this paper addresses an important problem in an interesting way,[paper-POS, problem-NEU], [EMP-POS]",paper,problem,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 348,"but there is a number of ways in which it can be improved, detailed in the comments below.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 349,"Comments: -- Since the authors are using a pre-trained VGG for to embed each image, I'm wondering to what extent they are actually doing one-shot learning here.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 350,"In other words, the test set of a dataset that is used for evaluation might contain some classes that were also present in the training set that VGG was originally trained on.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 352,"Can the VGG be instead trained from scratch in an end-to-end way in this model?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 353,"-- A number of things were unclear to me with respect to the details of the training process: the feature extractor (VGG) is pre-trained.[training process-NEU], [CLA-NEG]",training process,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 354,"Is this finetuned during training?[training-NEU], [EMP-NEU]",training,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 355,"If so, is this done jointly with the training of the auto-encoder?[training-NEU], [EMP-NEU]",training,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 356,"Further, is the auto-encoder trained separately or jointly with the training of 
the one-shot learning classifier?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 357,"-- While the authors have convinced me that data augmentation indeed significantly improves the performance in the domains considered (based on the results in Table 1 and Figure 5a),[performance-POS, results-POS, Table-NEU, Figure-NEU], [EMP-POS]",performance,results,Table,Figure,,,EMP,,,,,POS,POS,NEU,NEU,,,POS,,,, 358,"I am not convinced that augmentation in the proposed manner leads to a greater improvement than just augmenting in the image feature domain.[improvement-NEU], [EMP-NEG]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 359,"In particular, in Table 2, where the different types of augmentation are compared against each other, we observe similar results between augmenting only in the image feature space versus augmenting only in the semantic feature space (ie we observe that FeatG performs similarly as SemG and as SemN).[Table-NEU, results-NEG], [EMP-NEG]",Table,results,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 360,"When combining multiple types of augmentation the results are better,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 362,"Specifically, the authors say that for each image they produce 5 additional virtual data points, but when multiple methods are combined, does this mean 5 from each method? Or 5 overall? If it's the former, the increased performance may merely be attributed to using more data.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 364,"-- Comparison with existing work: There has been a lot of work recently on one-shot and few-shot learning that would be interesting to compare against.[work-NEU], [CMP-NEU, SUB-NEU]",work,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 365,"In particular, mini-ImageNet is a commonly-used benchmark for this task that this approach can be applied to for comparison with recent methods that do not use data augmentation.[benchmark-NEU, task-NEU, comparison-NEU], [CMP-NEU, SUB-NEU]",benchmark,task,comparison,,,,CMP,SUB,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 369,"-- A suggestion: As future work I would be very interested to see if this method can be incorporated into common few-shot learning models to on-the-fly generate additional training examples from the support set of each episode that these approaches use for training.[future work-NEU, method-NEU, approaches-NEU], [IMP-NEU]",future work,method,approaches,,,,IMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 373,"I like the presentation and writing of this paper.[presentation-POS, writing-POS], [CLA-POS, PNF-POS]",presentation,writing,,,,,CLA,PNF,,,,POS,POS,,,,,POS,POS,,, 374,"However, I find it uneasy to fully evaluate the merit of this paper, mainly because the wide-layer assumption seems somewhat artificial and makes the corresponding results somewhat expected.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 376,"This is not surprising.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 377,"It would be interesting to make the results more quantitive, e.g., to quantify the tradeoff between having local minimums and having nonzero training error. 
[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 380,"Overall, I feel that the paper is hard to understand and that it would benefit from more clarity, e.g., section 3.3 states that decoding from the softmax q-distribution is similar to the Bayes decision rule.[paper-NEG, section-NEU], [CLA-NEG, PNF-NEG]",paper,section,,,,,CLA,PNF,,,,NEG,NEU,,,,,NEG,NEG,,, 381,"Please elaborate on this.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 382,"Did you compare to minimum bayes risk decoding which chooses the output with the lowest expected risk amongst a set of candidates?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 384,"However, the methods analyzed in this paper also require sampling (cf. Appendix D.2.4 where you mention a sample size of 10),[methods-NEU], [SUB-NEU, EMP-NEU]",methods,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 385,"Please explain the difference.[difference-NEU], [SUB-NEU, EMP-NEU]",difference,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 391,"An experimental comparison is needed.[experimental comparison-NEU], [CMP-NEU]",experimental comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 392,"Cotterell et al., EACL 2017 Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis: This paper also derives a tensor factorization based approach for learning word embeddings for different covariates.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 394,"Due to these two citations, the novelty of both the problem set-up of learning different embeddings for each covariate and the novelty of the tensor factorization based model are limited.[citations-NEG, novelty-NEG], [NOV-NEG]",citations,novelty,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 395,"The writing is ok.[writing-NEU], [CLA-NEU]",writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 396,"I appreciated the set-up of the introduction with the two questions.[setup-POS, introduction-POS], [PNF-POS]",setup,introduction,,,,,PNF,,,,,POS,POS,,,,,POS,,,, 397,"However, the questions themselves could have been formulated differently: Q1: the way Q1 is formulated makes it sound like the covariates could be both discrete and continuous while the method presented later in the paper is only for discrete covariates (i.e. group structure of the data).[questions-NEU], [EMP-NEU]",questions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 398,"Q2: The authors mention topic alignment without specifying what the topics are aligned to.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 399,"It would be clearer if they stated explicitly that the alignment is between covariate-specific embeddings.[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 400,"It is also distracting that they call the embedding dimensions topics.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 402,"In the model section, the paragraphs otation and objective function and discussion are clear.[model section-POS], [CLA-POS]",model section,,,,,,CLA,,,,,POS,,,,,,POS,,,, 403,"I also liked the idea of having the section A geometric view of embeddings and tensor decomposition, but that section needs to be improved.[idea-POS, section-NEU], [EMP-POS]",idea,section,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 404,"For example, the authors describe RandWalk (Arora et al. 2016) but how their work falls into that framework is unclear.[work-NEU], [CMP-NEG]",work,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 405,"In the third paragraph, starting with Therefore we consider a natural extension of this model, ... it is unclear which model the authors are referring to. 
(RandWalk or their tensor factorization?).[model-NEG], [CMP-NEG, CLA-NEG]",model,,,,,,CMP,CLA,,,,NEG,,,,,,NEG,NEG,,, 406,"What are the context vectors in Figure 1? [Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 409,"In the last paragraph, beginning with Note that this is essentially saying..., I don't agree with the argument that the base embeddings decompose into independent topics.[paragraph-NEG], [EMP-NEG]",paragraph,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 410,"The dimensions of the base embeddings are some kind of latent attributes and each individual dimension could be used by the model to capture a variety of attributes.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 412,"Also, the qualitative results in Table 3 do not convince me that the embedding dimensions represent topics.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 415,"Hence, the apparent semantic coherence in what the authors call topics.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 416,"The authors present multiple qualitative and quantitative evaluations.[evaluations-POS], [SUB-POS]",evaluations,,,,,,SUB,,,,,POS,,,,,,POS,,,, 417,"The clustering by weight (4.1.) is nice and convincing that the model learns something useful.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 418,"4.2, the only quantitative analysis was missing some details.[quantitative analysis-NEG], [SUB-NEG]",quantitative analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 419,"Please give references for the evaluation metrics used, for proper credit and so people can look up these tasks.[references-NEG, tasks-NEU], [SUB-NEG]",references,tasks,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 420,"Also, comparison needed to fitting GloVe on the entire corpus (without covariates) and existing methods Rudolph et al. 2017 and Cotterell et al. 2017.[comparison-NEU], [CMP-NEU]",comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 422,"However, for the covariate specific analogies (5.3.) the authors could also analyze word similarities without the analogy component and probably see similar qualitative results.[qualitative results-NEU], [EMP-NEU]",qualitative results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 423,"Specifically, they could analyze for a set of query words, what the most similar words are in the embeddings obtained from different subsections of the data.[analyze-NEU], [EMP-NEU]",analyze,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 425,"+ the tensor factorization set-up ensures that the embedding dimensions are aligned + clustering by weights (4.1) is useful and seems coherent + covariate-specific analogies are a creative analysis[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 426,"CONS: - problem set-up not novel and existing approach not cited (experimental comparison needed)[problem setup-NEG], [NOV-NEG]",problem setup,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 427,"- interpretation of embedding dimensions as topics not convincing[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 428,"- connection to Rand-Walk (Aurora 2016) not stated precisely enough[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 429,"- quantitative results (Table 1) too little detail: * why is this metric appropriate[quantitative results-NEG], [EMP-NEU]",quantitative results,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 430,"? 
* comparison to GloVe on the entire corpus (not covariate specific) * no reference for the metrics used (AP, BLESS, etc.?)[comparison-NEG, reference-NEG], [CMP-NEG]",comparison,reference,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 431,"- covariate specific analogies presented confusingly and similar but simpler analysis might be possible by looking at variance in neighbours v_b and v_d without involving v_a and v_c (i.e. don't talk about analogies but about similarities)[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 436,"I'm not sure there is something specific I'm proposing here, I do understand the value of the formulation given in the work, I just find it strange that model based RL is not mention at all in the paper.[work-NEG], [EMP-NEG]",work,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 437,"I think reading the paper, it should be much clearer how the embedding is computed for Atari, and how this choice was made.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 438,"Going through the paper I'm not sure I know how this latent space is constructed.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 439,"This however should be quite important.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 440,"The goal function tries to predict states in this latent space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 441,"So the simpler the structure of this latent space, the easier it should be to train a goal function, and hence quickly adapt to the current reward scheme.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 449,"What hyper-parameters are used.[hyperparameters-NEG], [CLA-NEG]",hyperparameters,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 450,"What is the variance between the seeds.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 451,"I feel that while the proposed solution is very intuitive, and probably works as described,[proposed solution-POS], [EMP-POS]",proposed solution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 452,"the paper does not do a great job at properly comparing with baseline and make sure the results are solid.[paper-NEG, baseline-NEG, results-NEG], [CMP-NEG]",paper,baseline,results,,,,CMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 453,"In particular looking at Riverraid-new is the advantage you have there significant?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 454,"How does the game do on the original task?[task-NEU], [EMP-NEU]",task,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 455,"The plots could also use a bit of help.[plots-NEU], [EMP-NEU]",plots,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 456,"Lines should be thicker.[Lines-NEG], [PNF-NEG]",Lines,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 457,"Even when zooming, distinguishing between colors is not easy.[colors-NEG], [PNF-NEG]",colors,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 458,"Because there are more than two lines in some plots, it can also hurt people that can't distinguish colors easily.[lines-NEG, colors-NEG], [PNF-NEG]]",lines,colors,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 461,"In general, this is an interesting direction to explore, the idea is interesting,;[idea-POS], [IMP-POS]",idea,,,,,,IMP,,,,,POS,,,,,,POS,,,, 462,"however, I would like to see more experiments.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 465,"2. 
The experimental results are fairly weak compared to the other methods that also uses many layers.[experimental results-NEG, other methods-NEU], [CMP-NEU]",experimental results,other methods,,,,,CMP,,,,,NEG,NEU,,,,,NEU,,,, 466,"For PTB and Text8, the results are comparable to recurrent batchnorm with similar number of parameters, however the recurrent batchnorm model has only 1 layer, whereas the proposed architecture has 36 layers.[results-NEU, proposed architecture-NEU], [EMP-NEU]",results,proposed architecture,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 467,"3. It would also be nice to show results on tasks that involve long term dependencies, such as speech modeling.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 468,"4. If the authors could test out the new activation function on LSTMs, it would be interesting to perform a comparison between LSTM baseline, LSTM + new activation function, LSTM + recurrent batch norm.[comparison-NEU], [CMP-NEU]",comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 469,"5. It would be nice to see the gradient flow with the new activation function compared to the ones without.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 470,"6. The theorems and proofs are rather preliminary, they may not necessarily have to be presented as theorems.[theorems-NEG, proofs-NEG], [PNF-NEU]",theorems,proofs,,,,,PNF,,,,,NEG,NEG,,,,,NEU,,,, 474,"The resulting iterative inference framework is applied to a couple of small datasets and shown to produce both faster convergence and a better likelihood estimate.[framework-POS], [EMP-POS]",framework,,,,,,EMP,,,,,POS,,,,,,POS,,,, 475,"Although probably difficult for someone to understand that is not already familiar with VAE models, I felt that this paper was nonetheless clear and well-presented, with a fair amount of useful background information and context.[paper-POS], [CLA-POS, PNF-POS, CMP-POS]",paper,,,,,,CLA,PNF,CMP,,,POS,,,,,,POS,POS,POS,, 476,"From a novelty standpoint though, the paper is not especially strong given that it represents a fairly straightforward application of (Andrychowicz et al., 2016).[paper-NEG], [NOV-NEG, CMP-NEG]",paper,,,,,,NOV,CMP,,,,NEG,,,,,,NEG,NEG,,, 477,"Indeed the paper perhaps anticipates this perspective and preemptively offers that variational inference is a qualitatively different optimization problem than that considered in (Andrychowicz et al., 2016), and also that non-recurrent optimization models are being used for the inference task, unlike prior work.[prior work-NEG], [NOV-NEG, CMP-NEG]",prior work,,,,,,NOV,CMP,,,,NEG,,,,,,NEG,NEG,,, 478,"But to me, these are rather minor differentiating factors, since learning-to-learn is a quite general concept already, and the exact model structure is not the key novel ingredient.[model structure-NEG], [NOV-NEG, CMP-NEG]",model structure,,,,,,NOV,CMP,,,,NEG,,,,,,NEG,NEG,,, 479,"That being said, the present use for variational inference nonetheless seems like a nice application, and the paper presents some useful insights such as Section 4.1 about approximating posterior gradients.[paper-POS, Section-POS], [EMP-POS]",paper,Section,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 481,"While these results are enlightening,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 482,"most of the conclusions are not entirely unexpected.[conclusions-NEG], [EMP-NEG]",conclusions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 483,"For example, given that the model is directly trained with the iterative inference criteria in place, the reconstructions from Fig. 
4 seem like exactly what we would anticipate, with the last iteration producing the best result.[model-POS, Fig-POS, respect-POS], [EMP-POS]",model,Fig,respect,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 485,"And there is no demonstration of reconstruction quality relative to existing models, which could be helpful for evaluating relative performance.[existing models-NEG], [SUB-NEG, CMP-NEG]",existing models,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 487,"In terms of Fig. 5(b) and Table 1, the proposed approach does produce significantly better values of the ELBO critera; however, is this really an apples-to-apples comparison?[Table-POS, proposed approach-POS], [EMP-POS]",Table,proposed approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 488,"For example, does the standard VAE have the same number of parameters/degrees-of-freedom as the iterative inference model, or might eq. (4) involve fewer parameters than eq. (5) since there are fewer inputs? Overall, I wonder whether iterative inference is better than standard inference with eq. (4), or whether the recurrent structure from eq. (5) just happens to implicitly create a better neural network architecture for the few examples under consideration.[eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 489,"In other words, if one plays around with the standard inference architecture a bit, perhaps similar results could be obtained.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 490,"Other minor comment: * In Fig. 5(a), it seems like the performance of the standard inference model is still improving[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 491,"but the iterative inference model has mostly saturated.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 492,"* A downside of the iterative inference model not discussed in the paper is that it requires computing gradients of the objective even at test time, whereas the standard VAE model would not.[model-NEG, paper-NEG], [SUB-NEG]]",model,paper,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 493,"This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models.[paper-NEU, previous results-NEU], [EMP-NEU]",paper,previous results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 494,"It experimentally shows that the proposed differentially private LSTM achieves comparable utility compared to the non-private model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 495,"The idea of training differentially private neural network is interesting and very important to the machine learning + differential privacy community.[idea-POS], [IMP-POS]",idea,,,,,,IMP,,,,,POS,,,,,,POS,,,, 496,"This work makes a pretty significant contribution to such topic.[work-POS, contribution-POS], [IMP-POS]",work,contribution,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 497,"It adapts techniques from some previous work to address the difficulties in training language model and providing user-level privacy.[techniques-NEU, previous work-NEU], [EMP-NEU]",techniques,previous work,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 498,"The experiment shows good privacy and utility.[experiment-POS], [EMP-POS]",experiment,,,,,,EMP,,,,,POS,,,,,,POS,,,, 499,"The presentation of the paper can be improved a bit.[presentation-NEG], [PNF-NEG]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 500,"For example, it might be better to have a preliminary section before Section2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy 
amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper.[section-NEU], [PNF-NEU]",section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 501,"Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work. [introduction-NEU, contributions-NEU], [PNF-NEU]",introduction,contributions,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 505,"A nice series of experimental validations demonstrate the various types of interactions can be detected, while it also fairly clarifies the limitations.[experimental validations-POS, limitations-NEU], [EMP-POS]",experimental validations,limitations,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 507,"But given the flexibility of function representations, the use of neural networks would be worth rethinking, and this work would give one clear example. I liked the overall ideas which is clean and simple, but also found several points still confusing and unclear.[ideas-POS], [EMP-POS, CLA-NEU]",ideas,,,,,,EMP,CLA,,,,POS,,,,,,POS,NEU,,, 508,"1) One of the keys behind this method is the architecture described in 4.1.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 509,"But this part sounds quite heuristic, and it is unclear to me how this can affect to the facts such as Theorem 4 and Algorithm 1.[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 510,"Absorbing the main effect is not critical to these facts?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 511,"In a standard sense of statistics, interaction would be something like residuals after removing the main (additive) effect. (like a standard test by a likelihood ratio test for models with vs without interactions)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 512,"2) the description about the neural network for the main effect is a bit unclear.[description-NEG], [EMP-NEG]",description,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 513,"For example, what does exactly mean the 'networks with univariate inputs for each input variable'?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 514,"Is my guessing that it is a 1-10-10-10-1 network (in the experiments) correct...? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 515,"Also, do g_i and g_i' in the GAM model (sec 4.3) correspond to the two networks for the main and interaction effects respectively?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 516,"3) mu is finally fixed at min function, and I'm not sure why this is abstracted throughout the manuscript.[manuscript-NEU], [EMP-NEU]",manuscript,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 517,"Is it for considering the requirements for any possible criteria?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 518,"Pros: - detecting (any order / any form of) statistical interactions by neural networks is provided.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 519,"- nice experimental setup and evaluations with comparisons to relevant baselines by ANOVA, HierLasso, and Additive Groves.[experimental setup-POS], [EMP-POS]",experimental setup,,,,,,EMP,,,,,POS,,,,,,POS,,,, 520,"Cons: - some parts of explanations to support the idea has unclear relationship to what was actually done, in particular, for how to cancel out the main effect.[explanations-NEG], [EMP-NEG]",explanations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 521,"- the neural network architecture with L1 regularization is a bit heuristic, and I'm not surely confident that this architecture can capture only the interaction effect by cancelling out the main effect. [architecture-NEG], [EMP-NEG]",architecture,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 524,"While the idea is sound,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 525,"many design choices of the system is questionable.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 526,"The problem is particularly aggravated by the poor presentation of the paper, creating countless confusions for readers.[presentation-NEG], [PNF-NEG]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 527,"I do not recommend the acceptance of this draft.[acceptance-NEG], [REC-NEG]",acceptance,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 528,"Compared with GAN, traditional graph analytics is model-specific and non-adaptive to training data.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 529,"This is also the case for hierarchical community structures.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 530,"By building the whole architecture on the Louvain method, the proposed method is by no means truly model-agnostic.[architecture-NEU, proposed method-NEG], [EMP-NEG]",architecture,proposed method,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 531,"In fact, if the layers are fine enough, a significant portion of the network structure will be captured by the sum-up module instead of the GAN modules, rendering the overall behavior dominated by the community detection algorithm.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 532,"The evaluation remains superficial with minimal quantitative comparisons.[evaluation-NEG], [CMP-NEG, SUB-NEG]",evaluation,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 533,"Treating degree distribution and clustering coefficient (appeared as cluster coefficient in draft) as global features is problematic.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 535,"The writing of the draft leaves much to be desired.[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 536,"The description of the architecture is confusing with design choices never clearly explained.[description-NEG, architecture-NEU], [PNF-NEG]",description,architecture,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 537,"Multiple concepts needs better introduction, including the very name of their model GTI and the idea of stage 
identification.[concepts-NEG], [EMP-NEG]",concepts,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 538,"Not to mention numerous grammatical errors, I suggest the authors seek professional English writing services.[grammatical errors-NEG], [CLA-NEG]",grammatical errors,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 541,"Firstly, I suggest the authors rewrite the end of the introduction.[introduction-NEG], [CLA-NEG]",introduction,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 542,"The current version tends to mix everything together and makes the misleading claim.[claim-NEG], [CLA-NEG]",claim,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 543,"When I read the paper, I thought the speeding up mechanism could give both speed up and performance boost, and lead to the 82.2 F1.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 544,"But it turns out that the above improvements are achieved with at least three different ideas: (1) the CNN+self-attention module; (2) the entire model architecture design; and (3) the data augmentation method.[improvements-NEU], [EMP-NEU]",improvements,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 545,"Secondly, none of the above three ideas are well evaluated in terms of both speedup and RC performance, and I will comment in details as follows:[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 546,"(1) The CNN+self-attention was mainly borrowing the idea from (Vaswani et al., 2017a) from NMT to RC.[idea-NEU], [NOV-NEU]",idea,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 547,"The novelty is limited[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 548,"but it is a good idea to speed up the RC models.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 549,"However, as the authors hoped to claim that this module could contribute to both speedup and RC performance, it will be necessary to show the RC performance of the same model architecture, but replacing the CNNs with LSTMs.[performance-NEU, model architecture-NEU], [SUB-NEU, EMP-NEU]",performance,model architecture,,,,,SUB,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 550,"Only if the proposed architecture still gives better results, the claims in the introduction can be considered correct.[proposed architecture-NEU, results-NEU, claims-NEU], [EMP-NEU]",proposed architecture,results,claims,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 551,"(2) I feel that the model design is the main reason for the good overall RC performance.[model design-NEU, performance-NEU], [EMP-NEU]",model design,performance,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 552,"However, in the paper there is no motivation about why the architecture was designed like this.[motivation-NEG, architecture-NEU], [SUB-NEG]",motivation,architecture,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 553,"Moreover, the whole model architecture is only evaluated on the SQuAD dataset.[dataset-NEG], [SUB-NEG, EMP-NEG]",dataset,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 554,"As a result, it is not convincing that the system design has good generalization.[system design-NEG], [EMP-NEG]",system design,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 555,"If in (1) it is observed that using LSTMs in the model instead of CNNs could give on par or better results, it will be necessary to test the proposed model architecture on multiple datasets, as well as conducting more ablation tests about the model architecture itself.[results-NEU, proposed model architecture-NEU, datasets-NEU], [EMP-NEG]",results,proposed model architecture,datasets,,,,EMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 556,"(3) I like the idea of data augmentation with paraphrasing.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 
557,"Currently, the improvement is only marginal,[improvement-NEU], [EMP-NEU]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 559,"For example, training NMT models with larger parallel corpora; training NMT models with different language pairs with English as the pivot; and better strategies to select the generated passages for data augmentation.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 560,"n I am looking forward to the test performance of this work on SQuAD.[performance-NEU], [SUB-NEU]",performance,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 565,"They present experimental results on their own data set, evaluating only against simpler baselines of their own VAE approach, not the pre-existing KB methods.[experimental results-NEG, baselines-NEG], [EMP-NEG]",experimental results,baselines,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 568,"I don't find the argument convincing.[argument-NEG], [EMP-NEG]",argument,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 572,"But the same efficient search is possible in many of the classic discriminatively-trained KB completion models also.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 573,"It is admirable that the authors use an interesting (and to my knowledge novel) data set.[data set-POS], [EMP-POS]",data set,,,,,,EMP,,,,,POS,,,,,,POS,,,, 574,"But the method should also be evaluated on multiple now-standard data sets, such as FB15K-237 or NELL-995.[method-NEG, data sets-NEG], [SUB-NEG, EMP-NEG]",method,data sets,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 575,"The method is evaluated only against their own VAE-based alternatives. [method-NEG], [SUB-NEG]",method,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 576,"It should be evaluated against multiple other standard KB completion methods from the literature, such as Jason Weston's Trans-E, Richard Socher's Tensor Neural Nets, and Neelakantan's RNNs. [literature-NEG], [CMP-NEG, SUB-NEG]",literature,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 578,"The authors disclosed their identity and violated the terms of double blind reviews. Page 2 In our previous work (Aly & Dugan, 2017) Also the page 1 is full of typos and hard to read.[page-NEG, typos-NEG], [PNF-NEG, CLA-NEG]",page,typos,,,,,PNF,CLA,,,,NEG,NEG,,,,,NEG,NEG,,, 582,"All in all the paper is very clear and interesting.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 583,"The results obtained on the considered problem are clearly of great interest, especially when compared to state-of-the-art assimilation strategies such as the one of Bu00e9ru00e9ziat.[results-POS], [CMP-POS, IMP-POS]",results,,,,,,CMP,IMP,,,,POS,,,,,,POS,POS,,, 584,"While the learning architecture is not original in itself, it is shown that a proper physical regularization greatly improves the performance.[architecture-POS], [EMP-POS, NOV-NEU]",architecture,,,,,,EMP,NOV,,,,POS,,,,,,POS,NEU,,, 585,"For these reasons I believe the paper has sufficient merits to be published at ICLR. 
[paper-POS], [APR-POS, REC-POS]",paper,,,,,,APR,REC,,,,POS,,,,,,POS,POS,,, 586,"That being said, I believe that some discussions could strengthen the paper: - Most classical variational assimilation schemes are stochastic in nature, notably by incorporating uncertainties in the observation or physical evolution models.[discussions-NEU], [EMP-NEU]",discussions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 587,"It is still unclear how those uncertainties can be integrated in the model[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 588,"; - Assimilation methods are usually independent of the type of data at hand.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 589,"It is not clear how the model learnt on one particular type of data transpose to other data sequences. [model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 590,"Notably, the question of transfer and generalization is of high relevance here.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 592,"I believe this type of issue has to be examinated for this type of approach to be widely use in inverse physical problems.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 594,"**** I acknowledge the author's comments and improve my score to 7.[score-POS], [REC-POS]",score,,,,,,REC,,,,,POS,,,,,,POS,,,, 598,"The paper ultimately is light on comprehensive evaluation of popular models on a variety of datasets and as such does not quite yield the insights it could.[comprehensive evaluation-NEG], [SUB-NEG]",comprehensive evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 599,"Significance: The proposed methodology is relevant, because disentangling representations are an active field of research and currently are not evaluated in a standardized way.[proposed methodology-POS], [EMP-POS]",proposed methodology,,,,,,EMP,,,,,POS,,,,,,POS,,,, 600,"Clarity: The paper is lucidly written and very understandable.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 601,"Quality: The authors use formal concepts from information theory to underpin their basic idea of recovering latent factors and have spent a commendable amount of effort on clarifying different aspects on why these three measures are relevant.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 602,"A few comments: 1. How do the authors propose to deal with multimodal true latent factors?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 603,"What if multiple sets of z can generate the same observations and how does the evaluation of disentanglement fairly work if the underlying model cannot be uniquely recovered from the data?[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 604,"2. Scoring disentanglement against known sources of variation is sensible and studied well here, but how would the authors evaluate or propose to evaluate in datasets with unknown sources of variation?[datasets-NEU], [EMP-NEU]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 605,"3. 
the actual sources of variation are interpretable and explicit measurable quantities here.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 606,"However, oftentimes a source of variation can be a variable that is hard or impossible to express in a simple vector z (for instance the sentiment of a scene) even when these factors are known.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 607,"How do the authors propose to move past narrow definitions of factors of variation and handle more complex variables?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 608,"Arguably, disentangling is a step towards concept learning and concepts might be harder to formalize than the approach taken here where in the experiment the variables are well-behaved and relatively easy to quantify since they relate to image formation physics. 4. For a paper introducing a formal experimental framework and metrics or evaluation I find that the paper is light on experiments and evaluation.[approach-NEU, experiments-NEG, evaluation-NEG], [SUB-NEG]",approach,experiments,evaluation,,,,SUB,,,,,NEU,NEG,NEG,,,,NEG,,,, 609,"I would hope that at the very least a broad range of generative models and some recognition models are used to evaluate here, especially a variational autoencoder, beta-VAE and so on.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 610,"Furthermore the authors could consider applying their framework to other datasets and offering a benchmark experiment and code for the community to establish this as a means of evaluation to maximize the impact of a paper aimed at reproducibility and good science.[datasets-NEU, benchmark experiment-NEU], [IMP-NEU]",datasets,benchmark experiment,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 611,"Novelty: Previous papers like beta-VAE (Higgins et al. 2017) and Bayesian Representation Learning With Oracle Constraints by Karaletsos et al (ICLR 16) have followed similar experimental protocols inspired by the same underlying idea of recovering known latent factors, but have fallen short of proposing a formal framework like this paper does.[Novelty-POS], [NOV-POS]",Novelty,,,,,,NOV,,,,,POS,,,,,,POS,,,, 612,"It would be good to add a section gathering such attempts at evaluation previously made and trying to unify them under the proposed framework. [section-NEU], [SUB-NEU, CMP-NEU]",section,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 625,"Evaluation: Significance: The question whether GANs learn the target distribution is important and any significant contribution to this discussion is of value.[contribution-POS, discussion-POS], [EMP-POS]",contribution,discussion,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 626,"Clarity: The paper is written well and the issues raised are well motivated and proper background is given.[paper-POS], [CLA-POS, IMP-POS]",paper,,,,,,CLA,IMP,,,,POS,,,,,,POS,POS,,, 627,"Originality: The main idea of trying to estimate the size of the support using a few samples by using birthday theorem seems new.[main idea-POS], [EMP-POS]",main idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 628,"Quality: The main idea of this work is to give a estimation technique for the support size for the output distribution of GANs. 
[main idea-NEU], [EMP-NEU]",main idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 631,"The relevance of this problem is that there are auctions for plate numbers in Hong Kong, and predicting their value is a sensible activity in that context.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 632,"I find that the description of the applied problem is quite interesting; in fact overall the paper is well written and very easy to follow.[description-POS, problem-POS, paper-POS], [CLA-POS]",description,problem,paper,,,,CLA,,,,,POS,POS,POS,,,,POS,,,, 633,"There are some typos and grammatical problems (indicated below),[typos-NEG], [PNF-NEG]",typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 634,"but nothing really serious.[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 635,"So, the paper is relevant and well presented.[paper-POS], [PNF-POS]",paper,,,,,,PNF,,,,,POS,,,,,,POS,,,, 636,"However, I find that the proposed solution is an application of existing techniques, so it lacks on novelty and originality.[proposed solution-NEG], [NOV-NEG, CMP-NEG]",proposed solution,,,,,,NOV,CMP,,,,NEG,,,,,,NEG,NEG,,, 637,"Even though the significance of the work is apparent given the good results of the proposed neural network,[work-POS, results-POS], [IMP-POS]",work,results,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 638,"I believe that such material is more appropriate to a focused applied meeting.[material-NEU], [APR-NEU]",material,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 639,"However, even for that sort of setting I think the paper requires some additional work, as some final parts of the paper have not been tested yet (the interesting part of explanations).[paper-NEG, parts-NEG], [SUB-NEG]",paper,parts,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 640,"Hence I don't think the submission is ready for publication at this moment.[submission-NEG], [APR-NEG, REC-NEG]",submission,,,,,,APR,REC,,,,NEG,,,,,,NEG,NEG,,, 641,"Concerning the text, some questions/suggestions: - Abstract, line 1: I suppose In the Chinese society...--- are there many Chinese societies?[Abstract-NEU, line-NEU], [PNF-NEU]",Abstract,line,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 642,"- The references are not properly formatted; they should appear at (XXX YYY) but appear as XXX (YYY) in many cases, mixed with the main text.[references-NEG], [PNF-NEG]",references,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 643,"- Footnote 1, line 2: an exchange.[Footnote-NEU, line-NEU], [PNF-NEU]",Footnote,line,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 644,"- Page 2, line 12: prices.[Page-NEG, line-NEG], [PNF-NEG]",Page,line,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 645,"Among. - Please add commas/periods at the end of equations.[equations-NEU], [PNF-NEU]",equations,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 646,"- There are problems with capitalization in the references.[references-NEG], [PNF-NEG]]",references,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 648,"This algorithm can be useful because correct annotation of enough cases to train a deep model in many domains is not affordable.[algorithm-POS], [IMP-POS]",algorithm,,,,,,IMP,,,,,POS,,,,,,POS,,,, 651,"The paper is well written, easy to follow, and have good experimental study.[paper-POS, experimental study-POS], [CLA-POS, EMP-POS]",paper,experimental study,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 652,"My main problem with the paper is the lack of enough motivation and justification for the proposed method; the methodology seems pretty ad-hoc to me and there is a need for more experimental study to show how the methodology work[paper-NEG, proposed method-NEG, methodology-NEG], [SUB-NEG]. 
Here are some questions that come to my mind: (1) Why first build a student model using only the weak data, and why not use all the data together to train the student model?[student model-NEU, data-NEU], [EMP-NEU]",paper,proposed method,methodology,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 653,"To me, it seems that the algorithm first tries to learn a good representation, for which lots of data is needed and the weak training data can be useful, but why not combine it with the strong data?[algorithm-NEU, data-NEU], [EMP-NEU]",algorithm,data,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 654,"(2) What is the sensitivity of the procedure to how weakly the weak data are annotated (this could be studied using both toy examples and real-world examples)?[procedure-NEU], [EMP-NEU]",procedure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 655,"(3) The authors explicitly suggest using an unsupervised method (check Baseline no.1) to annotate data weakly?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 656,"Why not learn the representation using an unsupervised learning method (unsupervised pre-training)?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 657,"This should be at least one of the baselines.[baselines-NEU], [EMP-NEU]",baselines,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 658,"(4) The idea of using surrogate labels to learn representations is also not new.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 659,"One example work is Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks. The authors didn't compare their method with this one.[example work-NEU, method-NEU], [CMP-NEU]]",example work,method,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 663,"Overall, although the result is not very surprising, the approach is well justified and extensively tested.[result-NEG, approach-POS], [EMP-POS]",result,approach,,,,,EMP,,,,,NEG,POS,,,,,POS,,,, 665,"Comments: 1- The results are somewhat unsurprising: as we are able to learn generative models of each task, we can use them to train on all tasks at the same time, and beat algorithms that do not use this replay approach.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 666,"2- It is unclear whether the approach provides a benefit for a particular application: as the task information has to be available, training separate task-specific architectures or using classical multitask learning approaches would not suffer from catastrophic forgetting and perform better (I assume).[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 667,"3- So the main benefit of the approach seems to point towards the direction of what possibly happens in real brains.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 668,"It is interesting to see how the authors address practical issues of training based on replay, and it shows two differences with real brains: 1/ what we know about episodic memory consolidation (the system modeled in this paper) is closer to unsupervised learning; as a consequence, information such as task ID and a dictionary for balancing samples would not be available; 2/ the cortex (long-term memory) already learns during wakefulness, while in the proposed algorithm this procedure is restricted to replay-based learning during sleep.[issues-POS], [EMP-POS]",issues,,,,,,EMP,,,,,POS,,,,,,POS,,,, 669,"4- Due to these differences, in my view, this work avoids addressing directly the most critical and difficult issues of catastrophic forgetting, which relates more to finding optimal plasticity rules for the network in an unsupervised 
setting.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 670,"5- The writing could have been more concise and the authors could make an effort to stay closer to the recommended number of pages.[writing-NEU], [CLA-NEU, PNF-NEU]",writing,,,,,,CLA,PNF,,,,NEU,,,,,,NEU,NEU,,, 678,"This is a timely and interesting topic.[topic-POS], [EMP-POS]",topic,,,,,,EMP,,,,,POS,,,,,,POS,,,, 679,"I enjoyed learning about the authors' proposed approach to a practical learning method based on the information bottleneck.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 680,"However, the writing made it challenging and the experimental protocol raised some serious questions.[writing-NEG, experimental protocol-NEU], [CLA-NEG]",writing,experimental protocol,,,,,CLA,,,,,NEG,NEU,,,,,NEG,,,, 681,"In summary, I think the paper needs very careful editing for grammar and language and, more importantly, it needs solid experiments before it's ready for publication.[grammar-NEG, experiments-NEU], [CLA-NEG, SUB-NEG]",grammar,experiments,,,,,CLA,SUB,,,,NEG,NEU,,,,,NEG,NEG,,, 682,"When that is done, it would make an exciting contribution to the community.[contribution-NEU], [IMP-NEU]",contribution,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 685,"The PIB objective is new and different to the other objectives.[objective-NEU], [EMP-NEU]",objective,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 686,"Do all objectives happen to yield their best performance under the same LR?[performance-NEU], [EMP-NEU, CMP-NEU]",performance,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 687,"Maybe so, but we won't know unless the experimental protocol prescribes a sufficient range of LRs for each architecture.[experimental protocol-NEU], [EMP-NEU]",experimental protocol,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 688,"In light of this, the fact that SFNN is given extra epochs in Figure 4 does not mean much.[Figure-NEU], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 691,"Why did the authors make this choice?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 692,"Is 8 good for architectures A through E? 3.[architectures-NEU], [SUB-NEU]",architectures,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 693,"On a related note, the authors only seem to report results from a single random seed (i.e., deterministic architectures are trained exactly once).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 694,"I would like to see results from a few different random seeds.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 695,"As a result of comments 1, 2, 3, even though I do believe in the merit of the intuition pursued and the techniques proposed, I am not convinced about the main claim of the paper.[techniques-POS, main claim-NEG], [EMP-NEG]",techniques,main claim,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 696,"In particular, the experiments are not rigorous enough to give serious evidence that PIBs improve generalization and training speed.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 697,"The paper needs some careful editing both for language (cf. 
following point) but also notation.[language-NEG, notation-NEG], [CLA-NEG, PNF-NEG]",language,notation,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 698,"The authors use notation p_D() in eqn (12) without defining it.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 699,"My best guess is that it is the same as p_u(), the underlying data distribution, but makes parsing the paper hard.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 700,"Finally there are a few steps that are not explained: for example, no justification is given for the inequality in eqn (13). 5.[justification-NEG, eqn-NEG], [EMP-NEG, SUB-NEG]",justification,eqn,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 701,"Language: the paper needs some careful editing to correct numerous language/grammar issues.[issues-NEG], [CLA-NEG]",issues,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 702,"At times it is detrimental to understanding.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 703,"For example I had to read the text leading up to eqn (8) a number of times.[text-NEG, eqn-NEG], [CLA-NEG]",text,eqn,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 704,"6. There is no discussion of computational complexity and wall-clock time comparisons.[discussion-NEG], [SUB-NEG, CMP-NEG]",discussion,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 705,"To be clear, I think that even if the proposed approach were to be slower than the state of the art it would still be very interesting.[proposed approach-POS], [IMP-POS]",proposed approach,,,,,,IMP,,,,,POS,,,,,,POS,,,, 706,"However, there should be some discussion and reporting of that aspect as well.[discussion-NEU], [CMP-NEU, SUB-NEU]",discussion,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 707,"Minor comments and questions: 7. Mutual information is typically typeset using a semicolon instead of a comma, eg. I(X;Z). 8.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 708,"Why is the mutual information in Figure 3 so low?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 709,"Are you perhaps using natural logarithms to estimate and plot I(Z;Y)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 710,"If this is base-2 logarithms I would expect a value close to 1. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 714,"The paper is fairly clear and these extensions are reasonable[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 715,". However, I just don't think the focus on 2D grid-based navigation has sufficient interest and impact[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 716,". It's true that the original VIN paper worked in a grid-navigation domain, but they also had a domain with a fairly different structure; I believe they used the gridworld because it was a convenient initial test case, but not because of its inherent value.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 717,"So, making improvements to help solve grid-worlds better is not so motivating[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 718,". It may be possible to motivate and demonstrate the methods of this paper in other domains, however.[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 719,"The work on dynamic environments was an interesting step: it would have been interesting to see how the models learned for the dynamic environments differed from those for static environments.[null], [CMP-POS]",null,,,,,,CMP,,,,,,,,,,,POS,,,, 724,"Superior performance to recent baselines (e.g. 
EWC) is reported in several cases.[performance-POS], [CMP-POS]",performance,,,,,,CMP,,,,,POS,,,,,,POS,,,, 726,"Unfortunately, the paper does not go beyond the relatively simplistic setup of sequential MNIST, in contrast to some of the methods used as baselines.[null], [CMP-NEU, EMP-NEG]",null,,,,,,CMP,EMP,,,,,,,,,,NEU,NEG,,, 727,"The proposed architecture implicitly reduces the continual learning problem to a classical multitask learning (MTL) setting for the LTM, where (in the best case scenario) i.i.d. data from all encountered tasks is available during training. This setting is not ideal, though.[architecture-NEU, setting-NEG], [EMP-NEU]",architecture,setting,,,,,EMP,,,,,NEU,NEG,,,,,NEU,,,, 728,"There are several example of successful multitask learning, but it does not follow that a random grouping of several tasks immediately leads to successful MTL.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 729,"Indeed, there is good reason to doubt this in both supervised and reinforcement learning domains.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 731,"I agree that problems can be constructed where these assumptions hold, but this core assumption is limiting.[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 732,"The requirement of task labels also rules out important use cases such as following a non-stationary objective function, which is important in several realistic domains, including deep RL.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 740,"Experiments show a clear advantage during learning when compared with a vanilla DQN. [Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 741,"Nonetheless, there are some criticisms than can be made of both the method and the evaluations:[method-NEU, evaluations-NEU], [EMP-NEU]",method,evaluations,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 742,"The fear radius threshold k_r seems to add yet another hyperparameter that needs tuning.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 743,"Judging from the description of the experiments this parameter is important to the performance of the method and needs to be set experimentally.[description-NEU, experiments-NEU, parameter-NEU, performance-NEU, method-NEU], [EMP-NEU]",description,experiments,parameter,performance,method,,EMP,,,,,NEU,NEU,NEU,NEU,NEU,,NEU,,,, 744,"There seems to be no way of a priori determine a good distance as there is no way to know in advance when a catastrophe becomes unavoidable.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 745,"No empirical results on the effect of the parameter are given.[empirical results-NEG], [SUB-NEG, EMP-NEG]",empirical results,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 746,"The experimental results support the claim that this technique helps to avoid catastrophic states during initial learning.[experimental results-POS, claim-NEU, technique-POS], [EMP-POS]",experimental results,claim,technique,,,,EMP,,,,,POS,NEU,POS,,,,POS,,,, 747,"The paper however, also claims to address the longer term problem of revisiting these states once the learner forgets about them, since they are no longer part of the data generated by (close to) optimal policies.[paper-NEU], [IMP-NEU]",paper,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 748,"This problem does not seem to be really solved by this method.[problem-NEU, method-NEG], [EMP-NEG]",problem,method,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 749,"Danger and safe state replay memories are kept, but are only used to train the catastrophe classifier.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 750,"While 
the catastrophe classifier can be seen as an additional external memory, it seems that the learner will still drift away from the optimal policy and then need to be reminded by the classifier through penalties.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 751,"As such, the method wouldn't prevent catastrophic forgetting; it would just prevent the worst consequences by penalizing the agent before it reaches a danger state.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 752,"It would therefore be interesting to see some long-running experiments and analyse how often catastrophic states (or those close to them) are visited.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 753,"Overall, the current evaluations focus on performance and give little insight into the behaviour of the method.[evaluations-NEG, performance-NEU, method-NEU], [SUB-NEG, EMP-NEG]",evaluations,performance,method,,,,SUB,EMP,,,,NEG,NEU,NEU,,,,NEG,NEG,,, 755,"In general, the explanations in the paper often use confusing and imprecise language, even in formal derivations, e.g. 'if the fear model reaches arbitrarily high accuracy' or 'if the probability is negligible'.[explanations-NEG], [CLA-NEG]",explanations,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 756,"It wasn't clear to me that the properties described in Theorem 1 actually hold.[Theorem-NEG], [CLA-NEG]",Theorem,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 757,"The motivation in the appendix is very informal and no clear derivation is provided.[motivation-NEG], [PNF-NEG]",motivation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 758,"The authors seem to indicate that a minimal return can be guaranteed because the optimal policy spends a maximum of epsilon amount of time in the catastrophic states and the alternative policy simply avoids these states.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 759,"However, as the alternative policy is learnt on a different reward, it can have a very different state distribution, even for the non-catastrophic states.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 760,"It might attach all its weight to a very poor reward state in an effort to avoid the catastrophe penalty.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 761,"It is therefore not clear to me that any claims can be made about its performance without additional assumptions.[performance-NEU, assumptions-NEU], [EMP-NEG]",performance,assumptions,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 767,"This seems to contradict the theorem.[theorem-NEG], [EMP-NEG]",theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 768,"It wasn't clear what assumptions the authors make to exclude situations like this.[assumptions-NEG], [EMP-NEG]",assumptions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 776,"However, I have the following concerns about the quality and the significance: - The proposed formulation in Equation (2) is questionable.[Equation-NEU], [EMP-NEU]",Equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 779,"Since this approach is not straightforward, more theoretical analysis of the proposed method is desirable.[approach-NEU, theoretical analysis-NEU], [EMP-NEU]",approach,theoretical analysis,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 780,"- In addition to the above point, I guess the expectation is needed, as in the original formulation of GAN.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 781,"Otherwise, the proposed formulation does not make sense, as it receives only specific data points and how to accumulate objective values across data points is not defined.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 782,"- In experiments, 
although the authors say lots of datasets are used, only two datasets are used, which is not enough to examine the performance of outlier detection methods.[experiments-NEG, datasets-NEG], [SUB-NEG]",experiments,datasets,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 783,"Moreover, outliers are artificially generated in these datasets, hence there is no evaluation on pure real-world datasets.[evaluation-NEG, datasets-NEU], [EMP-NEG]",evaluation,datasets,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 784,"To achieve the better quality of the paper, I recommend to add more real-world datasets in experiments.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 785,"- As discussed in Section 2, there are already many outlier detection methods, such as distance-based outlier detection methods, but they are not compared in experiments.[Section-NEU, experiments-NEU], [CMP-NEG]",Section,experiments,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 786,"Although the authors argue that distance-based outlier detection methods do not work well for high-dimensional data, this is not always correct[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 787,". Please see the paper: -- Zimek, A., Schubert, E., Kriegel, H.-P., A survey on unsupervised outlier detection in high-dimensional numerical data, Statistical Analysis and Data Mining (2012) This paper shows that the performance gets even better for higher dimensional data if each feature is relevant.[performance-NEU], [CMP-NEU]",performance,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 788,"I recommend to add some distance-based outlier detection methods as baselines in experiments.[baselines-NEU], [CMP-NEU]",baselines,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 789,"- Since parameter tuning by cross validation cannot be used due to missing information of outliers, it is important to examine the sensitivity of the proposed method with respect to changes in its parameters (a_new, lambda, and others).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 790,"Otherwise in practice how to set these parameters to get better results is not obvious.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 791,"* The clarity of this paper is not high as the proposed method is not well explained.[clarity-NEG, proposed method-NEG], [CLA-NEG, EMP-NEG]",clarity,proposed method,,,,,CLA,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 792,"In particular, please mathematically formulate each proposed technique in Section 4.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 793,"* Since the proposed formulation is not convincing due to the above reasons and experimental evaluation is not thorough, the originality is not high.[originality-NEG], [NOV-NEU]",originality,,,,,,NOV,,,,,NEG,,,,,,NEU,,,, 794,"Minor comments: - P.1, L.5 in the third paragraph: architexture -> architecture[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 797,"Although the paper has been improved, I keep my rating due to the insufficient experimental evaluation.[rating-NEU, experimental evaluation-NEG], [REC-NEU, EMP-NEG]",rating,experimental evaluation,,,,,REC,EMP,,,,NEU,NEG,,,,,NEU,NEG,,, 802,"The idea has some novelty and the results on several tasks attempting to prove its effectiveness against systems that handle named entities in a static way.[idea-POS, results-POS], [NOV-POS]",idea,results,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 803,"One thing I hope the author could provide more clarification is the use of NER.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 804,"For example, the experimental result on structured QA task (section 3.1), where it states 
that the performance different between models of With-NE-Table and W/O-NE-Table is positioned on the OOV NEs not present in the training subset.[experimental result-NEG], [EMP-NEG]",experimental result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 805,"To my understanding, because of the presence of the NER in the With-NE-Table model, you could directly do update to the NE embeddings and query from the DB using a combination of embedding and the NE words (as the paper does), whereas the W/O-NE-Table model cannot because of lack of the NER.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 806,"This seems to prove that an NER is useful for tasks where DB queries are needed, rather than that the dynamic NE-Table construction is useful.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 807,"You could use an NER for W/O-NE-Table and update the NE embeddings, and it should be as good as With-NE-Table model (and fairer to compare with too).[null], [CMP-NEU, EMP-NEU]",null,,,,,,CMP,EMP,,,,,,,,,,NEU,NEU,,, 808,"That said, overall the paper is a nice contribution to dialogue and QA system research by pointing out a simple way of handling named entities by dynamically updating their embeddings.[contribution-POS], [IMP-POS]",contribution,,,,,,IMP,,,,,POS,,,,,,POS,,,, 809,"It would be better if the paper could point out the importance of NER for user utterances, and the fact that using the knowledge of which words are NEs in dialogue models could help in tasks where DB queries are necessary.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 814,"Although I found the results useful and potentially promising,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 815,"I did not find much insight in this paper.[insight-NEU], [EMP-NEU]",insight,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 816,"It was not clear to me why scatter (the way it is defined in the paper) would be a useful performance proxy anywhere but the first classification layer.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 817,"Once the signals from different windows are intermixed, how do you even define the windows?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 818,"Minor Second line of Section 2.1: ""lesser"" -> less or fewer [Second line-NEU, Section-NEU], [PNF-NEU]",Second line,Section,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 823,"What I like about the approach is the investigation of the interplay between unsupervised and hierarchical supervised learning in a biological context.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 824,"I agree with the authors that an integrated view of self-organization and learning across layers is presumably required to better understand biological learning.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 825,"The general methodology also makes sense to me.[methodology-POS], [EMP-POS]",methodology,,,,,,EMP,,,,,POS,,,,,,POS,,,, 826,"However, I do have concerns including two major concerns: (A) delimitation of results from earlier work; (B) numerical results (especially Tab. 1).[results-NEG, earlier work-NEU], [CMP-NEG]",results,earlier work,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 827,"(A) The paper derives the main update equation of W which combines self-organization and label-sensitive learning - Eqn. 15.[paper-NEU, Eqn-NEU], [CMP-NEU]",paper,Eqn,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 829,"The paper also states (Secs. 
1 and 2) that the network studied here is based on Hartono et al., 2015, with the main difference of the sigmoidal output layer being replaced by a softmax layer.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 830,"What is missing is a discussion of the differences regarding the later numerical experiments, and a clear delimitation to Hartono et al., 2015, when Eqn. 15 is discussed.[discussion-NEG, numerical experiments-NEU], [SUB-NEG]",discussion,numerical experiments,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 831,"What is the major structural difference to their Eqn. 13, which is discussed along very similar lines as Eqn. 15 of this paper?[Eqn-NEU], [CMP-NEU]",Eqn,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 833,"(B) A further difference to Hartono et al., 2015, is the comparison with multi-layer networks, and the presentation and discussion of this comparison is my strongest concern.[presentation-NEG, discussion-NEG, comparison-NEG], [CMP-NEG, PNF-NEG]",presentation,discussion,comparison,,,,CMP,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 836,"What I do not understand, then, are the high classification errors reported in Tab. 1.[errors-NEG, Tab-NEU], [EMP-NEG]",errors,Tab,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 837,"It is known that even basic multi-layer perceptrons (MLPs) result in much lower classification errors, e.g., for MNIST. LeCun et al., 1998, is a classical example with less than 3% error on MNIST, with many later examples that improve on these.[errors-NEU], [EMP-NEU, CMP-NEU]",errors,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 839,"Why are the classification errors for DBN and MLP in Tab. 1 so high?[errors-NEG], [EMP-NEG]",errors,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 840,"And if they are in reality much lower, then the competitiveness of s-rRBF relative to these systems in terms of classification results is questionable.[classification results-NEG], [EMP-NEG]",classification results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 841,"The table makes me have doubts regarding the competitiveness of S-rRBF.[table-NEG], [EMP-NEG]",table,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 842,"I therefore disagree with the conclusion that this paper has shown that S-rRBFs are comparable to the best performer for most of the diverse benchmark applications (last paragraph in Conclusion).[conclusion-NEG], [CMP-NEG]",conclusion,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 844,"More generally, putting the biological arguments aside, why would a 2D neighborhood relationship be helpful?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 846,"Also, if there is an intrinsic 2D hidden structure in the data, then imposing a 2D representation can help (as a sort of a prior).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 847,"But in general there may not be a 2D intrinsic property, or there is a higher-dimensional hidden structure - so why not 3D or more? Related to this, why not use an objective that would result in dynamics similar to a growing neural gas instead of an SOM?[objective-NEU], [EMP-NEG]",objective,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 848,"Minor: The work is first introduced as multi-layer but only the single-hidden-layer case is actually discussed.[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 849,"I would suggest either really adding multi-hidden-layer results (which is not really doable in a conference revision), or stating multi-layer work as an outlook.[results-NEU], [EMP-NEU, SUB-NEG]",results,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEG,,, 850,"Fig. 
5, bad readability of axes labels.[Fig-NEG], [CLA-NEG, PNF-NEG]",Fig,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 851,"is a hierarchical -> are hierarchical yields -> yield twice otherwise after Eqn. 7 are can be viewed they occurs can can readily expanded transfer transform [Eqn-NEG], [CLA-NEG]",Eqn,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 855,"This paper reads well and the results appear sound.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 856,"Unfortunately, the contribution seems rather small to be accepted for ICLR.[contribution-NEG], [APR-NEG]",contribution,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 857,"This is a straight application and combination of existing pieces with not much originality and without being backed up by very strong experimental results.[originality-NEU, experimental results-NEU], [NOV-NEU, EMP-NEU]",originality,experimental results,,,,,NOV,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 858,"* Having only results on new datasets makes it hard to compare the objective quality of the DistMult baselines and hence of the improvements due to the multimodal info.[results-NEG], [CMP-NEG]",results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 859,"Isn't there any existing benchmark where this could have an impact?[benchmark-NEU], [IMP-NEU]",benchmark,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 860,"* The much better performance of ConvE is worrying there.[performance-NEU], [CMP-NEU, EMP-NEU]",performance,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 861,"It is suggested that the proposed approach could be incorporated in ConvE to lead to similar improvements than on DistMult. The paper would be much stronger with those.[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 862,"* Are we sure that the textual description do not explicitly contain the information of the triple to be predicted?[description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 863,"This would explain the massive gains in Yago.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 864,"* For Table 8, the similarities are not striking.[Table-NEG], [EMP-NEG]",Table,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 865,"What were the nearest neighboring posters in the original VGG space? They should not be that bad too.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 866,"* The work on multimodal embeddings like Multimodal Distributional Semantics by Bruni et al. or Multi-and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception. by Kiela et al. could be discussed/cited.[work-NEU], [CMP-NEU]",work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 869,"This idea is novel and interesting.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 870,"- The learning of such soft combination is done jointly while learning the tasks and is not set manually cf. 
setting permutations of a fixed number of layer per task.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 871,"- The empirical evaluation is done on intuitively related, superficially unrelated, and a real world task.[empirical evaluation-NEG], [EMP-NEG]",empirical evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 872,"The first three results are on small datasets/tasks, O(10) feature dimensions, and number of tasks and O(1000) images; (i) distinguish two MNIST digits, (ii) 10 UCI tasks with feature sizes 4--30 and number of classes 2--10, (iii) 50 different character recognition on Omniglot dataset.[results-NEG, datasets-NEG], [SUB-NEG]",results,datasets,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 873,"The last task is real world -- 40 attribute classification on the CelebA face dataset of 200K images.[task-NEU], [SUB-NEU]",task,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 874,"While the first three tasks are smaller proof of concept, the last task could have been more convincing if near state-of-the-art methods were used.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 875,"The authors use a Resnet-50 which is a smaller and lesser performing model, they do mention that benefits are expected to be complimentary to say larger model, but in general it becomes harder to improve strong models.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 876,"While this does not significantly dilute the message, it would have made it much more convincing if results were given with stronger networks.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 877,"- The results are otherwise convincing and clear improvements are shown with the proposed method.[results-POS, proposed method-POS], [EMP-POS]",results,proposed method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 878,"- The number of layers over which soft ordering was tested was fixed however. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 879,"It would be interesting to see what would the method learn if the number of layers was explicitly set to be large and an identity layer was put as one of the option.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 880,"In that case the soft ordering could actually learn the optimal depth as well, repeating identity layer beyond the option number of layers.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 881,"Overall, the paper presents a novel idea, which is well motivated and clearly presented.[idea-POS], [NOV-POS, PNF-POS]",idea,,,,,,NOV,PNF,,,,POS,,,,,,POS,POS,,, 882,"The empirical validation, while being limited in some aspects, is largely convincing.[empirical validation-POS], [EMP-POS, SUB-NEG]",empirical validation,,,,,,EMP,SUB,,,,POS,,,,,,POS,NEG,,, 886,"However, I don't find the paper of high significance or the proposed method solid for publication at ICLR.[paper-NEG, proposed method-NEG], [APR-NEG]",paper,proposed method,,,,,APR,,,,,NEG,NEG,,,,,NEG,,,, 887,"The paper is based on the cyclical learning rates proposed by Smith (2015, 2017). 
I don't understand what is offered beyond the original papers.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 888,"The super-convergence occurs under special settings of hyper-parameters for resnet only and therefore I am concerned if it is of general interest for deep learning models.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 889,"Also, the authors do not give a conclusive analysis under what condition it may happen.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 890,"The explanation of the cause of super-convergence from the perspective of transversing the loss function topology in section 3 is rather illustrative at the best without convincing support of arguments.[explanation-NEU, section-NEU, arguments-NEG], [EMP-NEU]",explanation,section,arguments,,,,EMP,,,,,NEU,NEU,NEG,,,,NEU,,,, 891,"I feel most content of this paper (section 3, 4, 5) is observational results, and there is lack of solid analysis or discussion behind these observations.[observational results-NEU, analysis-NEG, discussion-NEG], [EMP-NEG, SUB-NEG]",observational results,analysis,discussion,,,,EMP,SUB,,,,NEU,NEG,NEG,,,,NEG,NEG,,, 895,"In NAS, the practitioners have to retrain for every new architecture in the search process, but in ENAS this problem is avoided by sharing parameters and using discrete masks.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 896,"In both approaches, reinforcement learning is used to learn a policy that maximizes the expected reward of some validation set metric.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 897,"Since we can encode a neural network as a sequence, the policy can be parameterized as an RNN where every step of the sequence corresponds to an architectural choice.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 898,"In their experiments, ENAS achieves test set metrics that are almost as good as NAS, yet require significantly less computational resources and time.[experiments-POS], [CMP-POS]",experiments,,,,,,CMP,,,,,POS,,,,,,POS,,,, 900,"Initially it seems like the controller can choose any of B operations in a fixed number of layers along with choosing to turn on or off ay pair of skip connections.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 901,"However, in practice we see that the search space for modeling both skip connections and choosing convolutional sizes is too large, so the authors use only one restriction to reduce the size of the state space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 902,"This is a limitation, as the model space is not as flexible as one would desire in a discovery task.[limitation-NEU], [EMP-NEU]",limitation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 903,"Moreover, their best results (and those they choose to report in the abstract) are due to fixing 4 parallel branches at every layer combined with a 1 x 1 convolution, and using ENAS to learn the skip connections.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 904,"Thus, they are essentially learning the skip connections while using a human-selected model.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 905,"ENAS for RNNs is similar: while NAS searches for a new architecture, the authors use a recurrent highway network for each cell and use ENAS to find the skip connections.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 906,"Thus, it seems like the term Efficient Neural Architecture Search promises too much since in both tasks they are essentially only using the controller to find skip connections.[tasks-NEU], 
[EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 907,"Although finding an appropriate architecture for skip connections is an important task, finding an efficient method to structure RNN cells seems like a significantly more important goal.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 908,"Overall, the paper is well-written, and it brings up an important idea: that parameter sharing is important for discovery tasks so we can avoid re-training for every new architecture in the search process.[paper-POS, idea-POS], [CLA-POS, IMP-POS]",paper,idea,,,,,CLA,IMP,,,,POS,POS,,,,,POS,POS,,, 909,"Moreover, using binary masks to control network path (essentially corresponding to training different models) is a neat idea.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 910,"It is also impressive how much faster their model performs on tasks without sacrificing much performance.[model-NEU, performance-NEU], [EMP-POS]",model,performance,,,,,EMP,,,,,NEU,NEU,,,,,POS,,,, 911,"The main limitation is that the best architectures as currently described are less about discovery and more about human input;[limitation-NEU], [EMP-NEU]",limitation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 912,"-- finding a more efficient search path would be an important next step.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 916,"It proposes itself as an improvement over the main recent development of the field, namely Elastic Weight Consolidation.[improvement-NEU], [CMP-NEU]",improvement,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 918,"Follows a section of experiments on variants of MNIST commonly used for continual learning. [experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 919,"Continual learning in neural networks is a hot topic, and this article contributes a very interesting idea.[article-POS, idea-POS], [NOV-POS]",article,idea,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 920,"The notion of conceptors is appealing in this particular use for its interpretation in terms of regularizer and in terms of Boolean logic.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 921,"The numeric examples, although quite toy, provide a clear illustration.[examples-POS, illustration-POS], [PNF-POS]",examples,illustration,,,,,PNF,,,,,POS,POS,,,,,POS,,,, 922,"A few things are still missing to back the strong claims of this paper: * Some considerations of the computational costs: the reliance on the full NxN correlation matrix R makes me fear it might be costly, as it is applied to every layer of the neural networks and hence is the largest number of units in a layer.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 923,"This is of course much lighter than if it were the covariance matrix of all the weights, which would be daunting, but still deserves to be addressed, if only with wall time measures.[measures-NEG], [SUB-NEG]",measures,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 924,"* It could also be welcome to use a more grounded vocabulary, e.g. on p.2 ""Figure 1 shows examples of conceptors computer from three clouds of sample state points coming from a hypothetical 3-neuron recurrent network that was drive with input signals from three difference sources"" could be much more simply said as ""Figure 1 shows the ellipses corresponding to three sets of R^3 points"".[Figure-NEG], [SUB-NEU, EMP-NEU]",Figure,,,,,,SUB,EMP,,,,NEG,,,,,,NEU,NEU,,, 925,"Being less grandiose would make the value of this article nicely on its own. 
*[article-NEG], [PNF-NEU]",article,,,,,,PNF,,,,,NEG,,,,,,NEU,,,, 926,"Some examples beyond the contrived MNIST toy examples would be welcome.[examples-NEU], [SUB-NEU]",examples,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 927,"For example, the main method this article is compared to (EWC) had a very strong section on Reinforcement learning examples in the Atari framework, not only as an illustration but also as a motivation.[section-POS, method-POS], [EMP-POS]",section,method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 928,"I realise not everyone has the computational or engineering resources to try extensively on multiple benchmarks from classification to reinforcement learning.[benchmarks-NEU], [SUB-NEG]",benchmarks,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 929,"Nevertheless, without going to that extreme, it might be worth adding an extra demo on something bigger than MNIST.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 930,"The authors transparently explain in their answer that they do not (yet!) belong to the deep learning community and hope finding some collaborations to pursue this further.[answer-NEU], [IMP-NEG]",answer,,,,,,IMP,,,,,NEU,,,,,,NEG,,,, 931,"If I may make a suggestion, I think their work would get much stronger impact by doing it the reverse way: first finding the collaboration, then adding this extra empirical results, which then leads to a bigger impact publication.[empirical results-NEU, impact-NEU], [IMP-NEU]",empirical results,impact,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 932,"The later point would normally make me attribute a score of 6: Marginally above acceptance threshold by current DL community standards,[score-NEU, standards-NEU], [IMP-NEU]",score,standards,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 933,"but because there is such a pressing need for methods to tackle this problem, and because this article can generate thinking along new lines about this, I give it a 7 : Good paper, accept. 
[methods-POS, problem-POS, article-POS, paper-POS], [NOV-POS, IMP-POS, REC-POS]]",methods,problem,article,paper,,,NOV,IMP,REC,,,POS,POS,POS,POS,,,POS,POS,POS,, 935,"The proposed method achieved compression rate 98% in a sentiment analysis task, and compression rate over 94% in machine translation tasks, without a performance loss.[propose method-POS], [EMP-POS]",propose method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 936,"This paper is well-written and easy to follow.[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 937,"The motivation is clear and the idea is simple and effective.[motivation-POS, idea-POS], [IMP-POS]",motivation,idea,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 938,"n It would be better to provide deeper analysis in Subsection 6.1.[analysis-NEU, Subsection-NEU], [SUB-NEU]",analysis,Subsection,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 939,"The current analysis is too simple.[current analysis-NEG], [SUB-NEG]",current analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 940,"It may be interesting to explain the meanings of individual components.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 941,"Does each component is related to a certain topic?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 942,"Is it meaningful to perform ADD or SUBSTRACT on the leaned code?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 943,"It may also be interesting to provide suitable theoretical analysis, e.g., relationships with the SVD of the embedding matrix.[theoretical analysis-NEG], [SUB-NEG]",theoretical analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 947,"GENERAL IMPRESSION: One central problem of the paper is missing novelty.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 948,"The authors are well aware of this. They still manage to provide added value.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 949,"Despite its limited novelty, this is a very interesting and potentially impactful paper.[paper-POS], [NOV-NEU, IMP-POS]",paper,,,,,,NOV,IMP,,,,POS,,,,,,NEU,POS,,, 950,"I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.[related work-POS], [SUB-POS, CMP-POS]",related work,,,,,,SUB,CMP,,,,POS,,,,,,POS,POS,,, 951,"CRITICISM: The experimental evaluation is rather solid, but not perfect. [experimental evaluation-NEU], [EMP-NEU]",experimental evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 952,"It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 953,"However, it is not clear why the method is tested only on a single data set: MNIST.[method-NEG], [SUB-NEG]",method,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 954,"Since it is entirely general, I would rather expect a test on a dozen different data sets.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 955,"That would also tell us more about a possible sensitivity w.r.t. 
the hyperparameters alpha_0 and beta.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 956,"The extensions in section 5 don't seem to be very useful.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 957,"In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 958,"Analyzing the actual adaptive algorithm would be very interesting.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 959,"In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.[result-NEG], [EMP-NEG]",result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 960,"MINOR POINTS: page 4, bottom: use citep for Duchi et al. (2011).[page-NEU], [PNF-NEU]",page,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 961,"None of the figures is legible on a grayscale printout of the paper.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 962,"Please do not use color as the only cue to identify a curve.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 963,"In figure 2, top row, please display the learning rate on a log scale.[figure-NEU], [PNF-NEU]",figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 964,"page 8, line 7 in section 4.3: the the (unintended repetition)[page-NEU, line-NEU, section-NEU], [PNF-NEG]",page,line,section,,,,PNF,,,,,NEU,NEU,NEU,,,,NEG,,,, 965,"End of section 4: an increase from 0.001 to 0.001002 is hardly worth reporting - or am I missing something?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 968,"However, I have the following concerns on novelty.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 969,"1. Although the paper gives some justiification why auto-encoder can work for domain adaptation from perspective of probalistics model, it does not give new formulation or algorithm to handle domain adaptation.[paper-NEG, algorithm-NEG], [NOV-NEG, SUB-NEG]",paper,algorithm,,,,,NOV,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 970,"At this point, the novelty is weaken.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 971,"2. In the introduction, the authors mentioned ""limitations of mSDA is that it needs to explicitly form the covariance matrix of input features and then solves a linear system, which can be computationally expensive in high dimensional settings"".[introduction-NEG], [EMP-NEG]",introduction,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 972,"However, mSDA cannot handle high dimension setting by performing the reconstruction with a number of random non-overlapping sub-sets of input features.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 973,"It is not clear why mSDA cannot handle time-series data but DAuto can.[null], [EMP-NEG, CLA-NEG]",null,,,,,,EMP,CLA,,,,,,,,,,NEG,NEG,,, 974,"DAuto does not consider the sequence/ordering of data either.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 975,"3. If my understanding is not wrong, the proposed DAuto is just a simple combination of three losses (i.e. 
prediction loss, reconstruction loss, domain difference loss).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 976,"As far as I know, this kind of loss is commonly used in most existing methods.[existing methods-NEG], [CMP-NEG]]",existing methods,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 981,"These solutions achieve zero squared-loss.[solutions-NEU], [EMP-NEU]",solutions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 982,"I would consider result (1) as the main result of this paper, because (2) is a direct consequence of (1).[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 983,"Intuitively, (1) is an easy result.[result-POS], [EMP-POS]",result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 984,"Under the assumptions of Theorem 3.5, it is clear that any tiny random perturbation on the weights will make the output linearly independent.[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 985,"The result will be more interesting if the authors can show that the smallest eigenvalue of the output matrix is relatively large, or at least not exponentially small.[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 986,"Result (3) has severe limitations, because: (a) there can be infinitely many critical point not in S_k that are spurious local minima;[Result-NEG, limitations-NEG], [EMP-NEU]",Result,limitations,,,,,EMP,,,,,NEG,NEG,,,,,NEU,,,, 987,"(b) Even though these spurious local minima have zero Lebesgue measure, the union of their basins of attraction can have substantial Lebesgue measure;[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 988,"(c) inside S_k, Theorem 4.4 doesn't exclude the solutions with exponentially small gradients, but whose loss function values are bounded away above zero.[Theorem-NEU], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 989,"If an optimization algorithm falls onto these solutions, it will be hard to escape.[algorithm-NEU], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 990,"Overall, the paper presents several incremental improvement over existing theories.[improvement-POS], [IMP-POS]",improvement,,,,,,IMP,,,,,POS,,,,,,POS,,,, 991,"However, the novelty and the technical contribution are not sufficient for securing an acceptance. [novelty-NEG, technical contribution-NEG], [NOV-NEG, REC-NEG]",novelty,technical contribution,,,,,NOV,REC,,,,NEG,NEG,,,,,NEG,NEG,,, 1001,"The pictorial explanation for how the CNN can mimic BFS is interesting[explanation-POS], [EMP-POS]",explanation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1003,"For example, what is r?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1004,"And what is the relation of the black/white and orange squares?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1005,"I thought this could use a little more clarity. 
[clarity-NEU], [CLA-NEG]",clarity,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 1008,"They offer a rigorous analysis into the behavior of optimization in each of these cases, concluding that there is an essential singularity in the cost function around the exact solution, yet learning succumbs to poor optima due to poor initial predictions in training.[analysis-NEU], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1010,"The problem was very well-motivated, and the analysis was sharp and offered interesting insights into the problem of maze solving.[problem-POS, insights-POS], [EMP-POS]",problem,insights,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1011,"What I thought was especially interesting is how their analysis can be extended to other graph problems; while their analysis was specific to the problem of maze solving, they offer an approach -- e.g. that of finding bugs when dealing with graph objects -- that can extend to other problems.[analysis-POS, approach-NEU], [EMP-NEU]",analysis,approach,,,,,EMP,,,,,POS,NEU,,,,,NEU,,,, 1012,"I would be excited to see similar analysis of other toy problems involving graphs.[analysis-NEU], [SUB-NEU, EMP-NEU]",analysis,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 1013,"One complaint I had was inconsistent clarity: while a lot was well-motivated and straightforward to understand, I got lost in some of the details (as an example, the figure on page 4 did not initially make much sense to me).[clarity-NEG, details-NEG], [CLA-NEG, PNF-NEG]",clarity,details,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 1014,"Also, in the experiments, the authors mention multiple attempt with the same settings -- are these experiments differentiated only by their initialization?[experiments-NEG], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 1015,"Finally, there were various typos throughout (one example is eglect minimua on page 2 should be eglect minima).[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1016,"Pros: Rigorous analysis, well motivated problem, generalizable results to deep learning theory[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1024,"CLARITY: The paper is very well written and is easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1025,"However, some implementation details are missing, which makes it difficult to assess the quality of the experimental results.[results-NEG], [SUB-NEG, EMP-NEG]",results,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 1027,"However, I have several concerns about the algorithms proposed in this paper:[algorithms-NEG, paper-NEG], [EMP-NEG]",algorithms,paper,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1028,"- First of all, I do not see why using small random subsets of the original tensor would give a desirable factorization.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1029,"Indeed, a CP decomposition of a tensor can not be reconstructed from CP decompositions of its subtensors.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1031,"I would expect some further elaboration of this question in the paper.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1032,"Although similar methods appeared in the tensor literature before, I don't see any theoretical ground for their correctness. - Second, there is a significant difference between the symmetric CP tensor decomposition and the non-negative symmetric CP tensor decomposition.[methods-NEG], [EMP-NEG]",methods,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1033,"In particular, the latter problem is well posed and has good properties (see, e.g., Lim, Comon. 
Nonengative approximations of nonnegative tensors (2009)).[problem-POS], [CMP-POS, EMP-POS]",problem,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 1034,"However, this is not the case for the former (see, e.g., Comon et al., 2008 as cited in this paper).[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 1035,"Therefore, (a) computing the symmetric and not non-negative symmetric decomposition does not give any good theoretical guarantees (while achieving such guarantees seems to be one of the motivations of this paper)[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1036,"and (b) although the tensor is non-negative, its symmetric factorization is not guaranteed to be non-negative and further elaboration of this issue seem to be important to me.[issue-NEG], [SUB-NEG]",issue,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1038,"This is an important question that has not been addressed in the literature and is clearly a pro of the paper.[question-NEG, paper-NEU], [SUB-NEG]",question,paper,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 1039,"However, it seems to me that this goal is not fully implemented.[goal-NEG], [EMP-NEG]",goal,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1040,"Indeed, (a) I mentioned in the previous paragraph the issues with the symmetric CP decomposition and (b) although the paper is motivated by the recent algorithm proposed by Sharan&Valiant (2017), the algorithms proposed in this paper are not based on this or other known algorithms with theoretical guarantees.[algorithms-NEG, paper-NEG], [CMP-NEG]",algorithms,paper,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 1041,"This is therefore confusing and I would be interested in the author's point of view to this issue.[issue-NEU], [CLA-NEG]",issue,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 1042,"- Further, the proposed joint approach, where the second and third order information are combined requires further analysis.[approach-NEG, analysis-NEG], [SUB-NEG]",approach,analysis,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1043,"Indeed, in the current formulation the objective is completely dominated by the order-3 tensor factor, because it contributes O(d^3) terms to the objective vs O(d^2) terms contributed by the matrix part.[objective-NEG], [EMP-NEG]",objective,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1044,"It would be interesting to see further elaboration of the pros and cons of such problem formulation.[problem formulation-NEG], [SUB-NEG]",problem formulation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1045,"- Minor comment. 
In the shifted PMI section, the authors mention the parameter alpha and set specific values of this parameter based on experiments.[parameter-NEG], [SUB-NEG]",parameter,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1046,"However, I don't think that enough information is provided, because, given the author's approach, the value of this parameter most probably depends on other parameters, such as the bach size.[approach-NEG, parameters-NEG], [SUB-NEG]",approach,parameters,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1047,"- Finally, although the empirical evaluation is quite extensive and outperforms the state-of the art, I think it would be important to compare the proposed algorithm to other tensor factorization approaches mentioned above.[empirical evaluation-POS, proposed algorithm-NEG, approaches-NEG], [CMP-NEG, EMP-POS]",empirical evaluation,proposed algorithm,approaches,,,,CMP,EMP,,,,POS,NEG,NEG,,,,NEG,POS,,, 1048,"ORIGINALITY: The idea of using a pointwise mutual information tensor for word embeddings is not new, but the authors fairly cite all the relevant literature.[literature-POS], [SUB-POS]",literature,,,,,,SUB,,,,,POS,,,,,,POS,,,, 1049,"My understanding is that the main novelty is the proposed tensor factorization algorithm and extensive experimental evaluation.[algorithm-POS, experimental evaluation-POS], [NOV-POS]",algorithm,experimental evaluation,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 1050,"However, such batch approaches for tensor factorization are not new and I am quite skeptical about their correctness (see above).[approaches-NEU], [EMP-NEU]",approaches,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1051,"The experimental evaluation presents indeed interesting results.[evaluation-POS, results-POS], [EMP-POS]",evaluation,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1052,"However, I think it would also be important to compare to other tensor factorization approaches.[approaches-NEU], [CMP-NEU]",approaches,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1053,"I would also be quite interested to see the performance of the proposed algorithm for different values of parameters (such as the butch size).[proposed algorithm-NEU], [EMP-NEU]",proposed algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1054,"SIGNIFICANCE: I think the paper addresses very interesting problem and significant amount of work is done towards the evaluation, but there are some further important questions that should be answered before the paper can be published.[problem-POS, paper-NEU], [EMP-POS]",problem,paper,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 1055,"To summarize, the following are the pros of the paper: - clarity and good presentation;[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 1056,"- good overview of the related literature;[related literature-POS], [CLA-POS]",related literature,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1057,"- extensive experimental comparison and good experimental results.[experimental comparison-POS, experimental results-POS], [EMP-POS]",experimental comparison,experimental results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1058,"While the following are the cons: - the mentioned issues with the proposed algorithm, which in particular does not have any theoretical guarantees;[proposed algorithm-NEG], [EMP-NEG]",proposed algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1059,"- lack of details on how experimental results were obtained, in particular, lack of the details on the values of the free parameters in the proposed algorithm;[experimental results-NEG, proposed algorithm-NEG], [SUB-NEG]",experimental results,proposed algorithm,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1060,"- 
lack of comparison to other tensor approaches to the word embedding problem (i.e. other algorithms for the tensor decomposition subproblem);[approaches-NEG], [SUB-NEG, CMP-NEG]",approaches,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 1061,"- the novelty of the approach is somewhat limited,[approach-NEG], [NOV-NEG]",approach,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 1062,"although the idea of the extensive experimental comparison is good.[experimental comparison-POS], [EMP-POS]]",experimental comparison,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1066,"Main issues: 1. Aggregating neural network weights to identify feature interactions is very interesting.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1067,"However, completely ignoring activation functions makes the method quite crude.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1068,"2. High-order interacting features must share some common hidden unit somewhere in a hidden layer within a deep neural network.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1069,"Restricting to the first hidden layer in Algorithm 1 inevitably misses some important feature interactions.[Algorithm-NEG], [EMP-NEG]",Algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1072,"4. The experiments are only conducted on some synthetic datasets with very small feature dimensionality p. Large-scale experiments are needed.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1073,"5. There are some important references missing.[references-NEG], [CMP-NEG, SUB-NEG]",references,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 1074,"For example, RuleFit is a good baseline method for identifying feature interactions based on random forest and l1-logistic regression (Friedman and Popescu, 2005, Predictive learning via rule ensembles); Relaxing strict hierarchical hereditary constraints, high-order l1-logistic regression based on tree-structured feature expansion identifies pairwise and high-order multiplicative feature interactions (Min et al. 2014, Interpretable Sparse High-Order Boltzmann Machines); Without any hereditary constraint, feature interaction matrix factorization with l1 regularization identifies pairwise feature interactions on datasets with high-dimensional features (Purushotham et al. 2014, Factorized Sparse Learning Models with Interpretable High Order Feature Interactions). 6. At least, RuleFit (Random Forest regression for getting rules + l1-regularized regression) should be used as a baseline in the experiments.[baseline-NEU], [SUB-NEU]",baseline,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1075,"Minor issues: Ranking of feature interactions in Algorithm 1 should be explained in more details.[Algorithm-NEU], [SUB-NEU]",Algorithm,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1076,"On page 3: b^{(l)} in R^{p_l}, l should be from 1, .., L. You have b^y.[page-NEU], [PNF-NEG]",page,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 1077,"In summary, the idea of using neural networks for screening pairwise and high-order feature interactions is novel, significant, and interesting.[idea-POS], [NOV-POS, IMP-POS]",idea,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 1078,"However, I strongly encourage the authors to perform additional experiments with careful experiment design to address some common concerns in the reviews/comments for the acceptance of this paper.[experiments-NEU], [EMP-NEU, REC-NEU]",experiments,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEU,,, 1079,"The additional experimental results are convincing, so I updated my rating score. 
[experimental results-POS], [EMP-POS, REC-POS]",experimental results,,,,,,EMP,REC,,,,POS,,,,,,POS,POS,,, 1084,"This idea however is difficult to apply to deep learning with a large amount of data.[idea-NEG], [EMP-NEG]",idea,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1087,"Experiments show its usefulness [Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1088,"though experiments are limited [experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1093,"In the case of deep learning, the convexity is not guaranteed and the resulting solutions do not necessarily follow Lemma 1.[Lemma-NEU], [EMP-NEU]",Lemma,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1094,"Nonetheless, this type of analysis can be useful under appropriate solutions if non-trivial claims are derived; however, Lemma 1 simply explains basic properties of the min-max solutions and max-min solutions and does not contain non-trivial claims.[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1095,"As long as the analysis is experimental, the state of the art should be considered.[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1096,"As long as the reviewer knows, the CW attack gives the most powerful attack and this should be considered for comparison. The results with MNIST and CIFAR-10 are different.[results-NEU], [CMP-NEU]",results,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1097,"In some cases, MNIST is too easy to consider the complex structure of deep architectures.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1099,"The main takeaway from the entire paper is not very clear.[main takeaway-NEG], [IMP-NEG]",main takeaway,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 1101,"Minor: Definition of g in the beginning of Sec 3.1 seems to be a typo.[Sec-NEU, typo-NEG], [CLA-NEG]",Sec,typo,,,,,CLA,,,,,NEU,NEG,,,,,NEG,,,, 1102,"What is u? This is revealed in later sections but should be specified here.[sections-NEU], [PNF-NEU]",sections,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1103,"In Section 3.1, >This is in stark contrast with the near-perfect misclassification of the undefended classifier in Table 1.[Section-NEU, Table-NEU], [PNF-NEU]",Section,Table,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 1104,"The results shown in the table seem to indicate the ""perfect"" misclassification.[results-NEU], [CLA-NEG, PNF-NEG]",results,,,,,,CLA,PNF,,,,NEU,,,,,,NEG,NEG,,, 1105,"Sentence after eq. 15 seems to contain a grammatical error.[eq-NEU, grammatical error-NEG], [CLA-NEG]",eq,grammatical error,,,,,CLA,,,,,NEU,NEG,,,,,NEG,,,, 1106,"The paragraph after eq. 17 is duplicated with a paragraph introduced before.[paragraph-NEG], [PNF-NEG]",paragraph,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1110,"The idea is interesting and to my knowledge novel.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 1111,"Experiments are carefully designed and presented in detail, however assessing the impact of the proposed new objective is not straightforward.[Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1112,"It would have been interesting to compare not only with SFNN but also to a model with the same architecture and same gradient estimator (Raiko et al. 2014) using maximum likelihood.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 1114,"Why is it important to maximise I(X_l, Y) for every layer? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1115,"Does that impact the MI of the final layer and Y?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1118,"How is this achieved in practice?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1120,"Mutual information between the successive layers is decomposed as an entropy plus a conditional entropy term (eq 17).[eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1121,"How is the conditional entropy term estimated?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1122,"The entropy term is first bounded by conditioning on the previous layer and then estimated using Monte Carlo sampling with a plug-in estimator. Plug-in estimators are known to be inefficient in high dimensions even using a full dataset unless the number of samples is very large.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1123,"It thus seems challenging to use mini batch MC, how does the mini batch estimation compare to an estimation using the full dataset? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1124,"What is the variance of the mini batch estimate?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1125,"In the related work section, the IB problem can also be solved efficiently for meta-Gaussian distribution as explained in Rey et al. 2012 (Meta-gaussian information bottleneck).[related work section-NEU], [CMP-NEU]",related work section,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1126,"There is a small typo in (eq 5). [typo-NEG], [CLA-NEG]",typo,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1129,"The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed.[main finding-NEU, method-NEU], [EMP-POS]",main finding,method,,,,,EMP,,,,,NEU,NEU,,,,,POS,,,, 1130,"This contribution is not bad for an empirical paper.[contribution-NEU], [EMP-POS]",contribution,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 1131,"There's certainly not that much here that's groundbreaking methodologically, though it's certainly nice to know that a simple and scalable method works.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1132,"There's not much detail about the data (it is after all an industrial paper).[data-NEG], [SUB-NEG]",data,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1133,"It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration.[proposed method-NEU, benchmark datasets-NEU, improvement-NEU, model-NEU], [EMP-NEU]",proposed method,benchmark datasets,improvement,model,,,EMP,,,,,NEU,NEU,NEU,NEU,,,NEU,,,, 1134,"As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1135,"Other than that, it's nice to see an evaluation on real production data, and it's nice that the authors have provided enough info that the method should be (more or less) reproducible.[evaluation-POS, info-POS, method-POS], [EMP-POS]",evaluation,info,method,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 1136,"There's some slight concern that maybe this paper would be better for the industry track of some conference, given that it's focused on an empirical evaluation 
rather than really making much of a methodological contribution.[empirical evaluation-NEU, methodological contribution-NEU], [APR-NEU, IMP-NEU]",empirical evaluation,methodological contribution,,,,,APR,IMP,,,,NEU,NEU,,,,,NEU,NEU,,, 1137,"Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks.[benchmarks-NEU], [EMP-NEU]",benchmarks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1142,"This organization of the action space, together with a smart reward design achieves impressive compression results, given that this approach automates tedious architecture selection.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1143,"The reward design favors low compression/high accuracy over high compression/low performance while the reward still monotonically increases with both compression and accuracy.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1144,"As a bonus, the authors also demonstrate how to include hard constraints such as parameter count limitations into the reward model and show that policies trained on small teachers generalize to larger teacher models.[constraints-POS, model-NEU], [EMP-POS]",constraints,model,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 1145,"Review: The manuscript describes the proposed algorithm in great detail and the description is easy to follow.[manuscript-POS, description-POS], [PNF-POS]",manuscript,description,,,,,PNF,,,,,POS,POS,,,,,POS,,,, 1146,"The experimental analysis of the approach is very convincing and confirms the author's claims.[experimental analysis-POS], [EMP-POS]",experimental analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1147,"Using the teacher network as starting point for the architecture search is a good choice, as initialization strategies are a critical component in knowledge distillation.[strategies-NEU], [EMP-POS]",strategies,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 1148,"I am looking forward to seeing work on the research goals outlined in the Future Directions section.[research goals-NEU], [IMP-POS]",research goals,,,,,,IMP,,,,,NEU,,,,,,POS,,,, 1149,"A few questions/comments: 1) I understand that L_{1,2} in Algorithm 1 correspond to the number of layers in the network, but what do N_{1,2} correspond to?[Algorithm-NEU], [EMP-NEU]",Algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1150,"Are these multiple rollouts of the policies?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1151,"If so, shouldn't the parameter update theta_{{shrink,remove},i} be outside the loop over N and apply the average over rollouts according to Equation (2)?[Equation-NEU], [EMP-NEU]",Equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1153,"2) Minor: some of the citations are a bit awkward, e.g. on page 7: ""algorithm from Williams Williams (1992).[citations-NEG, page-NEU], [CMP-NEG]",citations,page,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 1154,"I would use the citet command from natbib for such citations and citep for parenthesized citations, e.g. ""... 
incorporate dark knowledge (Hinton et al., 2015)"" or ""The MNIST (LeCun et al., 1998) dataset...""[citations-NEU], [CMP-NEU, PNF-NEU]",citations,,,,,,CMP,PNF,,,,NEU,,,,,,NEU,NEU,,, 1155,"3) In Section 4.6 (the transfer learning experiment), it would be interesting to compare the performance measures for different numbers of policy update iterations.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1156,"4) Appendix: Section 8 states ""Below are the results"", but the figure landed on the next page.[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1157,"I would either try to force the figures to be output at that position (not in or after Section 9) or write Figures X-Y show the results.[figures-NEU], [PNF-NEG]",figures,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 1158,"Also in Section 11, Figure 13 should be referenced with the ref command [Section-NEG, Figure-NEG], [PNF-NEG]",Section,Figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1159,"5) Just to get a rough idea of training time: Could you share how long some of the experiments took with the setup you described (using 4 TitanX GPUs)?[setup-NEU], [EMP-NEU, SUB-NEU]",setup,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 1160,"6) Did you use data augmentation for both teacher and student models in the CIFAR10/100 and Caltech256 experiments?[models-NEU], [EMP-NEU]",models,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1161,"7) What is the threshold you used to decide if the size of the FC layer input yields a degenerate solution?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1162,"Overall, this manuscript is a submission of exceptional quality and if minor details of the experimental setup are added to the manuscript, I would consider giving it the full score.[manuscript-POS, quality-POS, experimental setup-NEU], [APR-POS, REC-NEU]",manuscript,quality,experimental setup,,,,APR,REC,,,,POS,POS,NEU,,,,POS,NEU,,, 1164,"This paper suggests a simple yet effective approach for learning with weak supervision.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1165,"This learning scenario involves two datasets, one with clean data (i.e., labeled by the true function) and one with noisy data, collected using a weak source of supervision.[datasets-NEU], [EMP-NEU]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1168,"The suggested method seems to work well on several document classification tasks.[suggested method-POS], [EMP-POS]",suggested method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1169,"Overall, I liked the paper.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1170,"I would like the authors to consider the following questions - - Over the last 10 years or so, many different frameworks for learning with weak supervision were suggested (e.g., indirect supervision, distant supervision, response-based, constraint-based, to name a few).[frameworks-NEU], [CMP-NEU]",frameworks,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1171,"First, I'd suggest acknowledging these works and discussing the differences to your work.[works-NEG], [CMP-NEG, SUB-NEG]",works,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 1172,"Second - Is your approach applicable to these frameworks?[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1173,"It would be an interesting to compare to one of those methods (e.g., distant supervision for relation extraction using a knowledge base), and see if by incorporating fidelity score, results improve.[methods-NEU, results-NEG], [CMP-NEU, EMP-NEG]",methods,results,,,,,CMP,EMP,,,,NEU,NEG,,,,,NEU,NEG,,, 1174,"- Can this approach be applied to semi-supervised 
learning?[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1175,"Is there a reason to assume the fidelity scores computed by the teacher would not improve the student in a self-training framework?[reason-NEG, scores-NEU], [CLA-NEG]",reason,scores,,,,,CLA,,,,,NEG,NEU,,,,,NEG,,,, 1176,"- The paper emphasizes that the teacher uses the student's initial representation, when trained over the clean data.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1177,"Is it clear that this step is needed?[step-NEG], [CLA-NEG]",step,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1178,"Can you add an additional variant of your framework where the fidelity scores are computed by the teacher when trained from scratch? Using a different architecture than the student?[variant-NEG, framework-NEG], [SUB-NEG]",variant,framework,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1179,"- I went over the authors' comments and I appreciate their efforts to help clarify the issues raised.[comments-POS], [CLA-POS]]",comments,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1181,"The authors survey different methods from the literature, propose a novel one, and evaluate them on a set of benchmarks.[literature-NEU, benchmarks-NEU], [CMP-NEU, NOV-NEU]",literature,benchmarks,,,,,CMP,NOV,,,,NEU,NEU,,,,,NEU,NEU,,, 1182,"A major drawback of the evaluation of the different approaches is that everything was used with its default parameters.[evaluation-NEG], [EMP-NEG]",evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1183,"It is very unlikely that these defaults are optimal across the different benchmarks.[benchmarks-NEU], [EMP-NEG]",benchmarks,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1184,"To get a better impression of what approaches perform well, their parameters should be tuned to the particular benchmark.[benchmark-NEU], [EMP-NEU]",benchmark,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1185,"This may significantly change the conclusions drawn from the experiments.[conclusions-NEU, experiments-NEU], [EMP-NEU]",conclusions,experiments,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1186,"Figures 4-7 are hard to interpret and do not convey a clear message.[Figures-NEG], [PNF-NEG]",Figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1187,"There is no clear trend in many of them and a lot of noise. [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 1188,"It may be better to relate the structure of the network to other measures of the hardness of a problem, e.g. the phase transition.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1189,"Again parameter tuning would potentially change all of these figures significantly, as would e.g. a change in hardware.[figures-NEU], [EMP-NEU]",figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1190,"Given the kind of general trend the authors seem to want to show here, I feel that a more theoretic measure of problem hardness would be more appropriate here.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1191,"The authors say of the proposed TwinStream dataset that it may not be representative of real use-cases. 
It seems odd to propose something that is entirely artificial.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1192,"The description of the empirical setup could be more detailed.[description-NEU, empirical setup-NEU], [SUB-NEU]",description,empirical setup,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 1193,"Are the properties that are being verified different properties, or the same property on different networks?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1194,"The tables look ugly.[tables-NEG], [PNF-NEG]",tables,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1195,"It seems that the header data set should be approach or something similar.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 1196,"In summary, I feel that while there are some issues with the paper, it presents interesting results and can be accepted.[paper-NEU, results-POS], [REC-POS]",paper,results,,,,,REC,,,,,NEU,POS,,,,,POS,,,, 1199,"The model is quite simple and intuitive and the authors demonstrate that it can generate meaningful relationships between pairs of entities that were not observed before.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1200,"While the paper is very well-written[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1202,"1) A stronger motivation for this model is required.[motivation-NEG, model-NEU], [EMP-NEG]",motivation,model,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1203,"Having a generative model for causal relationships between symptoms and diseases is intriguing yet I am really struggling with the motivation of getting such a model from word co-occurences in a medical corpus.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1204,"I can totally buy the use of the proposed model as means to generate additional training data for a discriminative model used for information extraction but the authors need to do a better job at explaining the downstream applications of their model.[proposed model-NEU], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1205,"2) The word embeddings used seem to be sufficient to capture the knowledge included in the corpus.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1206,"An ablation study of the impact of word embeddings on this model is required.[ablation study-NEU], [SUB-NEG]",ablation study,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 1207,"3) The authors do not describe how the data from xywy.com were annotated.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 1208,"Were they annotated by experts in the medical domain or random users?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1209,"4) The metric of quality is particularly ad-hoc.[quality-NEG], [EMP-NEG]",quality,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1211,"This paper adds an interesting twist on top of recent unpaired image translation work.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 1214,"I think this is a promising direction, but the current paper has unconvincing results, and it's not clear if the method is really solving an important problem yet.[results-POS, method-NEU], [EMP-NEU]",results,method,,,,,EMP,,,,,POS,NEU,,,,,NEU,,,, 1216,"The experiments focus almost entirely on the setting where there actually exist exact matches between the two image sets.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1217,"Even the partial matching experiments in Section 4.1.2 only quantify performance on the images that have exact matches.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1218,"This is a major limitation since the compelling use cases of the method are 
in scenarios where we do not have exact matches.[limitation-NEG], [EMP-NEG]",limitation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1219,"It feels rather contrived to focus so much on the datasets with exact matches since:[datasets-NEU], [EMP-NEG]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1222,"3) when exact matches exist, simpler methods may be sufficient, such as matching edges.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1223,"There is no comparison to any such simple baselines.[comparison-NEG], [CMP-NEG, SUB-NEG]",comparison,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 1226,"I'd like to see far more results, and some attempt at a metric.[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1227,"One option would be to run user studies where humans judge the quality of the matches.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1228,"The results shown in Figure 2 don't convince me, not just because they are qualitative and few, but also because I'm not sure I even agree that the proposed method is producing better results:[results-NEG, proposed method-NEU], [EMP-NEG]",results,proposed method,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1229,"for example, the DiscoGAN results have some artifacts but capture the texture better in row 3.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1230,"I was also not convinced by the supervised second step in Section 4.3. Given that the first step achieves 97% alignment accuracy, it's no surprise that running an off-the-shelf supervised method on top of this will match the performance of running on 100% correct data.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1231,"In other words, this section does not really add much new information beyond what we could already infer given that the first stage alignment was so successful.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1232,"What I think would be really interesting is if the method can improve performance on datasets that actually do not have ground truth exact matches.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1233,"For example, the shoes and handbags dataset or even better, domain adaptation datasets like sim to real.[dataset-NEU], [SUB-NEU]",dataset,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1234,"I'd like to see more discussion of why the second stage supervised problem is beneficial.[discussion-NEU], [EMP-NEU, SUB-NEU]",discussion,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 1235,"Would it not be sufficient to iterate alpha and T iterations enough times until alpha is one-hot and T is simply training against a supervised objective (Equation 7)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1236,"Minor comments: 1. In the intro, it would be useful to have a clear definition of ""analogy"" for the present context.[intro-NEU], [PNF-NEU]",intro,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1237,"2. Page 2: a link should be provided for the Putin example, as it is not actually in Zhu et al. 2017. 3.[null], [SUB-NEU, PNF-NEU]",null,,,,,,SUB,PNF,,,,,,,,,,NEU,NEU,,, 1238,"Page 3: ""Weakly Supervised Mapping"": I wouldn't call this weakly supervised. Rather, I'd say it's just another constraint / prior, similar to cycle-consistency, which was referred to under the ""Unsupervised"" section.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1239,"4. Page 4 and throughout: It's hard to follow which variables are being optimized over when.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1240,"For example, in Eqn. 
7, it would be clearer to write out the min over optimization variables.[Eqn-NEU], [EMP-NEG]",Eqn,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1242,"6. Page 7: The following sentence is confusing and should be clarified:[Page-NEU], [CLA-NEG]",Page,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 1243,"""This shows that the distribution matching is able to map source images that are semantically similar in the target domain.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 1244,""" 7. Page 7: ""This shows that a good initialization is important for this task.[Page-NEU], [CLA-NEG]",Page,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 1247,"8. In Figure 2, are the outputs the matched training images, or are they outputs of the translation function?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1248,"9. Throughout the paper, some citations are missing enclosing parentheses.[citations-NEG], [PNF-NEG]",citations,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1254,"The globally optimal solution is related to both the underlying data distribution P and q, and not the same as q.[globally optimal solution-NEU], [EMP-NEG]",globally optimal solution,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1256,"- Both Theorem 1 and Theorem 2 do not directly justify that RAML has similar reward as the Bayes decision rule,[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1257,"Can anything be said about this?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1258,"Are the KL divergence small enough to guarantee similar predictive rewards?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1259,"- In Theorem 2, when does the exponential tail bound assumption hold?[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1260,"- In Table 1, the differences between RAML and SQDML do not seem to support the claim that SQDML is better than RAML.[Table-NEG, claim-NEG], [EMP-NEG]",Table,claim,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1261,"Are the differences actually significant?[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 1262,"Are the differences between SQDML/RAML and ML significant?[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 1263,"In addition, how should tau be chosen in these experiments? 
[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1267,"Clarity: - The paper is well written and clarity is good.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1268,"Figure 2 & 3 helps the readers understand the core algorithm.[Figure-POS], [PNF-POS]",Figure,,,,,,PNF,,,,,POS,,,,,,POS,,,, 1269,"Pros: - De-duplication modules of inter and intra object edges are interesting.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1270,"- The proposed method improves the baseline by 5% on counting questions.[proposed method-POS, baseline-POS], [EMP-POS]",proposed method,baseline,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1271,"Cons: - The proposed model is pretty hand-crafted.[proposed model-NEG], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1272,"I would recommend the authors to use something more general, like graph convolutional neural networks (Kipf & Welling, 2017) or graph gated neural networks (Li et al., 2016).[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 1273,"- One major bottleneck of the model is that the proposals are not jointly finetuned.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1274,"So if the proposals are missing a single object, this cannot really be counted.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1275,"In short, if the proposals don't have 100% recall, then the model is then trained with a biased loss function which asks it to count all the objects even if some are already missing from the proposals.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1276,"The paper didn't study what is the recall of the proposals and how sensitive the threshold is.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1277,"- The paper doesn't study a simple baseline that just does NMS on the proposal domain.[paper-NEG, baseline-NEG], [EMP-NEG]",paper,baseline,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1278,"- The paper doesn't compare experiment numbers with (Chattopadhyay et al., 2017).[paper-NEG], [SUB-NEG, CMP-NEG]",paper,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 1279,"- The proposed algorithm doesn't handle symmetry breaking when two edges are equally confident (in 4.2.2 it basically scales down both edges).[proposed algorithm-NEG], [SUB-NEG]",proposed algorithm,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1280,"This is similar to a density map approach and the problem is that the model doesn't develop a notion of instance.[approach-NEG, problem-NEG], [CMP-NEG, EMP-NEG]",approach,problem,,,,,CMP,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1281,"- Compared to (Zhou et al., 2017), the proposed model does not improve much on the counting questions.[proposed model-NEG], [CMP-NEG]",proposed model,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1282,"- Since the authors have mentioned in the related work, it would also be more convincing if they show experimental results on CL[related work-NEG, experimental results-NEG], [SUB-NEG]",related work,experimental results,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1283,"Conclusion: - I feel that the motivation is good,[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1284,"but the proposed model is too hand-crafted.[proposed model-NEG], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1285,"Also, key experiments are missing: 1) NMS baseline[experiments-NEG, baseline-NEG], [SUB-NEG]",experiments,baseline,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1286,"2) Comparison with VQA counting work (Chattopadhyay et al., 2017).[Comparison-NEG], [SUB-NEG, CMP-NEG]",Comparison,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 1287,"Therefore I recommend 
reject.[reject-NEG], [REC-NEG]",reject,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 1288,"References: - Kipf, T.N., Welling, M., Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 1289,"- Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R. Gated Graph Sequence Neural Networks. ICLR 2016.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 1291,"The paper is revised and I saw NMS baseline is added.[paper-POS, baseline-POS], [SUB-POS]",paper,baseline,,,,,SUB,,,,,POS,POS,,,,,POS,,,, 1292,"I understood the reason not to compare with certain related work.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 1293,"The rebuttal is convincing and I decided to increase my rating, because adding the proposed counting module achieve 5% increase in counting accuracy.[rebuttal-POS, rating-POS, accuracy-POS], [REC-POS]",rebuttal,rating,accuracy,,,,REC,,,,,POS,POS,POS,,,,POS,,,, 1294,"However, I am a little worried that the proposed model may be hard to reproduce due to its complexity and therefore choose to give a 6.[proposed model-NEG], [IMP-NEG, REC-NEG]]",proposed model,,,,,,IMP,REC,,,,NEG,,,,,,NEG,NEG,,, 1295,"The authors have addressed my concerns, and clarified a misunderstanding of the baseline that I had, which I appreciate[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1296,". I do think that it is a solid contribution with thorough experiments.[contribution-POS, experiments-POS], [IMP-POS]",contribution,experiments,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 1297,"I still keep my original rating of the paper because the method presented is heavily based on previous works, which limits the novelty of the paper.[rating-NEU, method-NEU, novelty-NEG], [NOV-NEG, REC-NEU]",rating,method,novelty,,,,NOV,REC,,,,NEU,NEU,NEG,,,,NEG,NEU,,, 1300,"It shows empirically that even though the clipping activation function obtains a larger training error for full-precision model, it maintains the same error when applying quantization, whereas training with quantized ReLu activation function does not work in practice because it is unbounded.[error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1301,"The experiments are thorough, and report results on many datasets, showing that PACT can reduce down to 4 bits of quantization of weights and activation with a slight loss in accuracy compared to the full-precision model.[experiments-POS, datasets-POS, accuracy-NEU], [EMP-POS, SUB-POS]",experiments,datasets,accuracy,,,,EMP,SUB,,,,POS,POS,NEU,,,,POS,POS,,, 1302,"Related to that, it seams a bit an over claim to state that the accuracy decrease of quantizing the DNN with PACT in comparison with previous quantization methods is much less because the decrease is smaller or equal than 1%, when competing methods accuracy decrease compared to the full-precision model is more than 1%.[accuracy-NEG], [CMP-NEG]",accuracy,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1303,"Also, it is unfair to compare to the full-precision model using clipping, because ReLu activation function in full-precision is the standard and gives much better results, and this should be the reference accuracy[standard-NEU], [CMP-NEG]",standard,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 1304,". 
Also, previous methods take as reference the model with ReLu activation function, so it could be that in absolute value the accuracy performance of competing methods is actually higher than when using PACT for quantizing DNN.[previous methods-NEU], [EMP-NEU, CMP-NEG]",previous methods,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEG,,, 1305,"OTHER COMMENTS: - the list of contributions is a bit strange[contributions-NEG], [IMP-NEG]",contributions,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 1306,". It seams that the true contribution is number 1 on the list, which is to introduce the parameter alpha in the activation function that is learned with back-propagation, which reduces the quantization error with respect to using ReLu as activation function.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 1307,"To provide an analysis of why it works and quantitative results, is part of the same contribution I would say.[analysis-NEU, quantitative results-NEU, contribution-NEU], [IMP-NEU, EMP-NEU]",analysis,quantitative results,contribution,,,,IMP,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 1314,"There is a slight modification that enlarges the class of functions for which the theory is applicable (Lemma 3.3).[Lemma-NEU], [EMP-NEU]",Lemma,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1316,"This is a rather simple idea that is shown to be effective in Figure 3.[idea-NEU, Figure-NEU], [EMP-NEU]",idea,Figure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1318,"This is an important problem and the paper attempts to tackle it in a computationally efficient way.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1320,"It would be nice to be able to show that one can find corresponding attacks that are not too far away from the proposed score.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1321,"Finally, a minor point: Definition 3.1 has a confusing notation, f is a K-valued vector throughout the paper but it also denotes the number that represents the prediction in Definition 3.1. I believe this is just a typo.[notation-NEG, Definition-NEG, typo-NEG], [CLA-NEG, PNF-NEG]",notation,Definition,typo,,,,CLA,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 1329,"- Compared to many existing techniques, on 9 tasks[existing techniques-NEU], [EMP-NEU]",existing techniques,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1330,"cons: - no mention of time costs, except that for more samples, S > 1, for taylor approximation, it can be very expensive.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 1331,"- one would expect more information to strictly improve performance, but the results are a bit mixed (perhaps due to convergence to local optima and both actor and critic being learned at same time, or the Gaussian assumptions, etc.).[information-NEG, performance-NEU, results-NEG], [SUB-NEG, EMP-NEG]",information,performance,results,,,,SUB,EMP,,,,NEG,NEU,NEG,,,,NEG,NEG,,, 1332,"- relevance: the work presents a new approach to actor-critique strategy for reinforcement learning, remotely related to 'representation learning' (unless value and policies are deemed a form of representation).[work-NEU, approach-POS], [NOV-POS, EMP-POS]",work,approach,,,,,NOV,EMP,,,,NEU,POS,,,,,POS,POS,,, 1333,"Other comments/questions: - Why does the performance start high on Ant (1000), then goes to 0 (all approaches)?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1334,"- How were the tasks selected?[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1335,"Are they all the continuous control tasks available in open ai? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1338,"I think the paper is clearly written, and has some interesting insights.[paper-POS, insights-POS], [CLA-POS]",paper,insights,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 1340,"The paper is written well and clear.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1341,"The core contribution of the paper is the illustration that: under the assumption of flat, or curved decision boundaries with positive curvature small universal adversarial perturbations exist. [contribution-POS], [IMP-POS]",contribution,,,,,,IMP,,,,,POS,,,,,,POS,,,, 1342,"Pros: the intuition and geometry is rather clearly presented.[intuition-POS], [PNF-POS]",intuition,,,,,,PNF,,,,,POS,,,,,,POS,,,, 1344,"In the experimental section used to validate the main hypothesis that the deep networks have positive curvature decision boundaries, there is no description of how these networks were trained.[experimental section-NEG], [EMP-NEG]",experimental section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1345,"It is not clear why the authors have decided to use out-dated 5-layer LeNet and NiN (Network in network) architectures instead of more recent and much better performing architectures (and less complex than NiN architectures).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1346,"It would be nice to see how the behavior and boundaries look in these cases. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1347,"The conclusion is speculative: Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positive directions, which can possibly be done through regularization of the objective function.[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1348,"This will be the subject of future works.[future works-POS], [IMP-POS]",future works,,,,,,IMP,,,,,POS,,,,,,POS,,,, 1349,"It is clear that regularization should play a significant role in shaping the decision boundaries.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1350,"Unfortunately, the paper does not provide details at the basic level, which algorithms, architectures, hyper-parameters or regularization terms are used.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1351,"All these factors should play a very significant role in the experimental validation of their hypothesis.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1356,"The authors demonstrate novel approaches for generating real-valued sequences using adversarial training, a train on synthetic, test of real and vice versa method for evaluating GANS, generating synthetic medical time series data, and an empirical privacy analysis.[approaches-POS], [NOV-POS]",approaches,,,,,,NOV,,,,,POS,,,,,,POS,,,, 1357,"Major - the medical use case is not motivating.[use case-NEG], [EMP-NEG]",use case,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1358,"de-identifying the 4 telemetry measures is extremely easy and there is little evidence to show that it is even possible to reidentify individuals using these 4 measures.[measures-NEG], [EMP-NEG]",measures,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1361,"Please add information about how this critical value was generated.[information-NEG], [SUB-NEG]",information,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1362,"Also it would be very useful to say that a physician was consulted and that the critical values were clinically useful.[values-NEU], [EMP-NEU]",values,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1363,"- the changes in performance of TSTR are large enough that I would have difficulty trusting 
any experiments using the synthetic data.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1365,"- In addition it is unclear whether this synthetic process would actually generate results that are clinically useful.[process-NEG, results-NEG], [CLA-NEG, EMP-NEG]",process,results,,,,,CLA,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1367,"An externally valid measure would strengthen the results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1372,"The MNIST example is compelling.[example-POS], [EMP-POS]",example,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1373,"However the ICU example has some pitfalls which need to be addressed.[example-NEG], [EMP-NEG]]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1377,"The evaluation is performed on a synthetic dataset and shows improvements over seq2seq baseline approach.[evaluation-POS, baseline approach-POS], [EMP-POS]",evaluation,baseline approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1378,"Overall, this paper tackles an important problem of learning programs from natural language and input-output example specifications.[paper-POS, problem-POS], [EMP-POS]",paper,problem,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1379,"Unlike previous neural program synthesis approaches that consider only one of the specification mechanisms (examples or natural language), this paper considers both of them simultaneously.[approaches-POS, paper-POS], [CMP-POS]",approaches,paper,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 1380,"However, there are several issues both in the approach and the current preliminary evaluation, which unfortunately leads me to a reject score,[issues-NEG, approach-NEG], [EMP-NEG, REC-NEG]",issues,approach,,,,,EMP,REC,,,,NEG,NEG,,,,,NEG,NEG,,, 1381,"but the general idea of combining different specifications is quite promising.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1382,"First, the paper does not compare against a very similar approach of Parisotto et al. Neuro-symbolic Program Synthesis (ICLR 2017) that uses a similar R3NN network for generating the program tree incrementally by decoding one node at a time.[paper-NEG, approach-NEG], [CMP-NEG]",paper,approach,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 1383,"Can the authors comment on the similarity/differences between the approaches?[approaches-NEU], [CMP-NEU]",approaches,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1384,"Would it be possible to empirically evaluate how the R3NN performs on this dataset?[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1385,"Second, it seems that the current model does not use the input-output examples at all for training the model.[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1386,"The examples are only used during the search algorithm. 
Several previous neural program synthesis approaches (DeepCoder (ICLR 2017), RobustFill (ICML 2017)) have shown that encoding the examples can help guide the decoder to perform efficient search.[approaches-NEU], [EMP-NEU]",approaches,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1387,"It would be good to possibly add another encoder network to see if encoding the examples as well help improve the accuracy.[examples-NEU], [EMP-NEU]",examples,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1388,"Similar to the previous point, it would also be good to evaluate the usefulness of encoding the problem statement by comparing the final model against a model in which the encoder only encodes the input-output examples.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1389,"Finally, there is also an issue with the synthetic evaluation dataset.[dataset-NEG], [EMP-NEG]",dataset,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1390,"Since the problem descriptions are generated syntactically using a template based approach, the improvements in accuracy might come directly from learning the training templates instead of learning the desired semantics.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1391,"The paper mentions that it is prohibitively expensive to obtain human-annotated set, but can it be possible to at least obtain a handful of real tasks to evaluate the learnt model?[set-NEG, model-NEG], [EMP-NEG]",set,model,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1392,"There are also some recent datasets such as WikiSQL (https://github.com/salesforce/WikiSQL) that the authors might consider in future.[datasets-NEU], [EMP-NEU]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1393,"Questions for the authors: Why was MAX_VISITED only limited to 100?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1394,"What happens when it is set to 10^4 or 10^6?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1395,"The Search algorithm only shows an accuracy of 0.6% with MAX_VISITED 100.[accuracy-NEG], [EMP-NEG]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1396,"What would the performance be for a simple brute-force algorithm with a timeout of say 10 mins? Table 3 reports an accuracy of 85.8% whereas the text mentions that the best result is 90.1% (page 8)? 
What all function names are allowed in the DSL (Figure 1)?[Table-NEG, accuracy-NEG], [PNF-NEG]",Table,accuracy,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1397,"Can you clarify the contributions of the paper in comparison to the R3NN?[contributions-NEU], [CMP-NEU, CLA-NEG]",contributions,,,,,,CMP,CLA,,,,NEU,,,,,,NEU,NEG,,, 1398,"Minor typos: page 2: allows to add constrains --> allows to add constraints[page-NEG], [PNF-NEG]",page,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1399,"page 5: over MAX_VISITED programs has been --> over MAX_VISITED programs have been[page-NEG], [PNF-NEG]]",page,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1403,"This, in my opinion, is a biased and limited starting point, which ignores much of the literature in learning theory.[literature-NEG], [CMP-NEG]",literature,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1405,"I find this of limited usefulness.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 1406,"First of all, I find the execution poor in the details: (i) Why is omega limited to a scalar?[details-NEG], [SUB-NEG, EMP-NEU]",details,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEU,,, 1407,"Nothing major really depends on that.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1408,"Later the presentation switches to a more general case.[presentation-NEU], [PNF-NEU]",presentation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1409,"(ii) What is a one-hot label?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1411,"(iii) In which way is a Gaussian prior uncorrelated, if there is just a scalar random variable? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1412,"(iv) How can one maximize a probability density function?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1413,"(v) Why is an incorrect pseudo-set notation used instead of the correct vectorial one?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1414,"(vi) Exponentially large, reasonably prior model etc. is very vague terminology[terminology-NEG], [PNF-NEG]",terminology,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1415,"(vii) No real credit is given for the Laplace approximation presented up to Eq. 10.[Eq-NEG], [EMP-NEG]",Eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1417,"Why spend so much time on a step-by-step derivation anyway, as this is all classic and has been carried out many times before (in a cleaner write-up)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1418,"(viii) P denotes the number of model parameters (I guess it should be a small p? 
hard to decipher)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1419,"(ix) Usually, one should think of the Laplace approximation and the resulting Bayes factors more in terms of a volume of parameters close to the MAP estimate, which is what the matrix determinant expresses, more than any specific direction of curvature.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1421,"I feel the discussion to be too much obsessed by the claims made in Zhang et al 2016 and in no way suprising.[discussion-NEG], [CMP-NEG]",discussion,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1422,"In fact, the toy example is so much of a toy that I am not sure what to make of it.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1423,"Statistics has for decades successfully used criteria for model selection, so what is this example supposed to proof (to whom?).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1426,"There is some experimental evidence presented on how to resolve the tradeoff between too much noise (underfitting) and too little (overfitting).[experimental evidence-NEU], [EMP-NEU]",experimental evidence,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1428,"I see several issues: (i) It seems that you are not doing much with a SDE, as you diredctly jump to the discretized version (and ignore discussions of it's discretization).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1429,"So maybe one should not feature the term SDE so prominently.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1430,"(ii) While it is commonly done, it would be nice to get some insights on why a Gaussian approx. is a good assumption.[insights-NEU], [SUB-NEU]",insights,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1431,"Maybe you can verify this experimentally (as much of the paper consists of experimental findings) (iii) Eq. 13. Maybe you want this form to indicate a direction you want to move towards, by I find adding and subtracting the gradient in itself not a very interesting manner of illustartion.[Eq-NEG], [EMP-NEG]",Eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1432,"(iv) I am not sure in whoch way g is measured, but I guess you are determining it by comparing coefficients. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1434,"It seems you are scaling to mini-batrch gradient to be in expectation equal to the full gradient (not normalized by N), e.g. it scales ~N.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1435,"Now, if we think of a mini-batch as being a batched version of single pattern updates, then clearly the effective step length should scale with the batch size, which - because of the batch size normalization with N/B - means epsilon needs to scale with B.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1437,"(vi) The argument why B ~ N is not clear to me.[argument-NEU], [EMP-NEG]",argument,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1438,"Is there one or are just making a conjecture?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1439,"Bottom line: The paper may contribute to the current discussion of the Zhang et al 2016 paper, but I feel it does not make a significant contribution to the state of knowledge in machine learning.[paper-NEU, contribution-NEG], [IMP-NEU]",paper,contribution,,,,,IMP,,,,,NEU,NEG,,,,,NEU,,,, 1444,"Comments: The idea of showing low rank structure which makes it possible to use second-order information without approximations is interesting. [idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1446,"I have some comments and questions as follows. 
Have you tried to apply this to another architecture of neural networks?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1447,"Do you think whether your approach is able to apply to convolutional neural networks, which are widely used?[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1448,"There is no gain on using CR with Adam as you mention in Discussion part of the paper.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1449,"Do you think that CR with SGD (or with Adagrad and Adadelta) can be better than Adam?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 1450,"If not, why do people should consider this approach, which is more complicated, since Adam is widely used?[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1451,"The author(s) should do more experiments to various dataset to be more convincing.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1452,"I do like the idea of the paper, but at the current state, it is hard to evaluate the effective of this paper.[idea-NEU], [IMP-NEU]",idea,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 1453,"I hope the author(s) could provide more experiments on different datasets.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1454,"I would suggest to also try SVHN or CIFAR100.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1455,"And if possible, please also consider CNN even if you are not able to provide any theory. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1460,"This work introduces a simple, computationally straightforward approach to exploring by perturbing the parameters (similar to exploration in some evolutionary algorithms) of policies parametrized with deep neural nets.[work-POS], [EMP-POS]",work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1462,"By using layer norm and adaptive noise, they are able to generate robust parameter noise (it is often difficult to estimate the appropriate variance of parameter noise, as its less clear how this relates to the magnitude of variance in the action space).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1463,"This work is well-written and cites previous work appropriately.[work-POS, previous work-POS], [CLA-POS, CMP-POS]",work,previous work,,,,,CLA,CMP,,,,POS,POS,,,,,POS,POS,,, 1465,"The authors provide a significant set of experiments using their method on several different RL algorithms in both continuous and discrete cases, and find it generally improves performance, particularly for sparse rewards.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1468,"It would be helpful if the authors are able to make their paper reproducible by releasing the code on publication.[code-NEU], [IMP-NEU]",code,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 1470,"seems to show DDPG performing much better than the DDPG baseline in this work on half-cheetah.[figure-POS], [EMP-POS, CMP-POS]",figure,,,,,,EMP,CMP,,,,POS,,,,,,POS,POS,,, 1471,"Minor points: - The definition of a stochastic policy (section 2) is unusual (it is defined as an unnormalized distribution).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1472,"Usually it would be defined as $mathcal{S} rightarrow mathcal{P}(mathcal{A})$[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1473,"- This work extends DQN to learn an explicitly parametrized policy (instead of the greedy policy) in order to useful perturb the parameters of this policy.[work-NEU], [IMP-NEU]",work,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 1475,"to construct a target.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1487,"While 
the paper is comprehensive in their derivations (very similar to original boosting papers and in many cases one to one translation of derivations), it lacks addressing a few fundamental questions: - AdaBoost optimises exponential loss function via functional gradient descent in the space of weak learners.[questions-NEU], [SUB-NEG]",questions,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 1488,"It's not clear what kind of loss function is really being optimised here.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1489,"It feels like it should be the same, but the tweaks applied to fix weights across all samples for a class doesn't make it not clear what is that really gets optimised at the end.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1490,"- While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this methods should be better than normal boosting: if a class is more difficult, it's expected that their samples will have higher weights and hence the next base model will focus more on them.[motivation-NEU, base model-NEU, methods-NEU], [EMP-NEG]",motivation,base model,methods,,,,EMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 1491,"And crudely speaking, you can think of a class weight to be the expectation of its sample weights and you will end up in a similar setup.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1492,"- Choice of using large CNNs as base models for boosting isn't appealing in practical terms, such models will give you the ability to have only a few iterations and hence you can't achieve any convergence that often is the target of boosting models with many base learners.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1493,"- Experimentally, paper would benefit with better comparisons and studies: 1) state-of-the-art methods haven't been compared against (e.g. 
ImageNet experiment compares to 2 years old method)[paper-NEU], [SUB-NEG, CMP-NEG]",paper,,,,,,SUB,CMP,,,,NEU,,,,,,NEG,NEG,,, 1494,"2) comparisons to using normal AdaBoost on more complex methods haven't been studied (other than the MNIST)[comparisons-NEG], [SUB-NEG, CMP-NEG]",comparisons,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 1495,"3) comparison to simply ensembling with random initialisations.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1496,"Other comments: - Paper would benefit from writing improvements to make it read better.[improvements-NEU], [CLA-NEU, SUB-NEU]",improvements,,,,,,CLA,SUB,,,,NEU,,,,,,NEU,NEU,,, 1497,"- simply use the weighted error function: I don't think this is correct, AdaBoost loss function is an exponential loss.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1511,"The experiments support the conjecture mentioned above and show that the proposed technique *significantly* improves the detection accuracy compared to 2 other methods across all attacks and datasets (see Table 1).[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1512,"Interestingly, the authors also test whether adversarial attacks can bypass LID-based detection methods by incorporating LID in their design.[authors-POS], [EMP-POS]",authors,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1513,"Preliminary results show that even in this case the proposed method manages to detect adversarial examples most of the time.[Preliminary results-POS], [EMP-POS]",Preliminary results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1514,"In other words, the proposed technique is rather stable and can not be easily exploited.[proposed technique-POS], [EMP-POS]",proposed technique,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1516,"All the statements are very clear, the structure is transparent and easy to follow.[statements-POS, structure-POS], [CLA-POS, PNF-POS]",statements,structure,,,,,CLA,PNF,,,,POS,POS,,,,,POS,POS,,, 1517,"The writing is excellent.[null], [CLA-POS]",null,,,,,,CLA,,,,,,,,,,,POS,,,, 1518,"I found only one typo (page 8, We also NOTE that...),[typo-NEG], [CLA-NEG]",typo,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1519,"otherwise I don't actually have any comments on the text.[text-NEU], [CLA-NEU]",text,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 1521,"However, it seems that it is indeed novel and given rather convincing empirical justifications, I would recommend to accept the paper. [novel-POS, empirical justifications-POS, paper-POS], [NOV-POS, REC-POS]",novel,empirical justifications,paper,,,,NOV,REC,,,,POS,POS,POS,,,,POS,POS,,, 1523,"The paper is well-written but is lacking detailed information in some areas (see list of questions).[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1524,"The approach of incorporating all the different facts around an entity is worthwhile but pretty straight-forward.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1525,"The evaluation part of this paper is hard to assess due to the unavailability of the 2 datasets and appropriate baselines.[evaluation-NEU, datasets-NEG, baselines-NEG], [SUB-NEG]",evaluation,datasets,baselines,,,,SUB,,,,,NEU,NEG,NEG,,,,NEG,,,, 1526,"Therefore, I am currently leaning towards rejecting this paper.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 1529,"Or is it joint learning and you learn all LSTMs and CNNs yourself? 
(Besides the reuse of VGG, I could not find this information explicitly stated within the paper.).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1531,"How do you deal with words (or even the whole string) for which you have no word embedding?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1532,"? p.6: Do you have one model for all the relations or does every relation has its own LSTM, CNN, feed-forward network?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1533,"I.e. 1 or 3 feed-forward networks for age, zip code, and release dates?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1534,"? p.6: How does ""Ratings Only"" work as DistMult gets no information of the specific entities?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1536,"? p.7: What does ""find the mid-point of the bin"" mean and should it not be 1018 instead of 1000 bins?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1537,"+ Insights on how different modalities affect the prediction results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1538,"+ The approach is capable of theoretically handling all linked information to an entity as additional information to the link structure[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1539,". - As the evaluation data is not available, it is really hard to assess the quality of the models.[evaluation data-NEG, models-NEU], [EMP-NEU]",evaluation data,models,,,,,EMP,,,,,NEG,NEU,,,,,NEU,,,, 1541,"+ simple concatenation of an image vector is provided.[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1542,"- Training of CNNs, LSTMs and so on is not clear.(See question regarding whether the models are pre-trained or whether the models are also directly learned from the data.).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1543,"Further comments: * In Figure 1, the feed-forward network looks like an encoder-decoder network and it does not show the projection from r to R^d which is mentioned in the text.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1550,"- Pros of this work The paper provides a specific method to efficiently compute the covariance matrix of the equivalent GP and shows experimentally on CIFAR and MNIST the benefits of using the this GP as opposed to a finite-width non-Bayesian NN.[paper-POS, method-POS], [EMP-POS]",paper,method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1551,"The provided phase analysis and its relation to the depth of the network is also very interesting.[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1552,"Both are useful contributions as long as deep wide Bayesian NNs are concerned.[contributions-POS], [EMP-POS]",contributions,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1553,"A different question is whether that regime is actually useful.[regime-NEU], [EMP-NEU]",regime,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1554,"- Cons of this work Although this work introduces a new GP covariance function inspired by deep wide NNs, I am unconvinced of the usefulness of this regime for the cases in which deep learning is useful.[work-NEU, regime-NEG], [EMP-NEG]",work,regime,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 1555,"For instance, looking at the experiments, we can see that on MNIST-50k (the one with most data, and therefore, the one that best informs about the true underlying NN structure) the inferred depth is 1 for the GP and 2 for the NN, i.e., not deep.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1556,"Similarly for CIFAR, where only up to depth 3 is used.[null], 
[EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1557,"None of these results beat state-of-the-art deep NNs.[results-NEG], [CMP-NEG]",results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1558,"Also, the results about the phase structure show how increased depth makes the parameter regime in which these networks work more and more constrained.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1561,"My impression is that the present line of work will not be relevant for deep learning and will not beat state-of-the-art results because of the lack of a structured prior.[work-NEG, results-NEG], [IMP-NEG, CMP-NEG]",work,results,,,,,IMP,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1562,"In that sense, to me this work is more of a negative result informing that to be successful, deep Bayesian NNs should not be wide and should have more structure to avoid reaching the GP regime.[result-NEG], [EMP-NEG]",result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1563,"- Other comments: In Fig. 5, use a consistent naming for the axes (bias and variances).[Fig-NEG], [PNF-NEG]",Fig,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1565,"Does the unit norm normalization used to construct the covariance disallow ARD input selection?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1570,"The analysis answers: 1) When empirical gradients are close to true gradients[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1571,"n2) When empirical isolated saddle points are close to true isolated saddle points[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1572,"3) When the empirical risk is close to the true risk.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1573,"The answers are all of the form that if the number of training examples exceeds a quantity that grows with the number of layers, width and the exponential of the norm of the weights with respect to depth, then empirical quantities will be close to true quantities.[answers-NEU], [EMP-NEU]",answers,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1574,"I have not verified the proofs in this paper (given short notice to review) but the scaling laws in the upper bounds found seem reasonably correct.[proofs-POS], [EMP-POS]",proofs,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1575,"Another reviewer's worry about why depth plays a role in the convergence of empirical to true values in deep linear networks is a reasonable worry, but I suspect that depth will necessarily play a role even in deep linear nets because the backpropagation of gradients in linear nets can still lead to exponential propagation of errors between empirical and true quantities due to finite training data.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1576,"Moreover the loss surface of deep linear networks depends on depth even though the expressive capacity does not.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1577,"An analysis of dynamics on this loss surface was presented in Saxe et. al. 
ICLR 2014 which could be cited to address that reviewer's concern.[analysis-NEU], [CMP-NEU]",analysis,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1579,"Overall, I believe this paper is a nice contribution to the deep learning theory literature.[contribution-POS, literature-NEU], [IMP-POS]",contribution,literature,,,,,IMP,,,,,POS,NEU,,,,,POS,,,, 1580,"However, it would even better to help the reader with more intuitive statements about the implications of their results for practice, and the gap between their upper bounds and practice, especially given the intense interest in the generalization error problem.[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1581,"Because their upper bounds look similar to those based on Rademacher complexity or VC dimension (although they claim theirs are a little tighter) - they should put numbers in to their upper bounds taken from trained neural networks, and see what the numerical evaluation of their upper bounds turn out to be in situations of practical interest where deep networks show good generalization performance despite having significantly less training data than number of parameters.[numerical evaluation-NEU, performance-POS], [SUB-NEU, CMP-NEU]",numerical evaluation,performance,,,,,SUB,CMP,,,,NEU,POS,,,,,NEU,NEU,,, 1582,"I suspect their upper bounds will be loose,[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1583,"but still - it would be an excellent contribution to the literature to quantitatively compare theory and practice with bounds that are claimed to be slightly tigher than previous bounds.[contribution-NEU, literature-NEU], [SUB-NEU, IMP-NEU]",contribution,literature,,,,,SUB,IMP,,,,NEU,NEU,,,,,NEU,NEU,,, 1585,"The paper discusses the problem of optimizing neural networks with hard threshold and proposes a novel solution to it.[paper-POS, problem-NEU, solution-POS], [NOV-POS]",paper,problem,solution,,,,NOV,,,,,POS,NEU,POS,,,,POS,,,, 1586,"The problem is of significance because in many applications one requires deep networks which uses reduced computation and limited energy.[problem-POS, significance-POS], [IMP-POS]",problem,significance,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 1587,"The authors frame the problem of optimizing such networks to fit the training data as a convex combinatorial problems.[problem-NEU, training data-NEU], [EMP-NEU]",problem,training data,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1588,"However since the complexity of such a problem is exponential, the authors propose a collection of heuristics/approximations to solve the problem.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1589,"These include, a heuristic for setting the targets at each layer, using a soft hinge loss, mini-batch training and such.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1590,"Using these modifications the authors propose an algorithm (Algorithm 2 in appendix) to train such models efficiently.[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1591,"They compare the performance of a bunch of models trained by their algorithm against the ones trained using straight-through-estimator (SSTE) on a couple of datasets, namely, CIFAR-10 and ImageNet.[performance-NEU, models-NEU], [EMP-NEU]",performance,models,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1593,"I thought the paper is very well written and provides a really nice exposition of the problem of training deep networks with hard thresholds.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1594,"The authors formulation of the problem as one of combinatorial 
optimization and proposing Algorithm 1 is also quite interesting.[problem-POS, Algorithm-POS], [EMP-POS]",problem,Algorithm,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1595,"The results are moderately convincing in favor of the proposed approach.[results-POS, proposed approach-POS], [EMP-POS]",results,proposed approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1596,"Though a disclaimer here is that I'm not 100% sure that SSTE is the state of the art for this problem.[problem-NEU], [CMP-NEG]",problem,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 1597,"Overall i like the originality of the paper and feel that it has a potential of reasonable impact within the research community.[originality-POS, paper-POS, impact-POS], [IMP-POS]",originality,paper,impact,,,,IMP,,,,,POS,POS,POS,,,,POS,,,, 1599,"- The authors start of by posing the problem as a clean combinatorial optimization problem and propose Algorithm 1.[Algorithm-POS], [EMP-POS]",Algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1600,"Realizing the limitations of the proposed algorithm, given the assumptions under which it was conceived in, the authors relax those assumptions in the couple of paragraphs before section 3.1 and pretty much throw away all the nice guarantees, such as checks for feasibility, discussed earlier.[limitations-NEU, assumptions-NEU, section-NEU], [EMP-NEG]",limitations,assumptions,section,,,,EMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 1601,"- The result of this is another algorithm (I guess the main result of the paper), which is strangely presented in the appendix as opposed to the main text, which has no such guarantees.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1602,"- There is no theoretical proof that the heuristic for setting the target is a good one, other than a rough intuition[theoretical proof-NEG], [EMP-NEG]",theoretical proof,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1603,"- The authors do not discuss at all the impact on generalization ability of the model trained using the proposed approach.[discuss-NEG, model-NEU, proposed approach-NEU], [SUB-NEG]",discuss,model,proposed approach,,,,SUB,,,,,NEG,NEU,NEU,,,,NEG,,,, 1604,"The entire discussion revolves around fitting the training set and somehow magically everything seem to generalize and not overfit. [discussion-NEG], [EMP-NEG]",discussion,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1606,"The paper is incomplete and nowhere near finished, it should have been withdrawn[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 1607,". The theoretical results are presented in a bitmap figure and only referred to in the text (not explained),[theoretical results-NEG], [CLA-NEG, SUB-NEG]",theoretical results,,,,,,CLA,SUB,,,,NEG,,,,,,NEG,NEG,,, 1608,"and the results on datasets are not explained either (and pretty bad). A waste of my time.[results-NEG], [CLA-NEG]",results,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1610,"Summary: This paper presents a very interesting perspective on why deep neural networks may generalize well, in spite of their high capacity (Zhang et al, 2017).[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1612,"It first shows that a simple weakly regularized (linear) logistic regression model over 200 dimensional data can perfectly memorize a random training set with 200 points, while also generalizing well when the class labels are not random (eg, when a simple linear model explains the class labels); this provides a much simpler example of a model generalizing well in spite of high capacity, relative to the experiments presented by Zhang et al (2017). 
[model-POS], [CMP-POS, EMP-POS]",model,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 1613,"It shows that in this very simple setting, the evidence of a model correlates well with the test accuracy, and thus could explain this phenomena (evidence is low for model trained on random data, but high for model trained on real data).[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1620,"These scaling rules are confirmed experimentally (DNN trained on MNIST).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1621,"Thus, this Bayesian perspective can also help explain the observation that models trained with smaller batch sizes (noisier gradient estimates) often generalize better than those with larger batch sizes (Kesker et al, 2016).[observation-POS], [EMP-POS]",observation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1623,"Review: Quality: The quality of the work is high.[work-POS], [CLA-POS]",work,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1624,"Experiments and analysis are both presented clearly.[Experiments-POS, analysis-POS], [PNF-POS, EMP-POS]",Experiments,analysis,,,,,PNF,EMP,,,,POS,POS,,,,,POS,POS,,, 1625,"Clarity: The paper is relatively clear,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1626,"though some of the connections between the different parts of the paper felt unclear to me:[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1627,"1) It would be nice if the paper were to explain, from a theoretical perspective, why large evidence should correspond to better generalization, or provide an overview of the work which has shown this (eg, Rissanen, 1983).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 1628,"2) Could margin-based generalization bounds explain the superior generalization performance of the linear model trained on random vs. non-random data?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1630,"3) The connection between the work on Bayesian evidence, and the work on SGD, felt very informal.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 1631,"The link seems to be purely intuitive (SGD should converge to minima with high evidence, because its updates are noisy).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1632,"Can this be formalized?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1633,"There is a footnote on page 7 regarding Bayesian posterior sampling -- I think this should be brought into the body of the paper, and explained in more detail.[footnote-NEU, page-NEU, body-NEU], [SUB-NEU, PNF-NEU]",footnote,page,body,,,,SUB,PNF,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 1634,"4) The paper does not give any background on stochastic differential equations, and why there should be an optimal noise scale 'g', which remains constant during the stochastic process, for converging to a minima with high evidene.[background-NEG], [SUB-NEG]",background,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1635,"Are there any theoretical results which can be leveraged from the stochastic processes literature?[theoretical results-NEU], [EMP-NEU]",theoretical results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1636,"For example, are there results which prove anything regarding the convergence of a stochastic process under different amounts of noise?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1637,"5) It was unclear to me why momentum was used in the MNIST experiments.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1638,"This seems to complicate the experimental setting.[experimental setting-NEG], [EMP-NEG]",experimental setting,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1639,"Does the 
generalization gap not appear when no momentum is used?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1640,"Also, why is the same learning rate used for both small and large batch training for Figures 3 and 4?[Figures-NEU], [EMP-NEU]",Figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1641,"If the learning rate were optimized together with batch size (eg, keeping aN/B constant), would the generalization gap still appear?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1642,"Figure 5a seems to suggest that it would not appear (peaks appear to all have the same test accuracy).[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1643,"6) It was unclear to me whether the analysis of SGD as a stochastic differential equation with noise scale aN/((1-m)B) was a contribution of this paper.[analysis-NEU, contribution-NEU], [EMP-NEU]",analysis,contribution,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1644,"It would be good if it were made clearer which part of the mathematical analysis in sections 2 and 5 are original.[analysis-NEU, sections-NEU], [EMP-NEU]",analysis,sections,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1645,"7) Some small feedback: The notation $< x_i > 0$ and $< x_i^2 > 1$ is not explained.[notation-NEG], [EMP-NEG]",notation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1646,"Is each feature being normalized to be zero mean, unit variance, or is each training example being normalized?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1647,"Originality: The works seems to be relatively original combination of ideas from Bayesian evidence, to deep neural network research. [works-NEU], [NOV-NEU]",works,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 1648,"However, I am not familiar enough with the literature on Bayesian evidence, or the literature on sharp/broad minima, and their generalization properties, to be able to confidently say how original this work is.[work-NEU], [NOV-NEU]",work,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 1649,"Significance: I believe that this work is quite significant in two different ways: 1) Bayesian evidence provides a nice way of understanding why neural nets might generalize well, which could lead to further theoretical contributions.[work-POS, theoretical contributions-POS], [IMP-POS]",work,theoretical contributions,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 1650,"2) The scaling rules described in section 5 could help practitioners use much larger batch sizes during training, by simultaneously increasing the learning rate, the training set size, and/or the momentum parameter.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1651,"This could help parallelize neural network training considerably.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1652,"Some things which could limit the significance of the work: 1) The paper does not provide a way of measuring the (approximate) evidence of a model.[paper-NEG], [IMP-NEG]",paper,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 1653,"It simply says it is prohibitively expensive to compute for large models. Can the Gaussian approximation to the evidence (equation 10) be approximated efficiently for large neural networks? 
2) The paper does not prove that SGD converges to models of high evidence, or formally relate the noise scale 'g' to the quality of the converged model, or relate the evidence of the model to its generalization performance.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1655,"I think that the paper would be made stronger and clearer if the questions I raised above are addressed prior to publication.[paper-NEU], [REC-NEU]",paper,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 1660,"My first remark regards the presentation of the technique.[presentation-NEU], [PNF-NEU]",presentation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1662,"I strongly disagree with this statement, not only because the technique deals exactly with augmenting data, but also because it can be used in combination to any learning method (including non-deep learning methodologies).[statement-NEG, technique-NEU], [EMP-NEG]",statement,technique,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1663,"Naturally, the literature review deals with data augmentation technique, which supports my point of view.[literature review-NEG], [CMP-NEG]",literature review,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1664,"In this regard, I would have expected comparison with other state-of-the-art data augmentation techniques.[comparison-NEG], [CMP-NEG, SUB-NEG]",comparison,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 1665,"The usefulness of the BC technique is proven to a certain extent (see paragraph below) but there is not comparison with state-of-the-art.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1666,"In other words, the authors do not compare the proposed method with other methods doing data augmentation.[proposed method-NEG], [CMP-NEG]",proposed method,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1667,"This is crucial to understand the advantages of the BC technique.[advantages-NEU], [EMP-NEU]",advantages,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1669,"Intuitively, the diagram shown in Figure 4 works well for 3 classes in dimension 2.[Figure-POS], [PNF-POS]",Figure,,,,,,PNF,,,,,POS,,,,,,POS,,,, 1670,"If we add another class, no matter how do we define the borders, there will be one pair of classes for which the transition from one to another will pass through the region of a third class.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1671,"The situation worsens with more classes.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1672,"However, this can be solved by adding one dimension, 4 classes and 3 dimensions seems something feasible.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 1673,"One can easily understand that if there is one more class than the number of dimensions, the assumption should be feasible, but beyond it starts to get problematic.[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1674,"This discussion does not appear at all in the manuscript and it would be an important limitation of the method, specially when dealing with large-scale data sets.[discussion-NEG], [SUB-NEG]",discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1675,"Overall I believe the paper is not mature enough for publication.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 1676,"Some minor comments: - 2.1: We introduce --> We discussion - Pieczak 2015a did not propose the extraction of MFCC.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 1677,"- the x_i and t_i of section 3.2.2 should not be denoted with the same letters as in 3.2.1.[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1678,"- The correspondence with a semantic feature space is too pretentious, specially since 
no experiment in this direction is shown.[experiment-NEG], [SUB-NEG]",experiment,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1679,"- I understand that there is no mixing in the test phase, perhaps it would be useful to recall it.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1682,"The idea is quite straightforward, and the paper is relatively easy to follow.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1683,"The proposed algorithm is validated on several image classification datasets.[proposed algorithm-POS], [SUB-POS]",proposed algorithm,,,,,,SUB,,,,,POS,,,,,,POS,,,, 1684,"The paper is its current form has the following issues: 1. There is hardly any baseline compared in the paper.[paper-NEG, baseline-NEG], [SUB-NEG, CMP-NEG]",paper,baseline,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1685,"The proposed algorithm is essentially an ensemble algorithm, there exist several works on deep model ensemble (e.g., Boosted convolutional neural networks, and Snapshot Ensemble) should be compared against.[proposed algorithm-NEU], [SUB-NEG, CMP-NEG]",proposed algorithm,,,,,,SUB,CMP,,,,NEU,,,,,,NEG,NEG,,, 1686,"2. I did not carefully check all the proofs, but seems most of the proof can be moved to supplementary to keep the paper more concise.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1687,"3. In Eq. (3), tilde{D} is not defined.[Eq-NEU], [CLA-NEG]",Eq,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 1688,"4. Under the assumption $epsilon_t(l) > frac{1}{2lambda}$, the definition of $beta_t$ in Eq.8 does not satisfy $0 < beta_t < 1$.[assumption-NEU, Eq-NEG], [EMP-NEU]",assumption,Eq,,,,,EMP,,,,,NEU,NEG,,,,,NEU,,,, 1689,"5. How many layers is the DenseNet-BC used in this paper?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1690,"Why the error rate reported here is higher than that in the original paper?[error rate-NEG], [EMP-NEG]",error rate,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1691,"Typo: In Session 3 Line 7, there is a missing reference.[Session-NEU, Line-NEU, reference-NEG], [CLA-NEG]",Session,Line,reference,,,,CLA,,,,,NEU,NEU,NEG,,,,NEG,,,, 1692,"In Session 3 Line 10, ""1,00 object classes"" should be ""100 object classes"".[Session-NEU, Line-NEU], [CLA-NEG]",Session,Line,,,,,CLA,,,,,NEU,NEU,,,,,NEG,,,, 1693,"In Line 3 of the paragraph below Equation 5, ""classe"" should be ""class"". 
[Line-NEU, Equation-NEU], [CLA-NEG]",Line,Equation,,,,,CLA,,,,,NEU,NEU,,,,,NEG,,,, 1697,"The network architecture is suitably described and seems reasonable to learn simultaneously similar games, which are visually distinct.[network architecture-POS], [EMP-POS]",network architecture,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1698,"However, the authors do not explain how this architecture can be used to do the domain adaptation.[architecture-NEU], [SUB-NEG]",architecture,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 1699,"Indeed, if some games have been learnt by the proposed algorithm, the authors do not precise what modules have to be retrained to learn a new game.[proposed algorithm-NEU], [EMP-NEG]",proposed algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1700,"This is a critical issue, because the experiments show that there is no gain in terms of performance to learn a shared embedding manifold (see DA-DRL versus baseline in figure 5).[issue-NEG, experiments-NEG, performance-NEG], [EMP-NEG]",issue,experiments,performance,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 1701,"If there is a gain to learn a shared embedding manifold, which is plausible, this gain should be evaluated between a baseline, that learns separately the games, and an algorithm, that learns incrementally the games.[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1702,"Moreover, in the experimental setting, the games are not similar but simply the same.[experimental setting-NEG], [EMP-NEG]",experimental setting,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1703,"My opinion is that this paper is not ready for publication.[paper-NEG], [APR-NEG, REC-NEG]",paper,,,,,,APR,REC,,,,NEG,,,,,,NEG,NEG,,, 1704,"The interesting issues are referred to future works. [issues-NEG], [IMP-NEG]",issues,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 1712,"* The authors do not compare with a lot of the state-of-the-art in outlier detection and the obvious baselines: SVDD/OneClassSVM without PCA, Gaussian Mixture Model, KNFST, Kernel Density Estimation, etc * The model selection using the AUC of inlier accepted fraction is not well motivated in my opinion.[baselines-NEG, model-NEU], [CMP-NEG]",baselines,model,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 1713,"This model selection criterion basically leads too a probability distribution with rather steep borders and indirectly prevents the outlier to be too far away from the positive data.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1714,"The latter is important for the GAN-like training.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 1715,"* The experiments are not sufficient: Especially for multi-class classification tasks, it is easy to sample various experimental setups for outlier detection. This allows for robust performance comparison. 
[experiments-NEG, performance-NEU], [SUB-NEG, EMP-NEU]",experiments,performance,,,,,SUB,EMP,,,,NEG,NEU,,,,,NEG,NEU,,, 1716,"* With the imbalanced training as described in the paper, it is quite natural that the confidence threshold for the classification decision needs to be adapted (not equal to 0.5)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1717,"* There are quite a few heuristic tricks in the paper and some of them are not well motivated and analyzed (such as the discriminator training from multiple generators)[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1718,"* A cross-entropy loss for the autoencoder does not make much sense in my opinion (?)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1719,"Minor comments: * Citations should be fixed (use citep to enclose them in ())[Citations-NEG], [PNF-NEG]",Citations,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1720,"* The term AI-related task sounds a bit too broad[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 1721,"* The authors could skip the paragraph in the beginning of page 5 on the AUC performance. AUC is a standard choice for evaluation in outlier detection.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1722,"* Where is Table 1?[Table-NEU], [PNF-NEU]",Table,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1723,"* There are quite a lot of typos.[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1724,"*After revision statement* I thank the authors for their revision, but I keep my rating.[null], [REC-NEU]",null,,,,,,REC,,,,,,,,,,,NEU,,,, 1725,"The clarity of the paper has improved;[clarity-POS], [CLA-POS]",clarity,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1726,"but the experimental evaluation is lacking realistic datasets and further simple baselines (as also stated by the other reviewers)[experimental evaluation-NEG, baselines-NEG], [SUB-NEG]",experimental evaluation,baselines,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1731,"The paper is moderately well written and structured.[paper-NEU], [CLA-NEU, PNF-NEU]",paper,,,,,,CLA,PNF,,,,NEU,,,,,,NEU,NEU,,, 1732,"Command of related work is ok,[related work-NEU], [CMP-NEU]",related work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1733,"but some relevant refs are missing (e.g., Kloft and Laskov, JMLR 2012).[refs-NEG], [SUB-NEG, CMP-NEG]",refs,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 1734,"The empirical results actually confirm that indeed the strategy of reducing the dimensionality using random projections reduces the impact from adversarial distortions.[empirical results-POS], [EMP-POS]",empirical results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1736,"What the paper really lacks in my opinion is a closer analysis of *why* the proposed approach works, i.e., a qualitative empirical analysis (toy experiment?) 
or theoretical justification.[proposed approach-NEG, analysis-NEG], [SUB-NEG, EMP-NEG]",proposed approach,analysis,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1737,"Right now, there is no theoretical justification for the approach, nor even a (in my opinion) convincing movitation/Intuition behind the approach.[theoretical justification-NEG], [SUB-NEG]",theoretical justification,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1738,"Also, the attack model should formally introduced.[model-NEU], [SUB-NEU]",model,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1739,"In summary, I d like to encourage the authors to further investigate into their approach, but I am not convinced by the manuscript in the current form.[approach-NEU], [EMP-NEG, SUB-NEU]",approach,,,,,,EMP,SUB,,,,NEU,,,,,,NEG,NEU,,, 1740,"It lacks both in sound theoretical justification and intuitive motivation of the approach.[theoretical justification-NEG, intuitive motivation-NEG, approach-NEU], [EMP-NEG, SUB-NEG]",theoretical justification,intuitive motivation,approach,,,,EMP,SUB,,,,NEG,NEG,NEU,,,,NEG,NEG,,, 1741,"The experiments, however, show clearly advantages of the approach (again, here further experiments are necessary, e.g., varying the dose of adversarial points). [experiments-POS], [EMP-NEG]",experiments,,,,,,EMP,,,,,POS,,,,,,NEG,,,, 1744,"I think overall it is a good idea.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1745,"But I find the paper lacking a lot of details and to some extend confusing.[paper-NEG, details-NEG], [SUB-NEG]",paper,details,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 1746,"Here are a few comments that I have: Figure 2 is very confusing for me. [Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1747,"Please first of all make the figures much larger.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1748,"ICLR does not have a strict page limit, and the figures you have are hard to impossible to read.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1749,"So you train in (a) on the steps task until 350k steps?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1750,"Is (b), (d),(c) in a sequence or is testing moving from plain to different things?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1751,"The plot does not explicitly account for the distillation phase.[plot-NEU], [EMP-NEU]",plot,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1752,"Or at least not in an intuitive way.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1753,"But if the goal is transfer, then actually PLAID is slower than the MultiTasker because it has an additional cost to pay (in frames and times) for the distillation phase right? Or is this counted.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1754,"Going then to Figure 3, I almost fill that the MultiTasker might be used to simulate two separate baselines.[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1755,"Indeed, because the retention of tasks is done by distilling all of them jointly, one baseline is to keep finetuning a model through the 5 stages, and then at the end after collecting the 5 policies you can do a single consolidation step that compresses all.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1756,"So it will be quite important to know if the frequent integration steps of PLAID are helpful (do knowing 1,2 and 3 helps you learn 4 better? 
Or knowing 3 is enough).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1757,"Where exactly is input injection used?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1758,"Is it experiments from figure 3.[experiments-NEU, figure-NEU], [EMP-NEU]",experiments,figure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1759,"What input is injecting?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1760,"What do you do when you go back to the task that doesn't have the input, feed 0?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1761,"What happens if 0 has semantics ? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1762,"Please say in the main text that details in terms of architecture and so on are given in the appendix.[details-NEG, architecture-NEU], [SUB-NEG]",details,architecture,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 1763,"And do try to copy a bit more of them in the main text where reasonable.[main text-NEU], [PNF-NEG]",main text,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 1764,"What is the role of PLAID?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1765,"Is it to learn a continual learning solution?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1766,"So if I have 100 tasks, do I need to do 100-way distillation at the end to consolidate all skills?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1767,"Will this be feasible?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1768,"Wouldn't the fact of having data from all the 100 tasks at the end contradict the traditional formulation of continual learning?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1769,"Or is it to obtain a multitask solution while maximizing transfer (where you always have access to all tasks, but you chose to sequentilize them to improve transfer)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1770,"And even then maximize transfer with respect to what?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1771,"Frames required from the environment? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1772,"If that are you reusing the frames you used during training to distill?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1773,"Can we afford to keep all of those frames around?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1774,"If not we have to count the distillation frames as well.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1775,"Also more baselines are needed.[baselines-NEG], [SUB-NEG]",baselines,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1776,"A simple baseline is just finetunning as going from one task to another, and just at the end distill all the policies found through out the way.[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1777,"Or at least have a good argument of why this is suboptimal compared to PLAID.[argument-NEU], [CMP-NEU, EMP-NEG]",argument,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEG,,, 1778,"I think the idea of the paper is interesting and I'm willing to increase (and indeed decrease) my score.[idea-POS], [REC-POS]",idea,,,,,,REC,,,,,POS,,,,,,POS,,,, 1779,"But I want to make sure the authors put a bit more effort into cleaning up the paper, making it more clear and easy to read.[paper-NEG], [PNF-NEG, CLA-NEG]",paper,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 1780,"Providing at least one more baseline (if not more considering the other things cited by them). 
[baseline-NEU], [SUB-NEG]",baseline,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 1786,"In sum, it is an interesting paper with promising results and the proposed methods were carefully evaluated in many setups.[paper-POS, results-POS, proposed methods-POS], [EMP-POS]",paper,results,proposed methods,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 1787,"Some detailed comments are: -tAlthough the use of affect lexica is innovative, the idea of extending the training objective function with lexica information is not new.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 1788,"Almost the same method was proposed in K.A. Nguyen, S. Schulte im Walde, N.T. Vu. Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction. In Proceedings of ACL, 2016.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 1789,"-tAlthough the lexicons for valence, arousal, and dominance provide different information, their combination did not perform best.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1791,"-tIn Figure 2, the authors picked four words to show that valence is helpful to improve Glove word beddings. It is not convincing enough for me.[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1792,"I would like to see to the top k nearest neighbors of each of those words. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1794,"This paper is proposing a new formulation for regularization of Wasserstein Generative Adversarial models (WGAN).[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 1796,"This problem is often regularized by adding a gradient penalty, ie a penalty of the form lambda E_{z~tau}}(||grad f (z)||-1)^2 where tau is the distribution of (tx+(1-x)y) where x is drawn according to the empirical measure and y is drawn according to the target measure.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1798,"Overall the paper is too vague on the mathematical part, and the experiments provided are not particularly convincing in assessing the benefit of the new penalty.[paper-NEG, experiments-NEG], [EMP-NEG]",paper,experiments,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1799,"The authors have tried to use mathematical formulations to motivate their choice, but they lack rigorous definitions/developments to make their point convincing.[definitions-NEG, developments-NEG], [EMP-NEG]",definitions,developments,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1800,"They should also present early their model and their mathematical motivation: in what sense is their new penalty preferable?[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1801,"Presentation issues: - in printed black and white versions most figures are meaningless.[Presentation issues-NEG, figures-NEG], [PNF-NEG]",Presentation issues,figures,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1802,"- red and green should be avoided on the same plots, as colorblind people will not perceived any difference…[plots-NEG], [PNF-NEG]",plots,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1803,"- format for images should be vectorial (eps or pdf), not jpg or png…[images-NEG], [PNF-NEG]",images,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1804,"- legend/sizes are not readable (especially in printed version).[legends-NEG, size-NEG], [PNF-NEG]",legends,size,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1805,"References issues: - harmonize citations: if you add first name for some authors add them for all of them: why writing Harold W. Kuhn and C. 
Vilani for instance?[References issues-NEU], [PNF-NEU]",References issues,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1806,"- cramer->Cramer - wasserstein->Wasserstein (2x) - gans-> GANs - Salimans et al. is provided twice, and the second is wrong anyway.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 1807,"Specific comments: page 1: - different more recent contributions -> more recent contributions - avoid double brackets ))[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 1808,"page 2: - Please rewrite the first sentence below Definition 1 in a meaningful way.[sentence-NEU], [PNF-NEU]",sentence,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1809,"- Section 3: if mu is an empirical distribution, it is customary to write it mu_n or hat mu_n (in a way that emphasizes the number of observations available).[Section-NEG], [CLA-NEG, PNF-NEG]",Section,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 1810,"- d is used as a discriminator and then as a distance. This is confusing…[null], [PNF-NEG, EMP-NEG]",null,,,,,,PNF,EMP,,,,,,,,,,NEG,NEG,,, 1811,"page 3: - f that plays the role of an appraiser (or critic)...: this paragraph could be extended and possibly elements of the appendix could be added here.[paragraph-NEU, appendix-NEU], [PNF-NEU]",paragraph,appendix,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 1812,"- Section 4: the way clipping is presented is totally unclear and vague.[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1813,"This should be improved.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 1814,"- Eq (5): as written the distribution of tilde{x} tx+(1-t)y is meaningless: What is x and y in this context?[Eq-NEU], [EMP-NEU]",Eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1815,"please can you describe the distributions in a more precise way?[distributions-NEU], [EMP-NEU]",distributions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1817,"Please state precise results using mathematical formulation.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1818,"- Observation 1: real and generated data points are not introduced at this stage... data points are not even introduced neither![Observation-NEG], [EMP-NEG]",Observation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1819,"page 5: - the examples are hard to understand. It would be helpful to add the value of pi^* and f^* for both models, and explaining in details how they fit the authors model. 
- in Figure 2 the left example is useless to me.[examples-NEG, Figure-NEG], [EMP-NEG]",examples,Figure,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1820,"It could be removed to focus more extensively on the continuous case (right example).[example-NEU], [EMP-NEU]",example,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1821,"- the the -> the page 6: - deterministic coupling could be discussed/motivated when introduced.[page-NEU], [EMP-NEU]",page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1822,"Observation 3 states some property of non non-deterministic coupling but the concept itself seems somehow to appear out of the blue.[Observation-NEU], [EMP-NEU]",Observation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1823,"page 10: - Figure 6: this example should be more carefully described in terms of distribution, f*, etc.[page-NEU, Figure-NEU], [PNF-NEU]",page,Figure,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 1824,"page 14: - Proposition 1: the proof could be shorten by simply stating in the proposition that f and g are distribution…[page-NEU, Proposition-NEU], [PNF-NEU]",page,Proposition,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 1825,"page 15: - we wish to compute-> we aim at showing?[page-NEU], [EMP-NEU]",page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1826,"- f_1 is not defined sot the paragraph the latter equation... showing that almost surely x leq y is unclear to me, so is the result then.[paragraph-NEG, equation-NEG], [EMP-NEG]",paragraph,equation,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1827,"It could be also interesting to (geometrically) interpret the coupling proposed.[null], [EMP-NEU, SUB-NEU]",null,,,,,,EMP,SUB,,,,,,,,,,NEU,NEU,,, 1828,"The would help understanding the proof, and possibly reuse the same idea in different context.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1829,"page 16: - proof of Proposition 2 : key idea here is using the positive and negative part of (f-g). 
This could simplify the proof.[page-NEU, Proposition-NEU], [EMP-NEU]]",page,Proposition,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1832,"The authors rightly say that one of the skills an autonomous car must have is the ability to change lanes, however this task is not one of the most difficult for autonomous vehicles to achieve and this ability has already been implemented in real vehicles.[task-NEG], [EMP-NEG]",task,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1834,"To make a stronger case for this research being relevant to the real autonomous driving problem, the authors would need to compare their algorithm to a real algorithm and prove that it is more ""data efficient.[algorithm-NEU], [CMP-NEU]",algorithm,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 1835,""" This is a difficult comparison since the sensing strategies employed by real vehicles u2013 LIDAR, computer vision, recorded, labeled real maps are vastly different from the slot car model proposed by the authors.[model-NEG, strategies-NEG], [CMP-NEG]",model,strategies,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 1836,"In term of impact, this is a theoretical paper looking at optimizing a sandbox problem where the results may be one day applicable to the real autonomous driving case.[paper-NEU, results-NEU], [IMP-NEU]",paper,results,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 1839,""" I am not sure what is meant by this since in this paper the authors never test their algorithm on real systems and in real systems it is not possible to completely eliminate collisions.[paper-NEG, algorithm-NEG], [CLA-NEG, EMP-NEG]",paper,algorithm,,,,,CLA,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1842,"This choice makes their algorithm not currently relevant to most autonomous vehicles that use ego-centric sensing.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1843,"This paper presents a learning algorithm that can ""outperform a greedy baseline in terms of efficiency"" and ""humans driving the simulator in terms of safety and success"" within their top view driving game.[learning algorithm-POS], [EMP-POS]",learning algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1847,"It is unclear if the simulator extends beyond a single straight section of highway, as shown in Figure 1.[simulator-NEG, Figure 1-NEG], [CLA-NEG, EMP-NEG]",simulator,Figure 1,,,,,CLA,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 1852,"This makes the high level learning strategy more efficient because it does not have to explore these possibilities (Q-masking).[strategy-POS], [EMP-POS]",strategy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1853,"The authors claim that this limitation of the simulation is made valid by the ability of the low level controller to incorporate prior knowledge and perfectly limit these actions.[authors-NEU], [EMP-POS]",authors,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 1855,"In terms of evaluation, the authors do not compare their result against any other method.[evaluation-NEG, result-NEU, method-NEG], [CMP-NEG]",evaluation,result,method,,,,CMP,,,,,NEG,NEU,NEG,,,,NEG,,,, 1856,"Instead, using only one set of test parameters, the authors compare their algorithm to a ""greedy baseline"" policy that is specified a ""always try to change lanes to the right until the lane is correct"" then it tries to go as fast as possible while obeying the speed limit and not colliding with any car in front.[algorithm-NEU, baseline-NEU], [CMP-NEU]",algorithm,baseline,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 1857,"It seems that baseline is additionally constrained vs the ego car due to the speed limit and the collision avoidance criteria and is not a fair comparison.[baseline-NEG], 
[CMP-NEG]",baseline,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1858,"So given a fixed policy and these constraints it is not surprising that it underperforms the Q-masked Q-learning algorithm.[policy-NEG, constraints-NEG], [CMP-NEG]",policy,constraints,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 1859,"With respect to the comparison vs. human operators of the car simulation, the human operators were not experts.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1860,"They were only given ""a few trials"" to learn how to operate the controls before the test.[test-NEG], [CMP-NEG]",test,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1861,"It was reported that the human participants ""did not feel comfortable"" with the low level controller on, possibly indicating that the user experience of controlling the car was less than ideal.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1863,"It is possibly not a fair claim to say that human drivers were ""less safe"" but rather that it was difficult to play the game or control the car with the safety module on.[claim-NEG], [EMP-NEG]",claim,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1864,"This could be seen as a game design issue.[issue-NEG], [EMP-NEG]",issue,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1865,"It was not clear from this presentation how the human participants were rewarded for their performance.[presentation-NEG, performance-NEU], [CLA-NEG, EMP-NEU]",presentation,performance,,,,,CLA,EMP,,,,NEG,NEU,,,,,NEG,NEU,,, 1866,"In more typical HCI experiments the gender distribution and ages ranges of participants are specified as well as how participants were recruited and how the game was motivated, including compensation (reward) are specified.[compensation-NEU, reward-NEU], [EMP-NEU]",compensation,reward,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1867,"Overall, this paper presents an overly simplified game simulation with a weak experimental result.[experimental result-NEG], [EMP-NEG]]",experimental result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1870,"The result shows that the proposed method performs better than several hand-designed baselines on two downstream prediction tasks in Starcraft.[result-POS, proposed method-POS, baselines-POS], [EMP-POS]",result,proposed method,baselines,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 1874,"- The proposed method is not much novel.[proposed method-NEG], [NOV-NEG]",proposed method,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 1875,"- The evaluation is a bit limited to two specific downstream prediction tasks.[evaluation-NEG], [SUB-NEG]",evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1876,"# Novelty and Significance - The problem considered in this paper is interesting.[problem-POS, paper-POS], [EMP-POS]",problem,paper,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1877,"- The proposed method is not much novel.[proposed method-NEG], [NOV-NEG]",proposed method,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 1878,"- Overall, this paper is too specific to Starcraft domain + particular downstream prediction tasks.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1879,"It would be much stronger to show the benefit of defogging objective on the actual gameplay rather than prediction tasks.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 1880,"Alternatively, it could be also interesting to consider an RL problem where the agent should reveal the hidden state of the opponent as much/quickly as possible.[problem-NEU], [SUB-NEU]",problem,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 1881,"# Quality - The experimental result is not much comprehensive.[result-NEG], [EMP-NEG]",result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1882,"The proposed method is expected 
to perform better than hand-designed methods on downstream prediction tasks.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1883,"It would be better to show an in-depth analysis of the learned model or show more results on different tasks (possibly RL tasks rather than prediction tasks).[analysis-NEG], [SUB-NEG]",analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1884,"# Clarity - I did not fully understand the learning objective.[objective-NEG], [CLA-NEG]",objective,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 1885,"Does the model try to reconstruct the state of the current time-step or the future?[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 1886,"The learning objective is not clearly defined.[objective-NEG], [EMP-NEG]",objective,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1887,"In Section 4.1, the target x and y have time steps from t1 to t2.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1888,"What is the range of t1 and t2?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1889,"If the proposed model is doing future prediction, it would be important to show and discuss long-term prediction results.[proposed model-NEU, results-NEU], [IMP-NEU]",proposed model,results,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 1891,"The paper is well written, and the authors do an admirable job of motivating their primary contributions throughout the early portions of the paper.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1892,"Each extension to the Dual Actor-Critic is well motivated and clear in context.[null], [CLA-POS]",null,,,,,,CLA,,,,,,,,,,,POS,,,, 1893,"Perhaps the presentation of these extensions could be improved by providing a less formal explanation of what each does in practice; multi-step updates, regularized against MC returns, stochastic mirror descent.[presentation-NEU], [PNF-NEU]",presentation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1894,"The practical implementation section losses some of this clear organization, and could certainly be clarified each part tied into Algorithm 1, and this was itself made less high-level. 
But these are minor gripes overall.[Algorithm-NEU], [EMP-NEG]",Algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1895,"Turning to the experimental section, I think the authors did a good job of evaluating their approach with the ablation study and comparisons with PPO and TRPO.[approach-POS, comparisons-POS], [EMP-POS, CMP-POS]",approach,comparisons,,,,,EMP,CMP,,,,POS,POS,,,,,POS,POS,,, 1897,"The difference in performance for Dual-AC between Figure 1 and Figure 2b is significant, but the only difference seems to be a reduce batch size, is this right?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1898,"This suggests a fairly significant sensitivity to this hyperparameter if so.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1899,"Reproducibility in continuous control is particularly problematic.[Reproducibility-NEG], [IMP-NEG]",Reproducibility,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 1900,"Nonetheless, in recent work PPO and TRPO performance on the same set of tasks seem to be substantively different than what the authors get in their experiments.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1903,"In both these cases the results for PPO and TRPO vary pretty significantly from what we see here, and an important one to look at is the InvertedDoublePendulum-v1 task, which I would think PPO would get closer to 8000, and TRPO not get off the ground.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1904,"Part of this could be the notion of an iteration, which was not clear to me how this corresponded to actual time steps.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1905,"Most likely, to my mind, is that the parameterization used (discussed in the appendix) is improving TRPO and hurting PPO.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1906,"With these in mind I view the comparison results with a bit of uncertainty about the exact amount of gain being achieved,;[comparison results-NEG], [CMP-NEG]",comparison results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1907,"which may beg the question if the algorithmic contributions are buying much for their added complexity?[algorithmic contributions-NEU], [EMP-NEU]",algorithmic contributions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1908,"Pros: Well written, thorough treatment of the approaches.[approaches-POS], [CLA-POS]",approaches,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1909,"Improvements on top of Dual-AC with ablation study show improvement.[Improvements-POS], [EMP-POS]",Improvements,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1910,"Cons: Empirical gains might not be very large.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1913,"In general, I find many of the observations in this paper interesting.[observations-POS], [EMP-POS]",observations,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1914,"However, this paper is not strong enough as a theory paper; rather, the value lies perhaps in its fresh perspective.[paper-NEG, value-NEU], [EMP-NEG]",paper,value,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1917,"Several simplifying assumptions are introduced, which rendered the implication of the main theorem vague,[assumptions-NEG, main theorem-NEG], [EMP-NEG]",assumptions,main theorem,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 1918,"but it can serve as a good start for the hardcore statistical learning-theoretical analysis to follow.[analysis-POS], [IMP-POS]",analysis,,,,,,IMP,,,,,POS,,,,,,POS,,,, 1919,"The second contribution of the paper is the (empirical) observation that, in terms of sparse recovery of embedded words, the pretrained embeddings are better than random matrices, the 
latter being the main focus of compressive sensing theory.[contribution-POS], [EMP-POS]",contribution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1920,"Partial explanations are provided, again using results in compressive sensing theory.[explanations-NEU, results-NEU], [CLA-NEU]",explanations,results,,,,,CLA,,,,,NEU,NEU,,,,,NEU,,,, 1921,"In my personal opinion, the explanations are opaque and unsatisfactory.[explanations-NEG], [EMP-NEG, CLA-NEG]",explanations,,,,,,EMP,CLA,,,,NEG,,,,,,NEG,NEG,,, 1924,"My most criticism regarding this paper is the narrow scope on compressive sensing, and this really undermines the potential contribution in Section 5.[Section-NEG, paper-NEG, contribution-NEU], [IMP-NEG]",Section,paper,contribution,,,,IMP,,,,,NEG,NEG,NEU,,,,NEG,,,, 1925,"Specifically, the authors considered only Basis Pursuit estimators for sparse recovery, and they used the RIP of design matrices as the main tool to argue what is explainable by compressive sensing and what is not.[authors-NEG, main tool-NEU], [SUB-NEG]",authors,main tool,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 1926,"This seems to be somewhat of a tunnel-visioning for me: There are a variety of estimators in sparse recovery problems, and there are much less restrictive conditions than RIP of the design matrices that guarantee perfect recovery.[conditions-NEG], [SUB-NEG]",conditions,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1930,"Furthermore, it is proved in the same paper that Restricted Strong Convexity (RSC) alone is enough to guarantee successful recovery; RIP is not required at all.[same paper-NEG], [CMP-NEG]",same paper,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 1931,"While, as the authors argued in Section 5.2, it is easy to see that pretrained embeddings can never possess RIP, they do not rule out the possibility of RSC.[authors-NEU, Section-NEU], [CMP-NEU]",authors,Section,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 1933,"Several minor comments: 1. Please avoid the use of ""information theory"", especially ""classical information theory"", in the current context.[context-NEG], [EMP-NEG]",context,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1934,"These words should be reserved to studies of Channel Capacity/Source Coding `a la Shannon.[words-NEG, studies-NEU], [EMP-NEG]",words,studies,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1935,"I understand that in recent years people are expanding the realm of information theory, but as compressive sensing is a fascinating field that deserves its own name, there's no need to mention information theory here.[field-NEU], [IMP-NEG]",field,,,,,,IMP,,,,,NEU,,,,,,NEG,,,, 1936,"2. In Theorem 4.1, please be specific about how the l2-regularization is chosen.[Theorem-NEG], [SUB-NEG]",Theorem,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1937,"3. In Section 4.1, please briefly describe why you need to extend previous analysis to the Lipschitz case.[Section-NEG], [SUB-NEG]",Section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1939,"4. Can the authors briefly comment on the two assumptions in Section 4, especially the second one (on n- cooccurrence)? Is this practical?[assumptions-NEG], [EMP-NEG]",assumptions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1941,"6. Page 2, first paragraph of related work, the sentence ""Our method also closely related to ..."" is incomplete.[sentence-NEG], [PNF-NEG]",sentence,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1942,"7. 
Page 2, second paragraph of related work, ""Pagliardini also introduceD a linear ..."" 8.[paragraph-NEG], [PNF-NEG]",paragraph,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1943,"Page 9, conclusion, the beginning sentence of the second paragraph is erroneous.[conclusion-NEG, paragraph-NEG, sentence-NEG], [CLA-NEG]",conclusion,paragraph,sentence,,,,CLA,,,,,NEG,NEG,NEG,,,,NEG,,,, 1947,"Scheme A consists of training a high precision teacher jointly with a low precision student.[Scheme-NEU], [EMP-NEU]",Scheme,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1948,"Scheme B is the traditional knowledge distillation method and Scheme C uses knowledge distillation for fine-tuning a low precision student which was pretrained in high precision mode.[Scheme-NEU], [EMP-NEU]",Scheme,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1949,"Review: The paper is well written. [paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 1950,"The experiments are clear and the three different schemes provide good analytical insights.[analytical insights-POS, experiments-POS], [EMP-POS]",analytical insights,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 1951,"Using scheme B and C student model with low precision could achieve accuracy close to teacher while compressing the model. [accuracy-NEG], [EMP-NEU]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 1952,"Comments: Tensorflow citation is missing. [citation-NEG], [PNF-NEG]",citation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 1953,"Conclusion is short and a few directions for future research would have been useful.[Conclusion-NEG, future research-NEG], [IMP-NEU]]",Conclusion,future research,,,,,IMP,,,,,NEG,NEG,,,,,NEU,,,, 1955,"The method is an extension of the GloVe method and in the case of a single covariate value the proposed method reduces to GloVe.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1958,"Though not technically difficult, the extension of GloVe to covariate-dependent embeddings is very interesting and well motivated.[extension-POS], [EMP-POS]",extension,,,,,,EMP,,,,,POS,,,,,,POS,,,, 1959,"Some of the experimental results do a good job of demonstrating the advantages of the models.[experimental results-POS, models-NEU], [EMP-POS]",experimental results,models,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 1960,"However, some of the experiments are not obvious that the model is really doing a good job.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1961,"I have some small qualms with the presentation of the method.[presentation-NEU, method-NEU], [PNF-NEU]",presentation,method,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 1962,"First, using the term size m for the number of values that the covariate can take is a bit misleading.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 1963,"Usually the size of a covariate would be the dimensionality.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1964,"These would be the same if the covariate is one hot coded, however, this isn't obvious in the paper right now.[paper-NEU], [EMP-NEG]",paper,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 1965,"Additionally, v_i and c_k live in R^d, however, it's not really explained what 'd' is, is it the number of 'topics', or something else?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1966,"Additionally, the functional form chosen for f() in the objective was chosen to match previous work but with no explanation as to why that's a reasonable form to choose.[previous work-NEU, explanation-NEG], [CMP-NEG]",previous work,explanation,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 1967,"Finally, the authors say toward the end of Section 2 that A careful comparision 
shows that this approximation is precisely that which is implied by equation 4, as desired.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1969,"Regarding the experiments there needs to be more discussion about how the different model parameters were determined.[experiments-NEU, discussion-NEG, model-NEU], [SUB-NEG]",experiments,discussion,model,,,,SUB,,,,,NEU,NEG,NEU,,,,NEG,,,, 1970,"The authors say ... and after tuning our algorithm to emged this dataset, ..., but this isn't enough.[dataset-NEG], [EMP-NEG]",dataset,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1971,"What type of tuning did you do to choose in particular the latent dimensionality and the learning rate?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1972,"I will detail concerns for the specific experiments below. Section 4.1: - How does held-out data fit into the plot?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1973,"Section 4.2: - For the second embedding, what exactly was the algorithm trained on?[Section-NEU, algorithm-NEU], [EMP-NEU]",Section,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1976,"Are higher or lower values better? Maybe highlight the best scores for each column.[null], [EMP-NEU, PNF-NEU]",null,,,,,,EMP,PNF,,,,,,,,,,NEU,NEU,,, 1977,"Section 4.3: - Many of these distributions don't look sparse.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 1978,"- There is a terminology problem in this section.[terminology-NEG, section-NEG], [PNF-NEG]",terminology,section,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1979,"Coordinates in a vector are not sparse, the vector itself is sparse if there are many zeros, but coordinates are either zero or not zero.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1980,"The authors' use of 'sparse' when they mean 'zero' is really confusing.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 1981,"- Due to the weird sparsity terminology Table 1 is very confusing.[terminology-NEG, Table-NEG], [PNF-NEG]",terminology,Table,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 1983,"But if so, then these vectors aren't sparse at all as most values are non-zero.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 1984,"Section 5.1: - I don't agree with the authors that the topics in Table 3 are interpretable.[Section-NEU, Table-NEU], [EMP-NEU]",Section,Table,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 1986,"This isn't necessarily a problem, it's fine for models to not do everything well, but it's a stretch for the authors to claim that these results are a positive aspect of the model.[results-NEG, model-NEG], [EMP-POS]",results,model,,,,,EMP,,,,,NEG,NEG,,,,,POS,,,, 1987,"The results in Section 5.2 seem to make a lot of sense and show the big contribution of the model.[results-POS, contribution-POS, model-POS], [EMP-POS, IMP-POS]",results,contribution,model,,,,EMP,IMP,,,,POS,POS,POS,,,,POS,POS,,, 1988,"Section 5.3: - What is the a : b :: c : d notation? 
[Section-NEU], [PNF-NEU]",Section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 1992,"Concerning the first objective the empirical results do not provide meaningful support that the generative model is really effective.[empirical results-NEG], [EMP-NEG]",empirical results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 1993,"The improvement is really tiny and a statistical test (not included in the analysis) probably wouldn't pass a significant threshold.[improvement-NEU, analysis-NEG], [SUB-NEG, IMP-NEG]",improvement,analysis,,,,,SUB,IMP,,,,NEU,NEG,,,,,NEG,NEG,,, 1994,"This analysis is missing a straw man.[analysis-NEG], [SUB-NEG]",analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 1995,"It is not clear whether the difference in the evaluation measures is related to the greater number of examples or by the specific generative model.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 1996,"Concerning the contribution of the model, one novelty is the conditional formulation of the discriminator.[contribution-NEU], [NOV-NEU]",contribution,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 1997,"The design of the empirical evaluation doesn't address the analysis of the impact of this new formulation.[empirical evaluation-NEG, analysis-NEU], [EMP-NEG]",empirical evaluation,analysis,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 1998,"It is not clear whether the supposed improvement is related to the conditional formulation.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2000,"It is not clear how the authors operated the choices of these figures.[figures-NEU], [EMP-NEU]",figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2001,"From the perspective of neuroscience a reader, would expect to look at the brain maps for the same collection with different methods.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 2002,"The pairwise brain maps would support the interpretation of the generated data.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2004,"Minor comments - typos: a first application or this > a first application of this (p.2) - qualitative quality (p.2)[typos-NEG], [PNF-NEG]",typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2013,"This paper extends the existing results in some subtle ways.[paper-NEU, results-NEU], [EMP-NEU]",paper,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2015,"For (2), the hard functions has a better parameterization, and the gap between 3-layer and 2-layer is proved bigger. 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2017,"The stronger results of (1), (2), (4) all rely on the specific piece-wise linear nature of ReLU.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2018,"Other than that, I don't get much more insight from the theoretical result.[theoretical result-NEG], [EMP-NEG]",theoretical result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2019,"When the input dimension is n, the representability result of (1) fails to show that a polynomial number of neurons is sufficient.[result-NEG], [EMP-NEG]",result,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2020,"Perhaps an exponential number of neurons is necessary in the worst case, but it will be more interesting if the authors show that under certain conditions a polynomial-size network is good enough.[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 2021,"Result (3) is more interesting as it is a new result.[result-POS], [NOV-POS]",result,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2022,"The authors present a constructive proof to show that ReLU-activated DNN can represent many linear pieces.[proof-NEU], [EMP-NEU]",proof,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2023,"However, the construction seems artificial and these functions don't seem to be visually very complex.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2024,"Overall, this is an incremental work in the direction of studying the representation power of neural networks.[work-POS], [NOV-POS]",work,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2025,"The results might be of theoretical interest,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2026,"but I doubt if a pragmatic ReLU network user will learn anything by reading this paper.[paper-NEG], [IMP-NEG]",paper,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 2031,"I'm also not sur to see much differences with the previous work by Haarnoja et al and Schulman et al.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2034,"The authors say they compare to DQfD but the last version of this method makes use of prioritized replay so as to avoid reusing too much the expert transitions and overfit (L2 regularization is also used).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2035,"It seems this has not been implemented for comparison and that overfitting may come from this method missing.[comparison-NEG], [SUB-NEG, EMP-NEU]",comparison,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEU,,, 2036,"I'm also uncomfortable with the way most of the expert data are generated for experiments.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2037,"Using data generated by a pre-trained network is usually not representative of what will happen in real life.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2038,"Also, corrupting actions with noise in the replay buffer is not simulating correctly what would happen in reality.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2039,"Indeed, a single error in some given state will often generate totally different trajectories and not affect a single transition.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2040,"So imperfect demonstration have very typical distributions.[demonstration-NEG], [EMP-NEG]",demonstration,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2041,"I acknowledge that some real human demonstrations are used but there is not much about them and the experiment is very shortly described. 
[demonstrations-NEG, experiment-NEG], [SUB-NEG]",demonstrations,experiment,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 2043,"The main insight in this paper is that LSTMs can be viewed as producing a sort of sketch of tensor representations of n-grams.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2044,"This allows the authors to design a matrix that maps bag-of-n-gram embeddings into the LSTM embeddings.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2047,"I didn't check all the proof details, but based on my knowledge of compressed sensing theory, the results seem plausible.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2048,"I think the paper is a nice contribution to the theoretical analysis of LSTM word embeddings.[contribution-POS], [EMP-POS, IMP-POS]",contribution,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 2052,"This paper establishes an interesting connection between least squares population loss and Hermite polynomials.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2053,"Following from this connection authors propose a new loss function.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 2054,"Interestingly, they are able to show that the loss function globally converges to the hidden weight matrix, Simulations confirm the findings.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2055,"Overall, pretty interesting result and solid contribution.[result-POS, contribution-POS], [EMP-POS, IMP-POS]",result,contribution,,,,,EMP,IMP,,,,POS,POS,,,,,POS,POS,,, 2056,"The paper also raises good questions for future works.[future works-POS], [IMP-POS]",future works,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2058,"In summary, I recommend acceptance.[null], [REC-POS]",null,,,,,,REC,,,,,,,,,,,POS,,,, 2059,"The paper seems rushed to me so authors should polish up the paper and fix typos.[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2060,"Two questions: 1) Authors do not require a^* to recover B^*. Is that because B^* is assumed to have unit length rows?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2061,"If so they should clarify this otherwise it confuses the reader a bit.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2062,"2) What can be said about rate of convergence in terms of network parameters?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2063,"Currently a generic bound is employed which is not very insightful in my opinion.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2066,"The combination of temporal logic formulas with reinforcement learning was developed previously in the literature, and the main contribution of this paper is for fast skill composition.[contribution-NEU], [EMP-NEU, IMP-NEU]",contribution,,,,,,EMP,IMP,,,,NEU,,,,,,NEU,NEU,,, 2067,"The system uses logic formulas in truncated linear temporal logic (TLTL), which lacks an Always operator and where the LTL formula (A until B) also means that B must eventually hold true. 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2068,"The temporal truncation also requires the use of a specialized MDP formulation with an explicit and fixed time horizon T.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2069,"The exact relationship between the logical formulas and the stochastic trajectories of the MDP is not described in detail here, but relies on a robustness metric, rho.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2070,"The main contributions of the paper are to provide a method that converts a TLTL formula that specifies a task into a reward function for a new augmented MDP (that can be used by a conventional RL algorithm to yield a policy), and a method for quickly combining two such formulas (and their policies) into a new policy.[main contributions-NEU, method-NEU], [EMP-NEU]",main contributions,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2072,"The main problem with this paper is that the connections between the TLTL formulas and the conventional RL objectives are not made sufficiently clear.[main problem-NEG], [EMP-NEG]",main problem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2073,"The robustness term rho is essential, but it is not defined.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2074,"I was also confused by the notation $D_phi^q$, which was described but not defined.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2076,"The fact that there may be many policies which satisfy a particular reward function (or TLTL formula) is ignored.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2077,"This means that skill composition that is proposed in this paper might be quite far from the best policy that could be learned directly from a single conjunctive TLTL formula.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2078,"It is unclear how this approach manages tradeoffs between objectives that are specified as a conjunction of TLTL goals.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2079,"is it better to have a small probability of fulfilling all goals, or to prefer a high probability of fulfilling half the goals?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2080,"In short the learning objectives of the proposed composition algorithm are unclear after translation from TLTL formulas to rewards. [algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2083,"Results are provided in Section 4 for linear networks and in Section 5 for nonlinear networks.[Results-NEU, Section-NEU], [PNF-NEU]",Results,Section,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 2084,"Results for deep linear neural networks are puzzling.[Results-NEG], [CLA-NEG]",Results,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2086,"So results in Section 4 are just results for linear regression and I do not understand why the number of layers come into play?[results-NEU, Section-NEU], [EMP-NEU]",results,Section,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2087,"Also this is never explicitly mentioned in the paper, I guess the authors make an assumption that the samples (x_i,y_i) are drawn i.i.d. from a given distribution D.[paper-NEG, assumption-NEU], [SUB-NEG, EMP-NEG]",paper,assumption,,,,,SUB,EMP,,,,NEG,NEU,,,,,NEG,NEG,,, 2088,"In such a case, I am sure results on the population risk minimization can be found for linear regression and should be compare to results in Section 4. 
[results-NEU, Section-NEU], [CMP-NEU]",results,Section,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 2090,"I very much appreciate the objectives of this paper: learning compositional structures is critical for scaling and transfer.[objectives-POS], [EMP-POS]",objectives,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2092,"Some previous work is cited, but I would point the authors to much older work of Parr and Russell on HAMs (hierarchies of abstract machines) and later work by Andre and Russell, which did something very similar (though, indeed, not in hybrid domains).[previous work-NEU], [CMP-NEU]",previous work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2093,"The idea of extracting policies corresponding to individual automaton states and making them into options seems novel,[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2094,"but it would be important to argue that those options are likely to be useful again under some task distribution.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2095,"The second part offers an exciting result: If we learn policy pi_1 to satisfy objective phi_1 and policy pi_2 to satisfy objective phi_2, then it will be possible to switch between pi_1 and pi_2 in a way that satisfies phi_1 ^ phi_2.[result-POS], [EMP-POS]",result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2096,"This just doesn't make sense to me.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2097,"What if phi_1 is o ((A v B) Until C) and phi_2 is o ((not A v B) Until C).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2099,"However, we may find policy pi_1 that makes A true and B false (in general, there is no single optimal policy) and find pi_2 that makes A false and B false, and it will not be possible to satisfy the phi_1 and phi_2 by switching between the policies.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2101,"Some other smaller points: - zero-shot skill composition sounds a lot like what used to be called planning or reasoning[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2102,"- The function rho is originally defined on whole trajectories but in eq 7 it is only on a single s': I'm not sure exactly what that means.[eq-NEU], [EMP-NEG]",eq,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2103,"- Section 4: How is as soon as possible encoded in this objective?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2104,"- How does the fixed horizon interact with conjoining goals?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2105,"- There are many small errors in syntax; it would be best to have this paper carefully proofread.[syntax-NEG], [CLA-NEG, PNF-NEG]",syntax,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 2108,"*Quality* The problem addressed is surely relevant in general terms.[problem-POS], [APR-POS]",problem,,,,,,APR,,,,,POS,,,,,,POS,,,, 2109,"However, the contributed framework did not account for previously proposed metrics (such as equivariance, invariance and equivalence).[framework-NEG], [CMP-NEG]",framework,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2110,"Within the experimental results, only two methods are considered: although Info-GAN is a reliable competitor, PCA seems a little too basic to compete against.[experimental results-NEG], [SUB-NEG, CMP-NEG]",experimental results,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 2112,"Finally, in order to corroborate the quantitative results, authors should have reported some visual experiments in order to assess whether a change in c_j really correspond to a change in the corresponding factor of variation z_i according to the learnt monomial matrix.[results-NEU, experiments-NEU], 
[SUB-NEU]",results,experiments,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2113,"*Clarity* The explanation of the theoretical framework is not clear.[explanation-NEG, theoretical framework-NEG], [CLA-NEG]",explanation,theoretical framework,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2114,"In fact, Figure 1 is straight in identifying disentanglement and completeness as a deviation from an ideal bijective mapping.[Figure-NEU], [CLA-NEU, PNF-NEU]",Figure,,,,,,CLA,PNF,,,,NEU,,,,,,NEU,NEU,,, 2115,"But, then, the authors missed to clarify how the definitions of D_i and C_j translate this requirement into math.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2116,"Also, the criterion of informativeness of Section 2 is split into two sub-criteria in Section 3.3, namely test set NRMSE and Zero-Shot NRMSE: such shift needs to be smoothed and better explained, possibly introducing it in Section 2.[Section-NEU], [PNF-NEU, EMP-NEU]",Section,,,,,,PNF,EMP,,,,NEU,,,,,,NEU,NEU,,, 2118,"*Significance* The significance of the proposed evaluation framework is not fully clear.[significance-NEG], [IMP-NEG]",significance,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 2119,"The initial assumption of considering factors of variations related to graphics-generated data undermines the relevance of the work.[assumption-NEG, work-NEG], [EMP-NEG]",assumption,work,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2120,"Actually, authors only consider synthetic (noise-free) data belonging to one class only, thus not including the factors of variations related to noise and/or different classes.[data-NEG], [SUB-NEG]",data,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2121,"PROS: The problem faced by the authors is interesting.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2122,"CONS: The criteria of disentanglement, informativeness & completeness are not fully clear as they are presented..[criteria-NEG], [CLA-NEG]",criteria,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2124,"Thus, it is not possible to elicit from the paper to which extent they are novel or how they are related...[null], [NOV-NEG, CMP-NEG]",null,,,,,,NOV,CMP,,,,,,,,,,NEG,NEG,,, 2125,"The dataset considered is noise-free and considers one class only.[dataset-NEG], [SUB-NEG]",dataset,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2126,"Thus, several factors of variation are excluded a priori and this undermines the significance of the analysis.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2127,"The experimental evaluation only considers two methods, comparing Info-GAN, a state-of-the-art method, with a very basic PCA.[experimental evaluation-NEG], [SUB-NEG, EMP-NEG]",experimental evaluation,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 2128,"**FINAL EVALUATION** The reviewer rates this paper with a weak reject due to the following points.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 2130,"2) There are two flaws in the experimental validation:.[experimental validation-NEG], [EMP-NEG]",experimental validation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2131,"t2.1) The number of methods in comparison (InfoGAN and PCA) is limited.[methods-NEG], [SUB-NEG, CMP-NEG]",methods,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 2132,"t2.2) A synthetic dataset is only considered.[dataset-NEG], [EMP-NEG, SUB-NEG]",dataset,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 2133,"The reviewer is favorable in rising the rating towards acceptance if points 1 and 2 will be fixed.[rating-NEU], [REC-NEU]",rating,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 2135,"In particular, with respect to the highlighted points 1 and 2, point 1 has been thoroughly answered and the novelty with respect previous 
work is now clearly stated in the paper.[previous work-POS], [NOV-POS]",previous work,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2136,"Despite the same level of clarification has not been reached for what concerns point 2, the proposed framework (although still limited in relevance due to the lack of more realistic settings) can be useful for the community as a benchmark to verify the level of disentanglement than newly proposed deep architectures can achieve.[proposed framework-NEU], [IMP-NEU]",proposed framework,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 2137,"Finally, by also taking into account the positive evaluation provided by the fellow reviewers, the rating of the paper has been risen towards acceptance. .[rating-POS], [REC-POS]",rating,,,,,,REC,,,,,POS,,,,,,POS,,,, 2149,"Experiments show that FTP often outperforms saturated STE on CIFAR and ImageNet with sign and quantized activation functions, reaching levels of performance closer to the full-precision activation networks.[Experiments-POS, performance-POS], [EMP-POS, CMP-POS]",Experiments,performance,,,,,EMP,CMP,,,,POS,POS,,,,,POS,POS,,, 2150,"This paper's ideas are very interesting, exploring an alternative training method to backpropagation that supports hard-threshold activation functions.[ideas-POS, method-POS], [EMP-POS]",ideas,method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2151,"The experimental results are encouraging,[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2152,"though I have a few questions below that prevent me for now from rating the paper higher.[questions-NEG, paper-NEG], [CLA-NEG]",questions,paper,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2153,"Comments and questions: 1) How computationally expensive is FTP?[computationally expensive-NEU], [EMP-NEU]",computationally expensive,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2154,"The experiments using ResNet indicate it is not prohibitively expensive, but I am eager for more details.[experiments-POS, details-NEG], [SUB-NEG, EMP-POS]",experiments,details,,,,,SUB,EMP,,,,POS,NEG,,,,,NEG,POS,,, 2155,"2) Does (Hubara et al., 2016) actually compare their proposed saturated STE with the orignal STE on any tasks?[tasks-NEU], [CMP-NEU]",tasks,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2156,"I do not see a comparison.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2157,"If that is so, should this paper also compare with STE?[paper-NEG], [CMP-NEU]",paper,,,,,,CMP,,,,,NEG,,,,,,NEU,,,, 2158,"How do we know if generalizing saturated STE is more worthwhile than generalizing STE?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2159,"3) It took me a while to understand the authors' subtle comparison with target propagation, where they say Our framework can be viewed as an instance of target propagation that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2160,"It seems that the difference is greater than explicitly stated, that prior target propagation used continuous optimization to set *continuous targets*. 
(One could imagine using continuous optimization to set discrete targets such as a convex relaxation of a constraint satisfaction problem.)[difference-NEG], [CMP-NEG]",difference,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2161,"Focusing on discrete targets gains the benefits of quantized networks.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2162,"If I am understanding the novelty correctly, it would strengthen the paper to make this difference clear.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2163,"4) On a related note, if feasible target propagation generalizes saturated straight through estimation, is there a connection between (continuous) target propagation and the original type of straight through estimation?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2164,"5) In Table 1, the significance of the last two columns is unclear.[Table-NEG], [CLA-NEG]",Table,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2165,"It seems that ReLU and Saturated ReLU are included to show the performance of networks with full-precision activation functions (which is good).[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2166,"I am unclear though on why they are compared against each other (bolding one or the other) and if there is some correspondence between those two columns and the other pairs, i.e., is ReLU some kind of analog of SSTE and Saturated ReLU corresponds to FTP-SH somehow?[null], [CMP-POS]",null,,,,,,CMP,,,,,,,,,,,POS,,,, 2167,"This paper proposes a new approach for feature upsampling called pixel deconvolution, which aims to resolve the checkerboard artifact of conventional deconvolution.[approach-POS], [NOV-POS]",approach,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2168,"By sequentially applying a series of decomposed convolutions, the proposed method explicitly forces the model to consider the relations between pixels, thus effectively improving the deconvolution network at some additional computational cost.[proposed method-POS, model-POS], [EMP-POS]",proposed method,model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2169,"Overall, the paper is clearly written, and the main motivation and methods are easy to understand.[paper-POS, motivation and methods-POS], [CLA-POS]",paper,motivation and methods,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 2170,"However, the checkerboard artifact is a well-known problem of deconvolution networks, and has been addressed by several approaches which are simpler than the proposed pixel deconvolution.[problem-NEU, approaches-NEU], [CMP-NEG]",problem,approaches,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 2171,"For example, it is well known that simple bilinear interpolation, optionally followed by convolutions, effectively removes the checkerboard artifact to some extent, and the bilinear additive upsampling proposed in Wonja et al., 2017 also demonstrated its effectiveness as an alternative to deconvolution.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 2172,"Comparisons against these approaches would make the paper stronger.[Comparisons-NEU, approaches-NEU, paper-NEU], [CMP-NEU]",Comparisons,approaches,paper,,,,CMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 2173,"Besides, comparisons/discussions based on the extensive analysis of various deconvolution architectures presented in Wonja et al., 2017 would also be interesting.[comparisons/discussions-NEU, analysis-NEU], [CMP-NEU, SUB-NEU]",comparisons/discussions,analysis,,,,,CMP,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 2176,"The motivation has certainly been clarified, but in my opinion it is still hazy.[motivation-POS, opinion-NEG], [CLA-POS, EMP-POS, 
REC-NEU]",motivation,opinion,,,,,CLA,EMP,REC,,,POS,NEG,,,,,POS,POS,NEU,, 2178,"So I think that the motivation behind introducing this specific difference should be clear.[motivation-NEG], [CLA-NEG]",motivation,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2179,"Is it to save the additional (small) overhead of using skip connections?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2180,"Nevertheless, the additional experiments and clarifications are very welcome.[experiments-NEG, clarifications-NEG], [CLA-NEG]",experiments,clarifications,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2182,"In that report alpha_l is a scalar instead of a vector.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2183,"Although it is interesting, the above case case also calls into question the additional value brought by the use of constrained optimization, a main contribution of the paper.[main contribution-NEU], [EMP-NEU]",main contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2184,"In light of the above, I have increased my score since I find this to be an interesting approach, but in my opinion the significance of the results as they stand is low.[score-POS], [REC-POS, EMP-NEG]",score,,,,,,REC,EMP,,,,POS,,,,,,POS,NEG,,, 2185,"The paper demonstrates that it is possible to obtain very deep plain networks (without skip connections) with improved performance through the use of constrained optimization that gradually removes skip connections, but the value of this demonstration is unclear because a) consistent improvements over past work or the lambda 0 case were not found,[paper-POS, past work-NEG], [EMP-POS, NOV-NEG, CMP-NEG, SUB-NEG]",paper,past work,,,,,EMP,NOV,CMP,SUB,,POS,NEG,,,,,POS,NEG,NEG,NEG, 2186,"and b) The technique still relies on skip connections in a sense so it's not clear that it suggests a truly different method of addressing the degradation problem.[method-NEG], [NOV-NEG]",method,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2187,"Original Review Summary: The contribution of this paper is a method for training deep networks such that skip connections are present at initialization, but gradually removed during training, resulting in a final network without any skip connections.[contribution-NEU, method-NEU], [EMP-NEU]",contribution,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2188,"The paper first proposes an approach based on a formulation of deep networks with (non-parameterized, non-gated) skip connections with an equality constraint that effectively removes the skip connections when satisfied.[paper-POS, approach-POS], [NOV-POS]",paper,approach,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 2189,"It is proposed to optimize the formulation using the method of Lagrange multipliers.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2192,"Quality and significance: The proposed methodology is simple and straightforward.[proposed methodology-POS], [EMP-POS]",proposed methodology,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2193,"The analysis with the toy network is interesting and helps illustrate the method.[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2194,"However, my main concerns with this paper are related to motivation and experiments.[motivation-NEG, experiments-NEG], [EMP-NEG]",motivation,experiments,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2195,"The motivation of the work is not clear at all.[motivation-NEG], [CLA-NEG]",motivation,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2196,"The stated goal is to address some of the issues related to the role of depth in deep networks, but I think it should be clarified which specific issues in particular are relevant to 
this method and how they are addressed.[method-NEG], [CLA-NEG]",method,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2197,"One could additionally consider that removing the skip connections at the end of training reduces the computational expense (slightly), but beyond that the expected utility of this investigation is very hazy from the description in the paper.[description-NEG, paper-NEG], [CLA-NEG, IMP-NEG]",description,paper,,,,,CLA,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 2198,"For MNIST and MNIST-Fashion experiments, the motivation is mentioned to be similar to Srivastava et al. (2015), but in that study the corresponding experiment was designed to test if deeper networks could be optimized.[motivation-NEG, experiment-NEU], [CLA-NEG]",motivation,experiment,,,,,CLA,,,,,NEG,NEU,,,,,NEG,,,, 2199,"Here, the generalization error is measured instead, which is heavily influenced by regularization.[error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2200,"Moreover, only some architectures appear to employ batch normalization, which is a potent regularizer.[architectures-NEU], [EMP-NEU]",architectures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2201,"The general difference between plain and non-plain networks is very likely due to optimization difficulties alone, and due to the above issues further comparisons can not be made from the results.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2202,"For the CIFAR experiments, the experiment design is reasonable for a general comparison.[design-POS], [CMP-POS, EMP-POS]",design,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 2203,"Similar experimental setups have been used in previous papers to report that a proposed method can achieve good results, but there is no doubt that this does not make a rigorous comparison without employing expensive hyper-parameter searches.[experimental setups-NEG, previous papers-NEG, proposed method-NEG, results-NEU], [EMP-NEG]",experimental setups,previous papers,proposed method,results,,,EMP,,,,,NEG,NEG,NEG,NEU,,,NEG,,,, 2205,"Nevertheless, it is important to note that direct comparison should not be made among approaches with key differences.[comparison-NEG, approaches-NEG], [CMP-NEG]",comparison,approaches,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 2206,"For the reported results, Fitnets and Highway Networks did not use Batch Normalization (which is a powerful regularizer) while VANs and Resnets do.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2207,"Moreover, it is important to report the training performance of deeper VANs (which have a worse generalization error) to clarify if the VANs suffered difficulties in optimization or generalization.[performance-NEU], [SUB-NEU]",performance,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2208,"Clarity: The paper is generally well-written and easy to read.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2209,"There are some clarity issues related to the use of the term activation function and a typo in an equation but the authors are already aware of these.[typo-NEG], [CLA-NEG]]",typo,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2216,"Strong points --- + The proposed approach is simple and largely intuitive: essentially the context matrix allows word-specific contextualization.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2217,"Further, the work is clearly presented.[work-POS], [CLA-POS]",work,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2218,"+ At the very least the model does seem comparable in performance to various recent methods (as per Table 2), however as noted below the gains are 
marginal and I have some questions on the setup.[model-POS, performance-NEU], [EMP-NEU]",model,performance,,,,,EMP,,,,,POS,NEU,,,,,NEU,,,, 2219,"+ The authors perform ablation experiments, which are always nice to see.[ablation experiments-POS], [EMP-POS]",ablation experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2220,"Weak points --- - I have a critical question for clarification in the experiments. [experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2221,"The authors write 'Optimal hyperparameters are tuned with 10% of the training set on Yelp Review Full dataset, and identical hyperparameters are applied to all datasets' -- is this true for *all* models, or only the proposed approach?[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2222,"- The gains here appear to be consistent, but they seem marginal.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2223,"The biggest gain achieved over all datasets is apparently .7, and most of the time the model very narrowly performs better (.2-.4 range).[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2224,"Moreoever, it is not clear if these results are averaged over multiple runs of SGD or not (variation due to initialization and stochastic estimation can account for up to 1 point in variance -- see A sensitivity analysis of (and practitioners guide to) CNNs... Zhang and Wallace, 2015.)[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2225,"- The related work section seems light.[related work-NEG], [CMP-NEG, SUB-NEG]",related work,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 2226,"For instance, there is no discussion at all of LSTMs and their application to text classificatio (e.g., Tang et al., EMNLP 2015) -- although it is noted that the authors do compare against D-LSTM, or char-level CNNs for the same (see Zhang et al., NIPs 2015). Other relevant work not discussed includes Iyyer et al. (ACL 2015). In their respective ways, these papers address some of the same issues the authors consider here.[discussion-NEG], [SUB-NEG, CMP-NEG]",discussion,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 2227,"- The two approaches to inducing the final region embedding (word-context and then context-word in sections 3.2 and 3.3, respectively) feel a bit ad-hoc.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2228,"I would have appreciated more intuition behind these approaches.[intuition-NEU], [SUB-NEU]",intuition,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2229,"Small comments --- There is a typo in Figure 4 -- Howerver should be However[typo-NEG], [CLA-NEG]",typo,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2230,"*** Update after author response *** Thanks to the authors for their responses. My score is unchanged.[score-NEU], [REC-NEU]",score,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 2236,"The paper is well written and provides some new insights on incorporating kernels in CNN.[paper-POS, insights-POS], [CLA-POS, NOV-POS]",paper,insights,,,,,CLA,NOV,,,,POS,POS,,,,,POS,POS,,, 2237,"The kernel matrix in Eq. 5 is not symmetric and the kernel function in Eq. 
3 is not defined over a pair of inputs.[Eq-NEG], [EMP-NEG]",Eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2238,"In this case, the projections of the data via the kernel are not necessarily in a RKHS.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2239,"The connection between Hilbert maps and RKHS in that sense is not clear in the paper.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2242,"It is not clear how this issue is addressed in this paper.[issue-NEG], [SUB-NEG]",issue,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2243,"In section 2.2, how mu_i and sigma_i are computed?[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2244,"How the proposed approach can be compared to convolutional kernel networks (NIPS paper) of Mairal et al. (2014)?[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2249,"The paper is generally well written and most details for reproducibility are seem enough[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2251,". It is of course not entirely surprising that the system can be trained but that there is some form of generalization happening.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2253,". I suspect in most cases practical systems will be adapted with many subsequent levels of preprocessing, ensembling, non-standard data and a number of optimization and architectural tricks that are developer dependent.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2254,"It is really hard to say what a supervised learning meta-model approach such as the one presented in this work have to say about that case[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2255,". I have found it hard to understand what table 3 in section 4.2 actually means[table-NEG, section-NEG], [CLA-NEG]",table,section,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2256,". It seems to say for instance that a model is trained on 2 and 3 layers then queried with 4 and the accuracy only slightly drops[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2257,". Accuracy of what ? Is it the other attributes ? Is it somehow that attribute ? if so how can that possibly ? [Accuracy-NEU], [EMP-NEU]",Accuracy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2258,"My main main concern is extrapolation out of the training set which is particularly important here. I don't find enough evidence in 4.2 for that point.[evidence-NEG], [EMP-NEG, SUB-NEG]",evidence,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 2259,"One experiment that i would find compelling is to train for instance a meta model on S,V,B,R but not D on imagenet, predict all the attributes except architecture and see how that changes when D is added[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2260,". If these are better than random and the perturbations are more successful it would be a much more compelling story. 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2262,"Paper is well written and clearly explained.[Paper-POS], [CLA-POS]",Paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2263,"The paper is a experimental paper as it has more content on the experimentation[paper-NEU, experimentation-NEU], [SUB-NEU]",paper,experimentation,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2264,"and less content on problem definition and formulation.[problem-NEU], [SUB-NEU]",problem,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2265,"The experimental section is strong and it has evaluated across different datasets and various scenarios.[experimental section-POS, datasets-NEU], [SUB-POS, EMP-POS]",experimental section,datasets,,,,,SUB,EMP,,,,POS,NEU,,,,,POS,POS,,, 2266,"However, I feel the contribution of the paper toward the topic is incremental and not significant enough to be accepted in this venue.[contribution-NEG], [REC-NEG, APR-NEG, IMP-NEG]",contribution,,,,,,REC,APR,IMP,,,NEG,,,,,,NEG,NEG,NEG,, 2267,"It only considers a slight modification into the loss function by adding a trace norm regularization.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2270,"As a result, a gradient descent algorithm converges to the unique solution.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2272,"While it is clearly written, my main concern is whether this model is significant enough.[model-NEU], [CLA-POS, IMP-NEU]",model,,,,,,CLA,IMP,,,,NEU,,,,,,POS,NEU,,, 2273,"The assumptions K 2 and v1 v2 1 reduces the difficulty of the analysis,[assumptions-POS, analysis-NEU], [EMP-POS]",assumptions,analysis,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 2274,"but it makes the model considerably simpler than any practical setting. [model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2278,"The proposed model and method are reasonably original and novel.[proposed model-POS, method-POS], [NOV-POS]",proposed model,method,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 2279,"The paper is well written and the method reasonably well explained[paper-POS, method-POS], [CLA-POS, EMP-POS]",paper,method,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 2280,"(I would add an explanation of the spectral estimation in the Appendix, rather than just citing Rodu et al. 
2013).[explanation-NEU, Appendix-NEU], [SUB-NEU]",explanation,Appendix,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2281,"Additional experimental results would make it a stronger paper.[experimental results-NEU], [SUB-NEU]",experimental results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2282,"It would be great if the authors could include the code that implements the model.[code-NEU, model-NEU], [SUB-NEU]",code,model,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2289,"I found the first contribution is sound, and it reasonably explains why RAML achieves better performance when measured by a specific metric.[contribution-POS, performance-POS], [EMP-POS]",contribution,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2293,"Of course, the moving-out is biased but the replacing is unbiased.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2294,"The second contribution is partially valid,[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2295,"although I doubt how much improvement one can get from SQDML.[improvement-NEU], [EMP-NEU]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2297,"In fact, this step can result in biased estimation because the replacement is inside the nonlinear function.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2298,"When x is repeated sufficiently in the data, this bias is small and improvement can be observed, like in the synthetic data example.[improvement-NEU], [EMP-NEU]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2299,"However, when x is not repeated frequently, both RAML and SQDML are biased.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2300,"Experiment in section 4.1.2 do not validate significant improvement, either.[Experiment-NEU, section-NEU, improvement-NEG], [EMP-NEG]",Experiment,section,improvement,,,,EMP,,,,,NEU,NEU,NEG,,,,NEG,,,, 2301,"The numerical results are relatively weak.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2303,"However, from Figure 2, we can see that the result is quite sensitive to the temperature tau.[Figure-NEU, result-NEU], [EMP-NEU]",Figure,result,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2304,"Is there any guidelines to choose tau?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2305,"For experiments in Section 4.2, all of them are to show the effectiveness of RAML, which are not very relevant to this paper.[experiments-NEU, Section-NEU], [EMP-NEG]",experiments,Section,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 2306,"These experiment results show very small improvement compared to the ML baselines (see Table 2,3 and 5).[experiment results-NEG, improvemenT-NEG, baselines-NEU, Table-NEU], [EMP-NEG]",experiment results,improvemenT,baselines,Table,,,EMP,,,,,NEG,NEG,NEU,NEU,,,NEG,,,, 2307,"These results are also lower than the state of the art performance.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2308,"A few questions: (1). The author may want to check whether (8) can be called a Bayes decision rule.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2309,"This is a direct result from definition of conditional probability.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2310,"No Bayesian elements, like prior or likelihood appears here.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2311,"(2). In the implementation of SQDML, one can sample from (15) without exactly computing the summation in the denominator.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2312,"Compared with the n-gram replacement used in the paper, which one is better?[null], [CMP-NEU, EMP-NEU]",null,,,,,,CMP,EMP,,,,,,,,,,NEU,NEU,,, 2313,"(3). 
The authors may want to write Eqn. 17 in the same conditional form of Eqn. 12 and Eqn. 14.[Eqn-NEU], [PNF-NEU]",Eqn,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2314,"This will make the comparison much more clear.[comparison-NEU], [EMP-NEU]",comparison,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2315,"(4). What is Theorem 2 trying to convey?[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2316,"Although tau goes to 0, there is still a gap between Q and Q'.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2317,"This seems to suggest that for small tau, Q' is not a good approximation of Q.[approximation-NEG], [EMP-NEG]",approximation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2318,"Are the assumptions in Theorem 2 reasonable?[assumptions-NEU, Theorem-NEU], [EMP-NEU]",assumptions,Theorem,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2319,"There are several typos in the proof of Theorem 2.[typos-NEG, Theorem-NEG], [CLA-NEG]",typos,Theorem,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2321,"Could you explain it in more details? [details-NEU], [SUB-NEU]",details,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2326,". I had a couple of reservations however: * The empirical improvements from the method seem pretty marginal, to the point that it's difficult to know what is really helping the model.[empirical improvements-NEG], [EMP-NEU]",empirical improvements,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 2327,"I would liked to have seen more explanation of what the model has learned, and more comparisons to other baselines that make use of attention over spans.[explanation-NEU, comparisons-NEU], [EMP-NEU, SUB-NEU, CMP-NEU]",explanation,comparisons,,,,,EMP,SUB,CMP,,,NEU,NEU,,,,,NEU,NEU,NEU,, 2328,"For example, what happens if every span is considered as an independent random variable, with no use of a tree structure or the CKY chart?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2329,"* The use of the alpha^0 vs. alpha^1 variables is not entirely clear.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 2330,"Once they have been calculated in Algorithm 1, how are they used?[Algorithm-NEU], [EMP-NEU]",Algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2331,"Do the rho values somewhere treat these two quantities differently?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2332,"* I'm skeptical of the type of qualitative analysis in section 4.3, unfortunately.[qualitative analysis-NEU], [EMP-NEU]",qualitative analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2333,"I think something much more extensive would be interesting here. As one example, the PP attachment example with at a large venue is highly suspect; there's a 50/50 chance that any attachment like this will be correct, there's absolutely no way of knowing if the model is doing something interesting/correct or performing at a chance level, given a single example. 
[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2339,"The paper suggests an interesting approach and provides experimental evidence for its usefulness, especially for multi-layer AEs.[approach-POS, experimental evidence-POS], [EMP-POS, NOV-POS]",approach,experimental evidence,,,,,EMP,NOV,,,,POS,POS,,,,,POS,POS,,, 2340,"Some comments on the theoretical part: -The theoretical part is partly misleading.[theoretical part-NEG], [EMP-NEG]",theoretical part,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2341,"While it is true that every layer can be treated a generalized linear model, the SLQC property only applies for the last layer.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2342,"Regarding the intermediate layers, we may indeed treat them as generalized linear models, but with non-monotone activations, and therefore the SLQC property does not apply.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2343,"The authors should mention this point.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2344,"-Showing that generalized ReLU is SLQC with a polynomial dependence on the domain is interesting.[generalized ReLU-POS], [EMP-POS]",generalized ReLU,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2345,"-It will be interesting if the authors can provide an analysis/relate to some theory related to alternating minimization of bi-quasi-convex objectives.[analysis-NEU], [SUB-NEU]",analysis,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2346,"Concretely: Is there any known theory for such objectives?[theory-NEU, objectives-NEU], [CMP-NEU]",theory,objectives,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 2347,"What guarantees can we hope to achieve?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2348,"The extension to muti-layer AEs makes sense and seems to works quite well in practice[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2349,". The experimental part is satisfactory, and seems to be done in a decent manner.[experimental part-POS], [EMP-POS]",experimental part,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2350,"It will be useful if the authors could relate to the issue of parameter tuning for their algorithm.[issue-NEU, algorithm-NEU], [EMP-NEU]",issue,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2351,"Concretely: How sensitive/robust is their approach compared to SGD with respect to hyperparameter misspecification. 
[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2354,"Inspired by recent works on large-batch studies, the paper suggests to adapt the learning rate as a function of the batch size.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2355,"I am interested in the following experiment to see how useful it is to increase the batch size compared to fixed batch size settings.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2356,"1) The total budget / number of training samples is fixed.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2361,"5) Learning rates and their drops should be rescaled taking into account the schedule of the batch size and the rules to adapt learning rates in large-scale settings as by Goyal.[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2364,"The paper still needs some work on clarity, and authors defer the changes to the next version (but as I understood, they did no changes for this paper as of now), which is a bit frustrating.[clarity-NEU], [CLA-NEG]",clarity,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 2365,"However I am fine accepting it.[null], [REC-POS]",null,,,,,,REC,,,,,,,,,,,POS,,,, 2368,"Authors demonstrate empirically that this particular learning problem is hard for SGD with l2 loss (due to apparently bad local optima) and suggest two ways of addressing it, on top of the known way of dealing with this problem (which is overparameterization).[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2370,"Overall the paper is well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2372,"I do find interesting the formulation of population risk in terms of tensor decomposition, this is insightful[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2373,"My issues with the paper are as follows: - The loss function designed seems overly complicated.[issues-NEG], [EMP-NEG]",issues,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2374,"On top of that authors notice that to learn with this loss efficiently, much larger batches had to be used.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2375,"I wonder how applicable this in practice - I frankly didn't see insights here that I can apply to other problems that don't fit into this particular narrowly defined framework.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2376,"- I do find it somewhat strange that no insight to the actual problem is provided (e.g. 
it is known empirically but there is no explanation of what actually happens and there is a idea that it is due to local optima), but authors are concerned with developing new loss function that has provable properties about global optima.[explanation-NEG], [EMP-NEG]",explanation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2377,"Since it is all empirical, the first fix (activation function) seems sufficient to me and new loss is very far-fetched.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2378,"- It seems that changing activation function from relu to their proposed one fixes the problem without their new loss, so i wonder whether it is a problem with relu itself and may be other activations funcs, like sigmoids will not suffer from the same problem[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2379,"- No comparison with overparameterization in experiments results is given, which makes me wonder why their method is better.[comparison-NEG, method-NEU], [CMP-NEG]",comparison,method,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 2380,"Minor: fix margins in formula 2.7.[formula-NEU], [PNF-NEG]",formula,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 2387,"Originality: The paper heavily depends on the approach followed by Brutzkus and Globerson, 2017.[approach-NEU], [NOV-NEU]",approach,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2388,"To this end, slighly novel.[novel-NEU], [NOV-NEU]",novel,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2389,"Importance: Understanding the landscape (local vs global minima vs saddle points) is an important direction in order to further understand when and why deep neural networks work.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 2390,"I would say that the topic is an important one.[topic-POS], [IMP-POS]",topic,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2391,"Presentation/Clarity: To the best of my understanding, the paper has some misconceptions.[paper-NEG], [CLA-NEU]",paper,,,,,,CLA,,,,,NEG,,,,,,NEU,,,, 2392,"The title is not clear whether the paper considers a two layer RELU network or a single layer with with two RELU units.[clear-NEG, paper-NEU], [CLA-NEG]",clear,paper,,,,,CLA,,,,,NEG,NEU,,,,,NEG,,,, 2394,"Later on, in Section 3, the expression at the bottom of page 2 seems to consider a single-layer RELU network, with two units.[Section-NEU], [CLA-NEG]",Section,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 2395,"These are crucial for understanding the contribution of the paper; while reading the paper, I assumed that the authors consider the case of a single hidden unit with K 2 RELU activations (however, that complicated my understanding on how it compares with state of the art).[contribution-NEU], [CMP-NEU]",contribution,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2396,"Another issue is the fact that, on my humble opinion, the main text looks like a long proof.[main text-NEG], [PNF-NEG]",main text,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2397,"It would be great to have more intuitions.[intuitions-NEU], [SUB-NEU]",intuitions,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2398,"Comments: 1. The paper mainly focuses on a specific problem instance, where the weight vectors are unit-normed and orthogonal to each other.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2399,"While the authors already identify that this might be a restriction, it still does not lessen the fact that the configuration considered is a really specific one.[restriction-NEU], [EMP-NEG]",restriction,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2400,"2. 
The paper reads like a collection of lemmas, with no verbose connection.[lemmas-NEG], [PNF-NEG]",lemmas,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2401,"It was hard to read and to understand their value, largely because the text was structured as one lemma after the other.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2402,"3. It is not clear from the text whether the setting is already considered in Brutzkus and Globerson, 2017.[text-NEG], [CLA-NEG]",text,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2403,"Please clarify how your work is different/new from previous works. [work-NEU, previous works-NEU], [CMP-NEG]",work,previous works,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 2407,"The authors demonstrate superior performance on a variety of benchmark problems, including those for supervised classification and for sequential decision making.[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2409,"The experiment section definitely demonstrates the effort put into this work.[experiment section-POS], [SUB-POS]",experiment section,,,,,,SUB,,,,,POS,,,,,,POS,,,, 2410,"However, my primary concern is that the model seems somewhat lacking in novelty.[model-NEG], [NOV-NEG]",model,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2411,"Namely, it interweaves Vaswani-style attention with temporal convolutions (along with TRPO).[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 2412,"The authors claim that the Vaswani model does not incorporate positional information, but from my understanding, it actually does so using positional encoding.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 2413,"I also do not see why the Vaswani model cannot be lightly adapted for sequential decision making.[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2414,"I think comparison to such a similar model would strengthen the novelty of this paper (e.g. 
convolution is a superior method of incorporating positional information).[comparison-NEU], [NOV-NEU, CMP-NEU]",comparison,,,,,,NOV,CMP,,,,NEU,,,,,,NEU,NEU,,, 2415,"My second concern is that the authors do not provide analysis and/or intuitions on why the proposed models outperform prior art in few-shot learning.[analysis-NEG, models-NEU], [SUB-NEG]",analysis,models,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 2416,"I think this information would be very useful to the community in terms of what to take away from this paper.[information-NEU], [IMP-NEU]",information,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 2417,"In retrospect, I wish the authors would have spent more time doing ablation studies than tackling more task domains.[ablation studies-NEU], [SUB-NEG]",ablation studies,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 2418,"Overall, I am inclined to accept this paper on the basis of its experimental results.[paper-POS, experimental results-POS], [REC-POS]",paper,experimental results,,,,,REC,,,,,POS,POS,,,,,POS,,,, 2419,"However I am willing to adjust my review according to author response and the evaluation of the experiment section by other reviewers (who are hopefully more experienced in this domain).[review-NEG, evaluation-NEU, experiment section-NEU], [REC-NEU]",review,evaluation,experiment section,,,,REC,,,,,NEG,NEU,NEU,,,,NEU,,,, 2420,"Some minor feedback/questions for the authors: - I would prefer mathematical equations as opposed to pseudocode formulation[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 2421,"- In the experiment section for Omniglot, when the authors say 1200 classes for training and 432 for testing, it sounds like the authors are performing zero-shot learning.[experiment section-NEU], [EMP-NEU]",experiment section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2422,"How does this particular model generalize to classes not seen during training?[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2425,"Several complex but discrete control tasks, with relatively small action spaces, are cast as continuous control problems, and the task specific module is trained to produce non-linear representations of goals in the domain of transformed high-dimensional inputs.[problems-NEU, task-NEU], [EMP-NEU]",problems,task,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2426,"Pros - ""Monolithic"" policy representations can make it difficult to reuse or jointly represent policies for related tasks in the same environment; a modular architecture is hence desirable.[architecture-POS], [EMP-POS]",architecture,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2428,"- Despite all the suggestions and questions below, the method is clearly on par with standard A3C across a wide range of tasks, which makes it an attractive architecture to explore further.[method-POS], [IMP-POS, EMP-POS]",method,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 2429,"Cons - In general, learning a Path function could very well turn out to be no simpler than learning a good policy for the task at hand.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2430,"I have 2 main concerns: The data required for learning a good Path function may include similar states to those visited by some optimal policy.[data-NEG], [EMP-NEG]",data,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2431,"However, there is no such guarantee for random walks; indeed, for most Atari games which have several levels, random policies don't reach beyond the first level, so I don't see how a Path function would be informative beyond the 'portions' of the state space which were visited by policies used to collect data.[data-NEU, function-NEG], 
[EMP-NEG]",data,function,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 2434,"How can we ensure that some optimal policy can still be represented using appropriate Goal function outputs?[policy-NEU], [EMP-NEU]",policy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2435,"I don't see this as a given in the current formulation.[current formulation-NEG], [SUB-NEG]",current formulation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2436,"- Although the method is atypical compared to standard HRL approaches, the same pitfalls may apply, especially that of 'option collapse': given a fixed Path function, the Goal function need only figure out which goal state outputs almost always lead to the same output action in the original action space, irrespective of the current state input phi(s), and hence bypass the Path function altogether; then, the role of phi(s) could be taken by tau(s), and we would end up with the original RL problem but in an arguably noisier (and continuous) action space.[method-NEG], [CMP-NEG]",method,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2439,"- The ability to use state restoration for Path function learning is actually introducing a strong extra assumption compared to standard A3C, which does not technically require it.[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2440,"For cheap emulators and fully deterministic games (Atari) this assumption holds, but in general restoring expensive, stochastic environments to some state is hard (e.g. robot arms playing ping-pong, ball at given x, y, z above the table, with given velocity vector).[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2441,"- If reported results are single runs, please replace with averages over several runs, e.g. a few random seeds.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2442,"Given the variance in deep RL training curves, it is hard to make definitive claims from single runs.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2443,"If curves are already averages over several experiment repeats, some form of error bars or variance plot would also be informative.[error bars-NEG, plot-NEG], [SUB-NEG]",error bars,plot,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 2444,"- How much data was actually used to learn the Path function in each case?[data-NEG], [SUB-NEG]",data,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2445,"If the amount is significant compared to task-specific training, then UA/A3C-L curves should start later than standard A3C curves, by that amount of data.[amount-NEU, data-NEU], [EMP-NEU]",amount,data,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2453,"The experiments on WSJ dataset are promising towards achieving a trade-off between number of parameters and accuracy.[experiments-POS, accuracy-POS], [EMP-POS]",experiments,accuracy,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2454,"I have the following questions regarding the experiments: 1. Could the authors confirm that the reported CERS are on validation/test dataset and not on train/dev data?[experiments-NEU, dataset-NEU], [SUB-NEU]",experiments,dataset,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2455,"It is not explicitly stated.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2456,"I hope it is indeed the former, else I have a major concern with the efficacy of the algorithm as ultimately, we care about the test performance of the compressed models in comparison to uncompressed model.[algorithm-NEU, performance-NEU], [EMP-NEG]",algorithm,performance,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 2457,"2. 
In B.1 the authors use an increasing number units in the hidden layers of the GRUs as opposed to a fixed size like in Deep Speech 2, an obvious baseline that is missing from the experiments is the comparison with *exact* same GRU (with 768, 1024, 1280, 1536 hidden units) *without any compression*.[baseline-NEU], [SUB-NEG, CMP-NEG]",baseline,,,,,,SUB,CMP,,,,NEU,,,,,,NEG,NEG,,, 2458,"3. What do different points in Fig 3 and 4 represent.[Fig-NEU], [PNF-NEU]",Fig,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2459,"What are the values of lamdas that were used to train (the l2 and trace norm regularization) the Stage 1 of models shown in Fig 4.[models-NEU, Fig-NEU], [EMP-NEU]",models,Fig,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2460,"I want to understand what is the difference in the two types of behavior of orange points (some of them seem to have good compression while other do not - it the difference arising from initialization or different choice of lambdas in stage 1.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2461,"It is interesting that although L2 regularization does not lead to low u parameters in Stage 1, the compression stage does have comparable performance to that of trace norm minimization.[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2462,"The authors point it out, but a further investigation might be interesting.[investigation-NEU], [SUB-NEU]",investigation,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2463,"Writing: 1. The GRU model for which the algorithm is proposed is not introduced until the appendix.[algorithm-NEU], [PNF-NEG]",algorithm,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 2464,"While it is a standard network, I think the details should still be included in the main text to understand some of the notation referenced in the text like ""lambda_rec"" and ""lambda_norec""[details-NEU, notation-NEU], [SUB-NEU, PNF-NEU]",details,notation,,,,,SUB,PNF,,,,NEU,NEU,,,,,NEU,NEU,,, 2471,"However, this manuscript is not polished enough for publication: it has too many language errors and imprecisions which make the paper hard to follow.[manuscript-NEG, language errors-NEG], [CLA-NEG]",manuscript,language errors,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2472,"In particular, there is no clear definition of problem formulation, and the algorithms are poorly presented and elaborated in the context.[problem formulation-NEG, algorithms-NEG], [EMP-NEG, PNF-NEG]",problem formulation,algorithms,,,,,EMP,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 2473,"Pros: - The network compression problem is of general interest to ICLR audience.[problem-POS], [IMP-POS]",problem,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2474,"Cons: - The proposed approach follows largely the existing work and thus its technical novelty is weak. [proposed approach-NEG, technical novelty-NEG], [NOV-NEG]",proposed approach,technical novelty,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 2475,"- Paper presentation quality is clearly below the standard.[presentation quality-NEG], [PNF-NEG]",presentation quality,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2476,"- Empirical results do not clearly show the advantage of the proposed method over state-of-the-arts.[Empirical results-NEG], [CMP-NEG]",Empirical results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2480,"The description of the model is laborious and hard to follow. 
Figure 1 helps, but it is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation.[description-NEG], [CLA-NEG]",description,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2481,"For instance, it only became clear at the end of the section that E was learned.[section-NEG], [CLA-NEG]",section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2483,"The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper is this assumption verified. [assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2484,"I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2485,"This should at least be verified and compared appropriately.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2486,"Another motivation of the paper is that targets are given as 1s or 0s while soft targets should work better[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2487,". I believe this is true, but there is a lot of prior work on these, such as adding a temperature to the softmax, or using distillation, etc. None of these are discussed appropriately in the paper.[prior work-NEG], [CMP-NEG, EMP-NEU]",prior work,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEU,,, 2488,"Section 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments. h is never discussed after section 2.2.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2489,"Experiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appendix D.[Experiments-POS, results-NEG], [CMP-NEU, EMP-NEG]",Experiments,results,,,,,CMP,EMP,,,,POS,NEG,,,,,NEU,NEG,,, 2491,". 
It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2496,"The paper is written well, good to understand, and technically sound.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2497,"I especially liked the general idea of using multiple modalities to improve embeddings of relational data.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2498,"This direction is not only interesting because of the improvements it brings for link prediction tasks, but also because it is a promising direction towards constructing commonsense knowledge knowledge graphs via grounded embeddings.[improvements-POS], [EMP-POS]",improvements,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2499,"The technical novelty of the paper is somewhat limited, as the proposed method consists of a mostly straightforward combination of existing methods.[proposed method-NEU], [NOV-NEU, EMP-NEU]",proposed method,,,,,,NOV,EMP,,,,NEU,,,,,,NEU,NEU,,, 2502,"This reference should be included in the related work.[reference-NEU, related work-NEU], [CMP-NEU]",reference,related work,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 2503,"The authors mention also in the last sentence of Section 3 that previous approaches cannot handle missing data or uncertainty.[Section-NEU, previous approaches-NEU], [CMP-NEU]",Section,previous approaches,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 2504,"This claim needs to be discussed clearer as it is not clear to me why this would be the case.[claim-NEU], [EMP-NEU]",claim,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2505,"With regard to the evaluation: Overall, I found the evaluation to be good, especially with regard to the different ablations.[evaluation-POS], [EMP-POS]",evaluation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2506,"However, it would be nice to see results for more sophisticated models than DistMult (which, due to its symmetry, shouldn't be used on directed graphs anyway) as the improvements that can be gained might be less for these models.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2507,"It would also be interesting to see how predictions using only the non-symbolic modalities would do (e.g. in Table 3).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2508,"Furthermore, Section 5.3 would clearly benefit from a better analysis and discussion, as it isn't very informative in its current form and the analysis is quite hand-wavy (e.g. 
two of the predicted titles for Die Hard have something to do with dying and being buried).[Section-NEU, analysis-NEU, discussion-NEU], [EMP-NEU, SUB-NEU]",Section,analysis,discussion,,,,EMP,SUB,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 2509,"Further comments: - The proposed method to incorporate numerical data seems quite ad hoc.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2511,"- Are the image features fixed or learned?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2512,"In the later case: how much do the results change with pretrained CNNs (e.g., on ImageNet).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2514,"- Since the datasets are newly introduced, it would be good to provide a more detailed analysis of their characteristics.[datasets-NEU, analysis-NEU], [SUB-NEU]",datasets,analysis,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2519,"This type of study is important to give perspective to non-standardized performance scores reported across separate publications,[study-POS], [IMP-POS]",study,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2520,"and indeed the results here are interesting as they favour relatively simpler structures.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2521,"I have a favourable impression of this paper[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 2526,"Positives - Using lower precision activations to save memory and compute seems new and widening the filter sizes seems to recover the accuracy lost due to the lower precision.[accuracy-POS], [EMP-POS]",accuracy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2527,"Negatives - While the exhaustive analysis is extremely useful the overall technical contribution of the paper that of widening the networks is fairly small.[contribution-NEG], [SUB-NEG]",contribution,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2529,"However, the results are more focused on compute cost.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2530,"Also large batches are used mainly during training where memory is generally not a huge issue.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2532,"It might help to emphasize the speed-up in compute more in the contributions.[contributions-NEU], [EMP-NEU]]",contributions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2538,"I found the paper difficult to read.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2539,"The concrete mappings used to create the NE keys and attention keys are missing.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2540,"Providing more structure to the text would also be useful vs. long, wordy paragraphs.[text-NEU], [PNF-NEU]",text,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2544,"The authors should include the exact model specification, including for the HRED model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2546,"Is there a guarantee that a same named entity, appearing later in the dialog, will be given the same key?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2547,"Or are the keys for already found entities retrieved directly, by value?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2548,"3. In the decoding phase, how does the system decide whether to query the DB?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2549,"4. 
How is the model trained?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2550,"In its current form, it's not clear how the proposed approach tackles the shortcomings mentioned in the introduction.[proposed approach-NEU], [EMP-NEG]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2551,"Furthermore, while the highlighted contribution is the named entity table, it is always used in conjunction to the database approach.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2552,"This raises the question whether the named entity table can only work in this context.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2553,"For the structured QA task, there are 400 training examples, and 100 named entities. This means that the number of training examples per named entity is very small.[training examples-NEU], [SUB-NEG]",training examples,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 2555,"If yes, then it's not very surprising that adding the named entities to the vocabulary leads to overfitting.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2556,"Have you compared with using random embeddings for the named entities?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2557,"Typos: page 2, second-to-last paragraph: firs -> first, page 7, second to last paragraph: and and -> and.[Typos-NEG], [CLA-NEG]",Typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2560,"The model is shown to improve on the published results but not as-of-submission leaderboard numbers.[model-POS], [IMP-NEU]",model,,,,,,IMP,,,,,POS,,,,,,NEU,,,, 2561,"The main weakness of the paper in my opinion is that the innovations seem to be incremental and not based on any overarching insight or general principle.[innovations-NEU], [EMP-NEU]",innovations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2562,"A less significant issue is that the English is often disfluent.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 2563,"Specific comments: I would remove the significance daggers from table 2 as the standard deviations are already reported and the null hypothesis for which significance is measured seems unclear.[table-NEU], [EMP-NEG, PNF-NEU]",table,,,,,,EMP,PNF,,,,NEU,,,,,,NEG,NEU,,, 2564,"I am also concerned to see test performance significantly better than development performance in table 3.[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2565,"Other systems seem to have development and test performance closer together. 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2566,"Have the authors been evaluating many times on the test data?[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 2574,"Two local minima are observed: 1) the network ignores stucture and guesses if the task is solvable by aggregate statistics[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2575,"2) it works as described above but propagates the rechable region on a checkerboard only.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2576,"The paper is chiefly concerned with analysing these local minima by expanding the cost function about them.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2577,"This analysis is hard to follow for non experts graph theory.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2578,"This is partly because many non-trivial results are mentioned with little or no explanation.[results-NEG, explanation-NEG], [SUB-NEG]",results,explanation,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 2579,"The paper is hard to evaluate.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2580,"The actual setup seems somewhat arbitrary,[setup-NEG], [EMP-NEU]",setup,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 2581,"but the method of analysing the failure modes is interesting.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2582,"It may inspire more useful research in the future.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 2583,"If we trust the authors, then the paper seems good because it is fairly unusual.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2584,"But it is hard to determine whether the analysis is correct.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2588,"Clarity, Significance and Correctness -------------------------------------------------- Clarity: Excellent[Clarity-POS], [CLA-POS]",Clarity,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2589,"Significance: I'm not familiar with the literature of differential privacy, so I'll let more knowledgeable reviewers evaluate this point.[Significance-NEU], [IMP-NEU]",Significance,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 2590,"Correctness: The paper is technically correct.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2591,"Questions -------------- 1. Figure 1: Adding some noise to the updates could be view as some form of regularization, so I have trouble understand why the models with noise are less efficient than the baseline.[Figure-NEU, models-NEG, baseline-NEU], [EMP-NEG]",Figure,models,baseline,,,,EMP,,,,,NEU,NEG,NEU,,,,NEG,,,, 2592,"2. Clipping is supposed to help with the exploding gradients problem.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2593,"Do you have an idea why a low threshold hurts the performances?[performances-NEU], [EMP-NEU]",performances,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2594,"Is it because it reduces the amplitude of the updates (and thus simply slows down the training)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2595,"3. Is your method compatible with other optimizers, such as RMSprop or ADAM (which are commonly used to train RNNs)?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2596,"Pros ------ 1. Nice extensions to FederatedAveraging to provide privacy guarantee.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2597,"2. Strong experimental setup that analyses in details the proposed extensions.[experimental setup-POS], [EMP-POS]",experimental setup,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2598,"3. 
Experiments performed on public datasets.[Experiments-POS, public datasets-POS], [EMP-POS]",Experiments,public datasets,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2600,"Typos -------- 1. Section 2, paragraph 3 : is given in Figure 1 -> is given in Algorithm 1 [Section-NEU, paragraph-NEU, Figure-NEU, Algorithm-NEU], [PNF-NEG]",Section,paragraph,Figure,Algorithm,,,PNF,,,,,NEU,NEU,NEU,NEU,,,NEG,,,, 2601,"Note ------- Since I'm not familiar with the differential privacy literature, I'm flexible with my evaluation based on what other reviewers with more expertise have to say.[null], [REC-NEU]",null,,,,,,REC,,,,,,,,,,,NEU,,,, 2605,"The main idea of using upper bound (as opposed to lower bound) is reasonable.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2606,"However, I find there are some limitations/weakness of the proposed method: 1. The method is likely not extendable to more complicated and more practical networks, beyond the ones discussed in the paper (ie with one hidden layer) 2.[limitations-NEG, method-NEG], [EMP-NEG, IMP-NEG]",limitations,method,,,,,EMP,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 2607,"SDP while tractable, would still require very expensive computation to solve exactly.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2608,"3. The relaxation seems a bit loose - in particular, in above step 2 and 3, the authors replace the gradient value by a global upper bound on that, which to me seems can be pretty loose.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2612,"- A lot of important references touching on very similar ideas are missing.[references-NEG], [SUB-NEG, CMP-NEG]",references,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 2614,"- This paper has a lot of orthogonal details.[details-NEG], [EMP-NEG]",details,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2615,"For instance sec 2.1 reviews the history of games and AI, which is besides the key point and does not provide any literary context.[sec-NEG, literary context-NEG], [CMP-NEG]",sec,literary context,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 2616,"- Only single runs for the results are shown in plots.[results-NEG], [SUB-NEG]",results,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2617,"How statistically valid are the results?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2618,"- In the last section authors mention the intent to do future work on atari and other env.[last section-NEU], [IMP-NEU]",last section,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 2619,"Given that this general idea has been discussed in the literature several times, it seems imperative to at least scale up the experiments before the paper is ready for publication[idea-NEU, literature-NEU, experiments-NEU], [REC-NEU, EMP-NEU]",idea,literature,experiments,,,,REC,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 2625,"The paper revisits mostly familiar ideas.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2626,"The importance of preserving both local and global information in manifold learning is well known, so unclear what the main conceptual novelty is.[conceptual novelty-NEG], [NOV-NEG]",conceptual novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2627,"This reviewer does not believe that modifying the loss function of a well established previous method that is over 10 years old (DrLIM) constitutes a significant enough contribution.[contribution-NEG], [REC-NEG]",contribution,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 2628,"Moreover, in this reviewer's experience, the major challenge is to obtain proper estimates of the geodesic distances between far-away points on the manifold, and such an estimation is simply too difficult for any 
reasonable dataset encountered in practice.[challenge-NEG, estimation-NEG], [EMP-NEG]",challenge,estimation,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2629,"However, the authors do not address this, and instead simply use the Isomap approach for approximating geodesics by graph distances, which opens up a completely different set of challenges (how to construct the graph, how to deal with holes in the manifold, how to avoid short circuiting in the all-pairs shortest path computations etc etc).[approach-NEG, challenges-NEG], [SUB-NEG, EMP-NEG]",approach,challenges,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 2630,"Finally, the experimental results are somewhat uninspiring.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2631,"It seems that the proposed method does roughly as well as Landmark Isomap (with slightly better generalization properties) but is slower by a factor of 1000x.[proposed method-NEG, slower-NEG], [EMP-NEG]",proposed method,slower,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2632,"The horizon articulation data, as well as the pose articulation data, are both far too synthetic to draw any practical conclusions.[data-NEG, conclusions-NEG], [EMP-NEG]]",data,conclusions,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2637,"Overall, I thought the paper was clearly written and extremely easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2638,"To the best of my knowledge, the method proposed by the authors is novel, and differs from traditional sentence generation (as an example) models because it is intended to produce continuous domain outputs.[method proposed-POS, models-POS], [NOV-POS, CMP-POS]",method proposed,models,,,,,NOV,CMP,,,,POS,POS,,,,,POS,POS,,, 2639,"Furthermore, the story of generating medical training data for public release is an interesting use case for a model like this, particularly since training on synthetic data appears to achieve not competitive but quite reasonable accuracy, even when the model is trained in a differentially private fashion.[accuracy-POS], [EMP-POS]",accuracy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2640,"My most important piece of feedback is that I think it would be useful to include a few examples of the eICU time series data, both real and synthetic.[examples-NEG], [SUB-NEG]",examples,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2642,"Are the synthetic time series clearly multimodal, or do they display some of the mode collapse behavior occasionally seen in GANs?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2643,"I would additionally like to see a few examples of the time series data at both the 5 minute granularity and the 15 minute granularity.[examples-NEG], [SUB-NEG]",examples,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2644,"You claim that downsampling the data to 15 minute time steps still captures the relevant dynamics of the data -- is it obvious from the data that variations in the measured variables are not significant over a 5 minute interval?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2645,"As it stands, this is somewhat an unknown, and should be easy enough to demonstrate.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2648,"I think the paper is borderline, leaning towards accept.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 2649,"I do want to note my other concerns: I suspect the theoretical results obtained here are somewhat restricted to the least-squares, autoencoder loss.[theoretical results-NEG], [EMP-NEG]",theoretical results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2650,"And note that the authors show that the 
proposed algorithm performs comparably to SGD, but not significantly better.[algorithm-POS], [EMP-NEG]",algorithm,,,,,,EMP,,,,,POS,,,,,,NEG,,,, 2651,"The classification result (Table 1) was obtained on the autoencoder features instead of training a classifier on the original inputs.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2652,"So it is not clear if the proposed algorithm is better for training the classifier, which may be of more interest.[algorithm-NEU], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2653,"This paper presents an algorithm for training deep neural networks.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2654,"Instead of computing gradient of all layers and perform updates of all weight parameters at the same time, the authors propose to perform alternating optimization on weights of individual layers.[null], [EMP-NEU, NOV-NEU]",null,,,,,,EMP,NOV,,,,,,,,,,NEU,NEU,,, 2655,"The theoretical justification is obtained for single-hidden-layer auto-encoders.[justification-POS], [EMP-NEU]",justification,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 2656,"Motivated by recent work by Hazan et al 2015, the authors developed the local-quasi-convexity of the objective w.r.t. the hidden layer weights for the generalized RELU activation.[recent work-NEU], [EMP-NEU]",recent work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2657,"As a result, the optimization problem over the single hidden layer can be optimized efficiently using the algorithm of Hazan et al 2015.[algorithm-POS], [EMP-NEU]",algorithm,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 2658,"This itself can be a small, nice contribution.[contribution-POS], [EMP-POS, IMP-POS]",contribution,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 2659,"What concerns me is the extension to multiple layers.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2660,"Some questions are not clear from section 3.4: 1.[questions-NEG, section-NEU], [CLA-NEG]",questions,section,,,,,CLA,,,,,NEG,NEU,,,,,NEG,,,, 2661,"Do we still have local-quasi-convexity for the weights of each layer, when there are multiple nonlinear layers above it?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2662,"A negative answer to this question will somewhat undermine the significance of the single-hidden-layer result.[significance-NEU, result-NEG], [IMP-NEG]",significance,result,,,,,IMP,,,,,NEU,NEG,,,,,NEG,,,, 2663,"2. Practically, even if the authors can perform efficient optimization of weights in individual layers, when there are many layers, the alternating optimization nature of the algorithm can possibly result in overall slower convergence.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2664,"Also, since the proposed algorithm still uses gradient based optimizers for each layer, computing the gradient w.r.t. lower layers (closer to the inputs) are still done by backdrop, which has pretty much the same computational cost of the regular backdrop algorithm for updating all layers at the same time.[algorithm-NEG], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 2665,"As a result, I am not sure if the proposed algorithm is on par with / faster than the regular SGD algorithm in actual runtime.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2666,"In the experiments, the authors plotted the training progress w.r.t. the minibatch iterations, I do not know if the minibatch iteration is a proxy for actual runtime (or number of floating point operations).[experiments-NEU], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2667,"3. 
In the experiments, the authors found the network optimized by the proposed algorithm generalize better than regular SGD.[experiments-NEU, algorithm-NEU], [EMP-NEU]",experiments,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2668,"Is this result consistent (across dataset, random initializations, etc), and can the authors elaborate the intuition behind? [result-NEU], [SUB-NEU, EMP-NEU]",result,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 2670,"This paper misses the point of what VAEs (or GANs, in general) are used for.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2671,"The idea of using VAEs is not to encode and decode images (or in general any input), but to recover the generating process that created those images so we have an unlimited source of samples.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2672,"The use of these techniques for compressing is still unclear and their quality today is too low.[techniques-NEG, quality-NEG], [EMP-NEG]",techniques,quality,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 2673,"So the attack that the authors are proposing does not make sense and my take is that we should see significant changes before they can make sense.[changes-NEG], [EMP-NEG]",changes,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2674,"But let's assume that at some point they can be used as the authors propose.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2675,"In which one person encodes an image, send the latent variable to a friend, but a foe intercepts it on the way and tampers with it so the receiver recovers the wrong image without knowing.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2676,"Now if the sender believes the sample can be tampered with, if the sender codes z with his private key would not make the attack useless?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2677,"I think this will make the first attack useless.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2678,"The other two attacks require that the foe is inserted in the middle of the training of the VAE.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2679,"This is even less doable, because the encoder and decoder are not train remotely.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2680,"They are train of the same machine or cluster in a controlled manner by the person that would use the system.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2681,"Once it is train it will give away the decoder and keep the encoder for sending information.[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2682,"The idea is clearly stated (but lacks some details) and I enjoyed reading the paper.[idea-POS, paper-POS], [CLA-POS, SUB-NEG]",idea,paper,,,,,CLA,SUB,,,,POS,POS,,,,,POS,NEG,,, 2684,"and the proposed scheme but I could not understand in which situation the proposed scheme works better.[proposed scheme-NEU], [EMP-NEG]",proposed scheme,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2685,"From the adversary's standpoint, it would be easier to manipulate inputs than latent variables.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2686,"On the other hand, I agree that sample-independent perturbation is much more practical than sample-dependent perturbation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2687,"In Section 3.1, the attack methods #2 and #3 should be detailed more.[Section-NEG], [SUB-NEG]",Section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2688,"I could not imagine how VAE and T are trained simultaneously.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2690,"How were these loss functions are combined?[null], 
[EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2691,"The final optimization problem that is used for training of the propose VAE should be formally defined.[problem-NEU], [SUB-NEU]",problem,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2692,"Also, the detailed specification of the VAE should be detailed.[detailed specification-NEU], [SUB-NEU]",detailed specification,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2693,"From figures in Figure 4 and Figure 5, I could see that the proposed scheme performs successfully in a qualitative manner,[figures-NEU, proposed scheme-POS], [EMP-POS]",figures,proposed scheme,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 2694,"however, it is difficult to evaluate the proposed scheme qualitatively without comparisons with baselines.[evaluate-NEU, comparisons-NEG], [CMP-NEG]",evaluate,comparisons,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 2696,"or some other sample-dependent attacks?[proposed scheme-NEU], [CMP-NEU]",proposed scheme,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2697,"Also, can you experimentally show that attacks on latent variables are more powerful than attacks on inputs? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2699,"The paper adds few operations after the pipeline for obtaining visual concepts from CNN as proposed by Wang et al. (2015).[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2700,"This latter paper showed how to extract from a CNN some clustered representations of the features of the internal layers of the network, working on a large training dataset.[paper-POS, training dataset-POS], [EMP-POS]",paper,training dataset,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2704,". The results a are convincing, even if they are not state of the art in all the trials.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2705,"The paper is very easy to follows, and the results are explained in a very simple way.[paper-POS, results-POS], [CLA-POS, PNF-POS]",paper,results,,,,,CLA,PNF,,,,POS,POS,,,,,POS,POS,,, 2706,"Few comments: The authors in the abstract should revise their claims, too strong with respect to a literature field which has done many advancements on the cnn interpretation (see all the literature of Andrea Vedaldi) and the literature on zero shot learning, transfer learning, domain adaptation and fine tuning in general.[abstract-NEG, literature-NEG], [EMP-NEG, SUB-NEG]",abstract,literature,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 2708,"This submission does not fit ICLR.[submission-NEG], [APR-NEG]",submission,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 2709,"- The center topic does not fit ICLR[center topic-NEG], [APR-NEG]",center topic,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 2710,". The main novelty is about using word pair embedding to improve the Topic model[novelty-NEU], [NOV-NEU]",novelty,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2713,"- No clear novelty[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2714,"- The experimental setup is problematic[experimental setup-NEG], [EMP-NEG]",experimental setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2715,". The authors filtered the number of words and word-pairs to very small[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2717,". - The baselines are not thorough and lack proper justifications[baselines-NEG], [SUB-NEG]",baselines,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2718,". - The experimental results are not properly presented, with many overlapping figures[experimental results-NEG, figures-NEG], [PNF-NEG]",experimental results,figures,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 2719,". 
No insights can be derived from the presented results.[results-NEG], [IMP-NEG]",results,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 2724,"While the idea may be novel and interesting,[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 2725,"its motivation is not clear for me.[motivation-NEG], [EMP-NEG]",motivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2726,"Is it for space? for speed? for expressivity of hypothesis spaces?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2728,"This means that they bring all necessary information for rebuilding their continuous counterpart.[information-NEU], [SUB-NEU]",information,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2729,"Hence, it is not clear why projecting them back into continuous functions is of interest.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2730,"Another point that is not clear or at least misleading is the so-called Hilbert Maps.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2733,"A proper embedding would have mapped $x$ into a function belonging to $mH$.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2734,"In addition, it seems that all computations are done into a ell^2 space instead of in the RKHS (equations 5 and 11).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2736,"and Equations (6) and (7) corresponds to learning these similarity functions.[Equations-NEG], [EMP-NEG]",Equations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2737,"As far as I remember, there exists also some paper from the nineties that learn the parameters of RBF networks but unfortunately I have not been able to google some of them.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2738,"Part 3 is the most interesting part of the paper,[Part-POS], [EMP-POS]",Part,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2739,"however it would have been great if the authors provide other kernel functions with closed-form convolution formula that may be relevant for learning.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 2740,"The proposed methodology is evaluated on some standard benchmarks in vision.[proposed methodology-NEU], [EMP-NEU]",proposed methodology,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2741,"While results are pretty good,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2742,"it is not clear how the various cluster sets have been obtained and what are their influence on the performances (if they are randomly initialized, it would be great to see standard deviation of performances with respect to initializations).[performances-NEU], [EMP-NEG]",performances,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 2743,"I would also be great to have intuitions on why a single continuous filter works betters than 20 discrete ones (if this behaviour is consistent accross initialization).[intuitions-NEU], [CMP-NEU]",intuitions,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2744,"On the overall, while the idea may be of interested,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2745,"the paper lacks in motivations in connecting to relevant previous works and in providing insights on why it works.[motivations-NEG], [EMP-NEG]",motivations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2746,"However, performance results seem to be competitive and that's the reader may be eager for insights.[performance results-POS], [EMP-POS]",performance results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2747,"minor comments --------------- * the paper employs vocabulary that is not common in ML.[vocabulary-NEU], [EMP-NEU]",vocabulary,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2749,"* Supposingly that the authors properly consider computation in RKHS, then Sigma_i should be definite positive 
right?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2750,"how update in (7) is guaranteed to be DP?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2751,"This constraints may not be necessary if instead they used proximity space representation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2761,"The theoretical analysis in the paper is straightforward, in some sense following from the definition.[theoretical analysis-NEU], [EMP-NEU]",theoretical analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2762,"The contribution of the paper is to posit these two conditions which can predict the existence of universal fooling perturbations, argue experimentally that they occur in (some) neural networks of practical interest.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2763,"One challenge in assessing the experimental claims is that practical neural networks are nonsmooth; the quadratic model developed from the hessian is only valid very locally.[experimental claims-NEU], [EMP-NEU]",experimental claims,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2764,"This can be seen in some of the illustrative examples in Figure 5: there *is* a coarse-scale positive curvature, but this would not necessarily come through in a quadratic model fit using the hessian.[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2765,"The best experimental evidence for the authors' perspective seems to be the fact that random perturbations from S_c misclassify more points than random perturbations constructed with the previous method.[experimental evidence-NEU], [EMP-POS]",experimental evidence,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 2766,"I find the topic of universal perturbations interesting, because it potentially tells us something structural (class-independent) about the decision boundaries constructed by artificial neural networks. [topic-POS], [IMP-POS]",topic,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2767,"To my knowledge, the explanation of universal perturbations in terms of positive curvature is novel.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 2768,"The paper would be much stronger if it provided an explanation of *why* there exists this common subspace of universal fooling perturbations, or even what it means geometrically that positive curvature obtains at every data point.[explanation-NEU], [EMP-NEU]",explanation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2769,"Visually, these perturbations seem to have strong, oriented local high-frequency content – perhaps they cause very large responses in specific filters in the lower layers of a network, and conventional architectures are not robust to this? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2770,"It would also be nice to see some visual representations of images perturbed with the new perturbations, to confirm that they remain visually similar to the original images.[visual representations-NEU], [CMP-NEU, PNF-NEU]",visual representations,,,,,,CMP,PNF,,,,NEU,,,,,,NEU,NEU,,, 2772,"This is a very well-written paper that shows how to successfully use (generative) autoencoders together with the (discriminative) domain adversarial neural network (DANN) of Ganin et al.[paper-POS], [CLA-POS, CMP-POS]",paper,,,,,,CLA,CMP,,,,POS,,,,,,POS,POS,,, 2773,"The construction is simple but nicely backed by a probabilistic analysis of the domain adaptation problem.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2774,"The only criticism that I have towards this analysis is that the concept of shared parameter between the discriminative and predictive model (denoted by zeta in the paper) disappear when it comes to designing the learning model.[concept-NEG], [EMP-NEG]",concept,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2776,"They successfully show that using autoencoder can help to learn a good representation for discriminative domain adaptation tasks.[tasks-POS], [EMP-POS]",tasks,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2777,"On the downside, all these experiments concern predictive (discriminative) problems.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2778,"Given the paper title, I would have expected some experiments in a generative context.[paper-NEG, experiments-NEG], [SUB-NEG]",paper,experiments,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 2779,"Also, a comparison with the Generative Adversarial Networks of Goodfellow et al. (2014) would be a plus.[comparison-NEU], [SUB-NEU, CMP-NEU]",comparison,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 2780,"I would also like to see the results obtained using DANN stacked on mSDA representations, as it is done in Ganin et al. 
(2016).[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2781,"Minor comments: - Paragraph below Equation 6: The meaning of $phi(psi)$ is unclear[Paragraph-NEG, Equation-NEG, meaning-NEG], [PNF-NEG, CLA-NEG]",Paragraph,Equation,meaning,,,,PNF,CLA,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 2782,"- Equation (7): phi and psi seems inverted[Equation-NEG], [PNF-NEG]",Equation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2783,"- Section 4: The acronym MLP is used but never defined.[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2784,"update I lowered my score and confidence, see my new post below.[score-NEG, confidence-NEG], [REC-NEG]]",score,confidence,,,,,REC,,,,,NEG,NEG,,,,,NEG,,,, 2785,"Post Rebuttal I went through the rebuttal, which unfortunately claimed a number statements without any experimental support as requested.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2786,"The revision didn't address my concerns, and I've lowered my rating.[rating-NEG], [REC-NEG]",rating,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 2790,"Other than that, the paper has limited technical innovations: the pooling functions were proposed earlier and their integration with MIL was widely studied before (as cited by the authors); the attention mechanisms are also proposed by others.[technical innovations-NEG], [NOV-NEU]",technical innovations,,,,,,NOV,,,,,NEG,,,,,,NEU,,,, 2791,"However, I am doubtful whether it's appropriate to use LSTM to model the relations among instances.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2792,"In general MIL, there exists no temporal order among instances, so modeling them with a LSTM is unjustified.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2793,"It might be acceptable is the authors are focusing on time-series data; but in this case, it's unclear why the authors are applying MIL on it.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2794,"It seems other learning paradigm could be more appropriate.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2795,"The biggest concern I have with this paper is the unconvincing experiments.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2796,"First, the baselines are very weak. [baselines-NEG], [EMP-NEG]",baselines,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2797,"Both MISVM and DPMIL are MIL methods without using deep learning features.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2798,"It them becomes very unclear how much of the gain on Table 3 is from the use of deep learning, and how much is from the proposed RMIL.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2799,"Also, although the authors conducted a number of ablation studies, they don't really tell us much.[ablation studies-NEG], [SUB-NEG]",ablation studies,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2800,"Basically, all variants of the algorithm perform as well, so it's confusing why we need so many of them, or whether they can be integrated as a better model.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2801,"This could also be due to the small dataset.[dataset-NEG], [SUB-NEG]",dataset,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 2802,"As the authors are proposing a new MIL learning paradigm, I feel they should experiment on a number of MIL tasks, not limited to analyzing time series medical data.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 2803,"The current experiments are quite narrow in terms of scope. 
[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2809,"I am very disappointed in the authors' choice of evaluation, namely bAbI - a toy, synthetic task long abandoned by the NLP community because of its lack of practicality.[evaluation-NEG], [EMP-NEG]",evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2810,"If the authors would like to demonstrate question answering on long documents, they have the luxury of choosing amongst several large scale, realistic question answering datasets such as the Stanford Question answering dataset or TriviaQA.[datasets-NEU], [SUB-NEU]",datasets,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2811,"Beyond the problem of evaluation, the model the authors propose does not provide new ideas, and rather merges existing ones. This, in itself, is not a problem[model-NEG], [NOV-NEG]",model,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2812,". However, the authors decline to cite many, many important prior work. For example, the tuple extraction described by the authors has significant prior work in the information retrieval community (e.g. knowledge base population, relation extraction). [prior work-NEG], [CMP-NEG]",prior work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2813,"The idea of generating programs to query over populated knowledge bases, again, has significant related work in semantic parsing and program synthesis.[idea-NEU, related work-NEU], [NOV-NEU]",idea,related work,,,,,NOV,,,,,NEU,NEU,,,,,NEU,,,, 2814,"Question answering over (much more complex) probabilistic knowledge graphs have been proposed before as well (in fact I believe Matt Gardner wrote his entire thesis on this topic).[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 2815,"Finally, textual question answering (on realistic datasets) has seen significant breakthroughs in the last few years.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 2816,"Non of these areas, with the exception of semantic parsing, are addressed by the author.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 2817,"With sufficient knowledge of related works from these areas, I find that the authors' proposed method lacks proper evaluation and sufficient novelty.[related works-NEG, evaluation-NEG, novelty-NEG], [NOV-NEG, IMP-NEG]",related works,evaluation,novelty,,,,NOV,IMP,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 2822,"This seems too simple to really be the right way to add background knowledge.[background knowledge-NEG], [EMP-NEG]",background knowledge,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2823,"In practice, the biggest win of this paper turns out to be that you can get quite a lot of value by sharing contextualized word representations between all words with the same lemma (done by linguistic preprocessing;[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2824,"the paper never says exactly how, not even if you read the supplementary material).[paper-NEG, supplementary material-NEG], [SUB-NEG]",paper,supplementary material,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 2825,"This seems a useful observation which it would be easy to apply everywhere and which shows fairly large utility from a bit of linguistically sensitive matching![observation-POS], [EMP-POS]",observation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2826,"As the paper notes, this type of sharing is the main delta in this paper from simply using a standard deep LSTM (which the paper claims to not work on these data sets, though I'm not quite sure couldn't be made to work with more tuning).[paper-NEG, data sets-NEG], [SUB-NEG, EMP-NEG]",paper,data sets,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 2828,"A note on the QA results: 
The QA results are certainly good enough to be in the range of good systems,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2829,"but none of the results really push the SOTA.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2830,"The best SQuAD (devset) results are shown as several percent below the SOTA.[results-NEG], [CMP-NEG]",results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2831,"In the table the TriviaQA results are shown as beating the SOTA, and that's fair wrt published work at the time of submission,[results-POS, published work-POS], [EMP-POS]",results,published work,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2832,"but other submissions show that all of these results are below what you get by running the DrQA (Chen et al. 2017) system off-the-shelf on TriviaQA, so the real picture is perhaps similar to SQuAD, especially since DrQA is itself now considerably below the SOTA on SQUAD.[submissions-NEG, results-NEG], [CMP-NEG]",submissions,results,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 2833,"Similar remarks perhaps apply to the NLI results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2834,"p.7 In the additional NLI results, it is interesting and valuable to note that the lemmatization and knowledge help much more when amounts of data (and the covarying dimensionality of the word vectors) is much smaller,[results-POS, data-POS], [EMP-POS]",results,data,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2835,"but the fact that the ideas of this paper have quite little (or even negative) effects when run on the full data with full word vectors on top of the ESIM model again draws into question whether enough value is being achieved from the world knowledge.[ideas-NEG, data-NEG, value-NEG], [EMP-NEG]",ideas,data,value,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 2836,"Biggest question: - Are word embeddings powerful enough as a form of memory to store the kind of relational facts that you are accessing as background knowledge?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 2837,"Minor notes: - The paper was very well written/edited. [paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 2838,"The only real copyediting I noticed was in the conclusion: and be used -> and can be used; that rely on -> that relies on.[conclusion-POS], [PNF-POS]",conclusion,,,,,,PNF,,,,,POS,,,,,,POS,,,, 2839,"- Should reference to (Manning et al. 1999) better be to (Manning et al. 
2008) since the context here appears to be IR systems?[reference-NEU], [CMP-NEU]",reference,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 2840,"- On p.3 above sec 3.1: What is u?[p-NEG, sec-NEG], [CLA-NEG]",p,sec,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 2841,"Was that meant to be z?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 2842,"- On p.8, I'm a bit suspicious of the Is additional knowledge used?[p-NEG], [EMP-NEU]",p,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 2843,"experiment which trains with knowledge and then tests without knowledge.[experiment-NEG], [EMP-NEG]",experiment,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2844,"It's not surprising that this mismatch might hurt performance, even if the knowledge provided no incremental value over what could be gained from standard word vectors alone.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2845,"- In the supplementary material the paper notes that the numbers are from the best result from 3 runs.[supplementary material-POS, paper-POS, result-POS], [EMP-POS]",supplementary material,paper,result,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 2846,"This seems to me a little less good experimental practice than reporting an average of k runs, preferably for k a bit bigger than 3.[experimental practice-POS], [PNF-POS]]",experimental practice,,,,,,PNF,,,,,POS,,,,,,POS,,,, 2848,"for this the paper shows that there is a mismatch between the gaussian prior and an estimated of the latent codes of real data by reversal of the generator .[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2851,"Quality/clarity: The paper is well written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2852,"Originality: pros: -The paper while simple sheds some light on important problem with the prior distribution used in GAN.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 2853,"- the second GAN solution trained on reverse codes from real data is interesting [solution-POS], [EMP-POS]",solution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2854,"- In general the topic is interesting, the solution presented is simple but needs more study[topic-POS, solution-NEU], [SUB-NEU, EMP-NEU]",topic,solution,,,,,SUB,EMP,,,,POS,NEU,,,,,NEU,NEU,,, 2856,"- The solution presented is not end to end (learning a prior generator on learned models have been done in many previous works on encoder/decoder)[solution-NEG], [NOV-NEG]",solution,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2857,"General Review: More experimentation with the latent codes will be interesting:[experimentation-NEU], [SUB-NEU]",experimentation,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2858,"- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2860,"how does this change depending on the dimensionality of the latent codes?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2861,"Maybe adding plots to the paper can help.[paper-NEU], [PNF-NEU]",paper,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2862,"- the prior agreement score is interesting but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate.[data-NEU], [EMP-NEU]",data,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2863,"Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2864,"- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2865,"Maybe also rotating the codes with the singular vector matrix V or Sigma^{0.5} V?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2866,"- What architecture did you use for the prior generator GAN?[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2867,"- Have you thought of an end to end way to learn the prior generator GAN?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2870,"This should be the first work which introduces in the causal structure into the GAN, to solve the label dependency problem.[work-NEU], [NOV-POS]",work,,,,,,NOV,,,,,NEU,,,,,,POS,,,, 2871,"The idea is interesting and insightful.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2872,"The proposed method is theoretically analyzed and experimentally tested.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2873,"Two minor concerns are 1) what is the relationship between the anti-labeler and and discriminator?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2874,"2) how the tune related weight of the different objective functions. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2881,"Comments for the author: The paper addresses an important problem since understanding object interactions are crucial for reasoning.[problem-POS], [IMP-POS]",problem,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2882,"However, how widespread is this problem across other models or are you simply addressing a point problem for RN?[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2885,"The relationship network considers all pair-wise interactions that are replaced by a two-hop attention mechanism (and an MLP).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2886,"It would not be fair to claim superiority over RN since you only evaluate on bABi while RN also demonstrated results on other tasks.[results-NEU], [CMP-NEG, EMP-NEU, SUB-NEU]",results,,,,,,CMP,EMP,SUB,,,NEU,,,,,,NEG,NEU,NEU,, 2887,"For more complex tasks (even over just text), it is necessary to show that you outperform RN w/o considering all objects in a pairwise fashion.[tasks-NEU], [CMP-NEU, EMP-NEU]",tasks,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 2888,"More specifically, RN uses an MLP over pair-wise interactions, does that allow it to model more complex interactions than just selecting two hops to generate attention weights.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2889,"Showing results with multiple hops (1,2,..) 
would be useful here.[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 2890,"More details are needed about Figure 3.[details-NEU, Figure-NEU], [SUB-NEU]",details,Figure,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2892,"How did you generate these stories with so many sentences?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2893,"Another clarification is the bAbi performance over Entnet which claims to solve all tasks.[clarification-NEU], [EMP-NEU]",clarification,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2894,"Your results show 4 failed tasks, is this your reproduction of Entnet?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2895,"Finally, what are the savings from reducing this time complexity?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2896,"Some wall clock time results or FLOPs of train/test time should be provided since you use multiple hops.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2897,"Overall, this paper feels like a small improvement over RN.[paper-NEU], [IMP-NEU]",paper,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 2898,"Without experiments over other datasets and wall clock time results, it is hard to appreciate the significance of this improvement.[experiments-NEG, datasets-NEG], [SUB-NEG, IMP-NEU]",experiments,datasets,,,,,SUB,IMP,,,,NEG,NEG,,,,,NEG,NEU,,, 2899,"One direction to strengthen this paper is to examine if RMN can do better than pair-wise interactions (and other baselines) for more complex reasoning tasks.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2902,"This model does not use any recurrent operation but it is not per se simpler than a recurrent model.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2903,"Furthermore, the authors proposed an interesting idea to augment additional training data by paraphrasing based on off-the-shelf neural machine translation.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2904,"On SQuAD dataset, their results show some small improvements using the proposed augmentation technique.[results-POS, improvements-POS], [EMP-NEU]",results,improvements,,,,,EMP,,,,,POS,POS,,,,,NEU,,,, 2905,"Their best results, however, do not outperform the best results reported on the leader board.[best results-NEG], [EMP-NEG]",best results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2906,"Overall, this is an interesting study on SQuAD dataset.[study-POS], [IMP-POS]",study,,,,,,IMP,,,,,POS,,,,,,POS,,,, 2907,"I would like to see results on more datasets and more discussion on the data augmentation technique.[results-NEU, discussion-NEU], [SUB-NEU]",results,discussion,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 2908,"At the moment, the description in section 3 is fuzzy in my opinion. [description-NEG, section-NEU], [PNF-NEG]",description,section,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 2909,"Interesting information could be: - how is the performance of the NMT system?[performance-NEU], [PNF-NEU]",performance,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2910,"- how many new data points are finally added into the training data set?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2911,"- what do 'data aug' x 2 or x 3 exactly mean? 
[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 2913,"The interesting paper provides theoretical support for the low-dimensional vector embeddings computed using LSTMs or simple techniques, using tools from compressed sensing.[paper-POS, tools-NEU], [EMP-POS]",paper,tools,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 2914,"The paper also provides numerical results to support their theoretical findings.[paper-POS, numerical results-POS, theoretical findings-POS], [EMP-POS]",paper,numerical results,theoretical findings,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 2915,"The paper is well presented and organized.[paper-POS], [PNF-POS]",paper,,,,,,PNF,,,,,POS,,,,,,POS,,,, 2916,"-In theorem 4.1, the embedding dimension $d$ is depending on $T^2$, and it may scale poorly with respect to $T$.[theorem-NEG], [EMP-NEG]]",theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2918,"Pros The paper addresses an important application of deep networks, comparing the performance of a variety of different types of model architectures.[paper-POS, performance-POS], [EMP-POS]",paper,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2919,"The tested networks seem to perform reasonably well on the task.[networks-POS], [EMP-POS]",networks,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2920,"Cons There is little novelty in the proposed method/models -- the paper is primarily focused on comparing existing models on a new task.[proposed method-NEG], [NOV-NEG]",proposed method,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2921,"The descriptions of the different architectures compared are overly verbose -- they are all simple standard convnet / RNN architectures.[descriptions-NEG], [CMP-NEG]",descriptions,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2922,"The code specifying the models is also excessive for the main text -- it should be moved to an appendix or even left for a code release.[main text-NEG, appendix-NEU], [PNF-NEG]",main text,appendix,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 2923,"The comparisons between various architectures are not very enlightening as they aren't done in a controlled way -- there are a large number of differences between any pair of models so it's hard to tell where the performance differences come from.[comparisons-NEG], [CMP-NEG]",comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 2924,"It's also difficult to compare the learning curves among the different models (Fig 1) as they are in separate plots with differently scaled axes.[Fig-NEG, plots-NEG], [PNF-NEG]",Fig,plots,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 2925,"The proposed problem is an explicitly adversarial setting and adversarial examples are a well-known issue with deep networks and other models, but this issue is not addressed or analyzed in the paper.[proposed problem-NEU, issue-NEG], [EMP-NEG, SUB-NEG]",proposed problem,issue,,,,,EMP,SUB,,,,NEU,NEG,,,,,NEG,NEG,,, 2926,"(In fact, the intro claims this is an advantage of not using hand-engineered features for malicious domain detection, seemingly ignoring the literature on adversarial examples for deep nets.) 
For example, in this case an attacker could start with a legitimate domain name and use black box adversarial attacks (or white box attacks, given access to the model weights) to derive a similar domain name that the models proposed here would classify as benign.[intro-NEU, models proposed-NEG], [EMP-NEG]",intro,models proposed,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 2927,"While this paper addresses an important problem, in its current form the novelty and analysis are limited and the paper has some presentation issues.[paper-NEG, presentation issues-NEG], [NOV-NEG, PNF-NEG]",paper,presentation issues,,,,,NOV,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 2930,"The paper presents a well laid research methodology that shows a good decomposition of the problem at hand and the approach foreseen to solve it.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2931,"It is well reflected in the paper and most importantly the rationale for the implementation decisions taken is always clear.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2932,"The results obtained (as compared to FHEW) seem to indicate well thought off decisions taken to optimize the different gates' operations as clearly explained in the paper.[results-POS], [CLA-POS, EMP-POS]",results,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 2933,"For example, reducing bootstrapping operations by two-complementing both the plaintext and the ciphertext, whenever the number of 1s in the plain bit-string is greater than the number of 0s (3.4/Page 6).[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 2934,"Result interpretation is coherent with the approach and data used and shows a good understanding of the implications of the implementation decisions made in the system and the data sets used.[Result-POS], [EMP-POS]",Result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2935,"Overall, fine work, well organized, decomposed, and its rationale clearly explained.[work-POS], [CLA-POS]",work,,,,,,CLA,,,,,POS,,,,,,POS,,,, 2936,"The good results obtained support the design decisions made.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2937,"Our main concern is regarding thorough comparison to similar work and provision of comparative work assessment to support novelty claims.[comparison-NEG, novelty-NEG], [NOV-NEG, CMP-NEG, SUB-NEG]",comparison,novelty,,,,,NOV,CMP,SUB,,,NEG,NEG,,,,,NEG,NEG,NEG,, 2938,"Nota: - In Figure 4/Page 4: AND Table A(1)/B(0), shouldn't A And B be 0?[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2939,"- Unlike Figure 3/Page 3, in Figure 2/page 2, shouldn't operations' precedence prevail (No brackets), therefore 1+2*2 5?[Figure-NEU, Page-NEU], [PNF-NEU]",Figure,Page,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 2943,"The experimental results seem promising, but the presentation can be improved.[experimental results-POS, presentation-POS], [PNF-NEU]",experimental results,presentation,,,,,PNF,,,,,POS,POS,,,,,NEU,,,, 2944,"Some parts of the paper are hard to read.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2946,"2. What is the intuition in adding target cluster entropy in Eq. 3?[Eq-NEU], [EMP-NEU]",Eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2947,"3. In the adaptive cluster, I am a bit confused on the target of the parametric models. Where are X, Y of P(X|X*), P(Y|Y*) from?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2948,"Is it from pretrained models? It wasn't clear until I read the algorithm. Also, why are p(X|X*) called target cluster and P(Y|Y*) called source cluster?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2949,"4. 
In section 4.2, the name cluster is a bit confusing with the one in section 3.1. What's the relationship? The symbols C(Y*) and C(X*) are not used afterward.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2950,"5. In the conclusion, it claims the system is efficient in helping current model. What do you mean by efficient?[conclusion-NEU], [EMP-NEU]",conclusion,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2951,"6. The improvements of WMT are relatively small. Does it mean the proposed methods are not beneficial when there are large amounts of sentence pairs?[improvements-NEU, proposed methods-NEU], [EMP-NEU]",improvements,proposed methods,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 2953,"? 8. In the Monte-Carlo sampling, how many pairs are sampled? [proposed methods-NEU], [EMP-NEU]",proposed methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2954,"Minor 1. In Table 1, where is sigma defined?[Table-NEU], [PNF-NEU]",Table,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 2955,"2. The notation D for a dataset in Section 3.3 is confusing with D in system D.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2956,"3. There is some redundancy between Systems A, B, C, D and in the algorithm 1. I wonder whether it can be simplified.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2957,"4. In section 4.3, backward NMT (X|Y) -> backward NMT P(X|Y).[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2962,"I think this paper has a good and strong point: this work points out the difficulties in choosing properly the parameters in a HMC method (such as the step and the number of iterations in the leapfrog integrator, for instance).[paper-POS, work-POS], [EMP-POS]",paper,work,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 2964,"If I have understood, your method is an adaptive HMC algorithm where the parameters are updated online; or is the training done in advance? Please, remark and clarify this point.[method-NEG], [CLA-NEG]",method,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 2965,"However, I have other additional comments: - Eqs. (4) and (5) are quite complicated; I think a running toy example can help the interested reader.[Eqs-NEG], [PNF-NEG]",Eqs,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 2966,"- I suggest to compare the proposed method to other efficient methods that do not use the gradient information (in some cases as multimodal posteriors, the use of the gradient information can be counter-productive for sampling purposes), such as Multiple Try Metropolis (MTM) schemes[proposed method-NEU], [CMP-NEG]",proposed method,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 2967,"L. Martino, J. Read, On the flexibility of the design of Multiple Try Metropolis schemes, Computational Statistics, Volume 28, Issue 6, Pages: 2797-2823, 2013, adaptive techniques, [null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2968,"H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7(2):223–242, April 2001,[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2969,"and component-wise strategies as Gibbs Sampling, W. R. Gilks and P. Wild, Adaptive rejection sampling for Gibbs sampling, Appl. Statist., vol. 41, no. 2, pp. 
337–348, 199.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 2970,"At least, add a brief paragraph in the introduction citing and discussing this possible alternatives.[paragraph-NEU, introduction-NEG, alternatives-NEG], [CMP-NEG, SUB-NEG]]",paragraph,introduction,alternatives,,,,CMP,SUB,,,,NEU,NEG,NEG,,,,NEG,NEG,,, 2973,"This is clearly an application paper.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2974,"No new method is being proposed, only existing methods are applied directly to the task.[method-NEG, existing methods-NEG], [NOV-NEG]",method,existing methods,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 2975,"I am not familiar with the task at hand so I cannot properly judge the quality/accuracy of the results obtained but it seems ok.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 2977,"Given the amount of data 2*10**6 samples, this seems sufficient.[data-POS], [EMP-POS]",data,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2978,"I think the evaluation could be improved by using malware URLs that were obtained during a larger time window.[evaluation-NEG], [EMP-NEG]",evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 2979,"Specifically, it would be nice if train, test and validation URLs would be operated chronologically. I.e. all train url precede the validation and test urls.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 2980,"Ideally, the train and test urls would also be different in time.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2981,"This would enable a better test of the generalization capabilities in what is essentially a continuously changing environment.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 2982,"This paper is a very difficult for me to assign a final rating.[paper-NEU], [REC-NEU]",paper,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 2983,"There is no obvious technical mistake and the paper is written reasonably well.[technical mistake-POS, paper-POS], [CLA-POS]",technical mistake,paper,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 2984,"There is however a lack of technical novelty or insight in the models themselves.[models-NEG], [NOV-NEG]",models,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 2985,"I think that the paper should be submitted to a journal or conference in the application domain where it would be a better fit.[journal-NEG, conference-NEG], [APR-NEG]",journal,conference,,,,,APR,,,,,NEG,NEG,,,,,NEG,,,, 2986,"For this reason, I will give the score marginally below the acceptance threshold now.[score-NEG], [REC-NEG]",score,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 2995,"Pros: - The problem is relevant and also appears in similar form in domain adaptation and transfer learning.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2996,"- The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al.[earlier work-POS], [EMP-POS]",earlier work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 2997,"Cons: - I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others.[venue-NEG], [APR-NEG]",venue,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 2998,"- The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail.[manuscript-NEG, passages-NEG, detail-NEG], [SUB-NEG]",manuscript,passages,detail,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 2999,"Especially the second half of page 5 is at times very hard to understand as it is so dense.[page-NEG], [PNF-NEG]",page,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3000,"- The implications of the assumptions in Theorem 1 are not easy to
understand, especially relating to the quantities B_Phi, C^mathcal{F}_{n,delta} and D^{Phi,mathcal{H}}_delta.[assumptions-NEG, Theorem-NEG], [EMP-NEG]",assumptions,Theorem,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3001,"Why would we expect these quantities to be small or bounded?[quantities-NEG], [EMP-NEG]",quantities,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3002,"How does that compare to the assumptions needed for standard inverse probability weighting?[assumptions-NEU], [EMP-NEU]",assumptions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3003,"- I appreciate that it is difficult to find good test datasets for evaluating causal estimator.[datasets-NEU], [EMP-NEU]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3004,"The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?).[experiment-POS, information-POS, manuscript-POS], [EMP-POS]",experiment,information,manuscript,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 3005,"The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.[example-NEG, ideas-NEG], [SUB-NEG]]",example,ideas,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3007,"I think the approach is interesting and warrants publication.[approach-POS], [REC-POS]",approach,,,,,,REC,,,,,POS,,,,,,POS,,,, 3008,"However, I think some of the counter-intuitive claims on the proposal learning are overly strong, and not supported well enough by the experiments.[claims-NEG, experiments-NEG], [EMP-NEG]",claims,experiments,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3009,"In the paper the authors also need to describe the differences between their work and the concurrent work of Maddison et al. and Naesseth et al.[differences-NEG], [CMP-NEG]",differences,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3014,"The approach is interesting and the paper is well-written,[approach-POS, paper-POS], [CLA-POS, EMP-POS]",approach,paper,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 3015,"however, I have some comments and questions: - It seems clear that the AESMC bound does not in general optimize for q(x|y) to be close to p(x|y), except in the IWAE special case.[questions-NEU], [EMP-NEG]",questions,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3016,"This seems to mean that we should not expect for q -> p when K increases?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3017,"- Figure 1 seems inconclusive and it is a bit difficult to ascertain the claim that is made.[Figure-NEG, claim-NEG], [EMP-NEG]",Figure,claim,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3018,"If I'm not mistaken K 1 is regular ELBO and not IWAE/AESMC?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3019,"Have you estimated the probability for positive vs. negative gradient values for K 10?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3020,"To me it looks like the probability of it being larger than zero is something like 2/3.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3021,"K>10 is difficult to see from this plot alone.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3022,"- Is there a typo in the bound given by eq. 
(17)?[typo-NEG], [CLA-NEU]",typo,,,,,,CLA,,,,,NEG,,,,,,NEU,,,, 3023,"Seems like there are two identical terms.[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 3024,"Also I'm not sure about the first equality in this equatiion, is I^2 0 or is there a typo?[typo-NEU], [CLA-NEG]",typo,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 3025,"- The discussion in section 4.1 and results in the experimental section 5.2 seem a bit counter-intuitive, especially learning the proposals for SMC using IS.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3026,"Have you tried this for high-dimensional models as well?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3028,"For example have you tried learning proposals for the LG-SSM in Section 5.1 using the IS objective as proposed in 4.1?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3029,"Might this be a typo in 4.1?[typo-NEU], [CLA-NEU]",typo,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3030,"You still propose to learn the proposal parameters using SMC but with lower number of particles?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3031,"I suspect this lower number of particles might be model-dependent.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3032,"Minor comments: - Section 1, first paragraph, last sentence, that -> than?[Section-NEG, first paragraph-NEU, last sentence-NEU], [PNF-NEG]",Section,first paragraph,last sentence,,,,PNF,,,,,NEG,NEU,NEU,,,,NEG,,,, 3033,"- Section 3.2, ... using which... formulation in two places in the firsth and second paragraph was a bit confusing[Section-NEU], [CLA-NEG]",Section,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 3034,"- Page 7, second line, just IS?[Page-NEU, second line-NEU], [CLA-NEG]",Page,second line,,,,,CLA,,,,,NEU,NEU,,,,,NEG,,,, 3035,"- Perhaps you can clarify the last sentence in the second paragraph of Section 5.1 about computational graph not influencing gradient updates?[second paragraph-NEU, Section-NEU], [CLA-NEG]",second paragraph,Section,,,,,CLA,,,,,NEU,NEU,,,,,NEG,,,, 3036,"- Section 5.2, stochastic variational inference Hoffman et al. 
(2013) uses natural gradients and exact variational solution for local latents so I don't think K 1 reduces to this?[Section-NEU], [EMP-NEG]",Section,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3039,"The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose.[paper-POS, previous methods-NEU], [CLA-POS, CMP-POS]",paper,previous methods,,,,,CLA,CMP,,,,POS,NEU,,,,,POS,POS,,, 3040,"The experiments are very clearly presented and solidly designed.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3041,"In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 3042,"Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods.[proposed method-NEG, results-NEU], [NOV-NEG]",proposed method,results,,,,,NOV,,,,,NEG,NEU,,,,,NEG,,,, 3043,"n Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3044,"However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ($b_j^t in Q^{n_l}$).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3045,"This may not be relevant for the practical results, but to be accurate, it can't be simply stated that the algorithm converges, without a more careful analysis.[results-NEU, algorithm-NEU, analysis-NEU], [EMP-NEG]",results,algorithm,analysis,,,,EMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 3050,"In general, I think the paper is written clearly and in detail.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3051,"Some typos and minor issues are listed in the Cons part below.[typos-NEG, minor issues-NEG], [PNF-NEG]",typos,minor issues,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 3052,"Pros: The authors lead a very nice exploration into the binary nets in the paper, from the most basic analysis on the converging angle between original and binarized weight vectors, to how this convergence could affect the weight-activation dot product, to pointing out that binarization affects differently on the first layer.[exploration-POS], [EMP-POS]",exploration,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3053,"Many empirical and theoretical proofs are given, as well as some practical tricks that could be useful for diagnosing binary nets in the future.[proofs-POS, tricks-POS], [EMP-POS, IMP-POS]",proofs,tricks,,,,,EMP,IMP,,,,POS,POS,,,,,POS,POS,,, 3054,"Cons: * it seems that there are quite some typos in the paper, for example: 1. Section 1, in the second contribution, there are two thens.[typos-NEG, Section-NEG], [PNF-NEG]",typos,Section,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 3055,"2. Section 1, the citation format of Bengio et al. (2013) should be (Bengio et al. 
2013).[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3056,"* Section 2, there is an ordering mistake in introducing Han et al.'s work, DeepComporession actually comes before the DSD.[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3057,"* Fig 2(c), the correlation between the theoretical expectation and angle distribution from (b) seems not very clear.[null], [CLA-NEG, EMP-NEG]",null,,,,,,CLA,EMP,,,,,,,,,,NEG,NEG,,, 3058,"* In appendix, Section 5.1, Lemma 1. Could you include some of the steps in getting g(row) to make it clearer?[Section-NEG, appendix-NEG], [CLA-NEG, SUB-NEG]",Section,appendix,,,,,CLA,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 3059,"I think the length of the proof won't matter a lot since it is already in the appendix, but it makes the reader a lot easier to understand it.[appendix-NEU], [PNF-NEU]]",appendix,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3060,"UPDATED COMMENT I've improved my score to 6 to reflect the authors' revisions to the paper and their response to my and R2's comments.[score-NEU, revisions-NEU, paper-NEU, response-NEU, comments-NEU], [REC-NEU]",score,revisions,paper,response,comments,,REC,,,,,NEU,NEU,NEU,NEU,NEU,,NEU,,,, 3061,"I still think the work is somewhat incremental, but they have done a good job of exploring the idea (which is nice).[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3064,"The resulting architecture outperforms vanilla deep nets and sometimes approaches the performance of ResNets.[architecture-POS], [CMP-POS]",architecture,,,,,,CMP,,,,,POS,,,,,,POS,,,, 3065,"It's a nice, simple idea.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3066,"However, I don't think it's sufficient for acceptance.[acceptance-NEG], [REC-NEG]",acceptance,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 3067,"Unfortunately, this seems to be a simple idea that doesn't work as well as the simpler idea (ResNets) that inspired it.[idea-NEG], [EMP-NEG]",idea,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3068,"Moreover, the experiments are weak in two senses: (i) there are lots of obvious open questions that should have been explored and closed, see below,[questions-NEG], [EMP-NEG, SUB-NEG]",questions,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 3069,"and (ii) the results just aren't that good.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3070,"Comments: 1. Why force the Lag. multipliers to 1 at the end of training?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3071,"It seems easy enough to treat the alphas as just more parameters to optimize with gradient descent.[parameters-NEU], [EMP-NEU]",parameters,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3072,"I would expect the resulting architecture to perform at least as well as variable action nets.[resulting architecture-NEU], [EMP-NEU]",resulting architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3073,"If not, I'd be curious as to why.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3074,"2.Similarly, it's not obvious that initializing the multipliers at 0.5 is the best choice.[choice-NEG], [EMP-NEG]",choice,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3075,"The ""looks linear"" initialization proposed in ""The shattered gradients problem"" (Balduzzi et al) implies that alpha 0 may work better.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 3076,"Did the authors try any values besides 0.5?[values-NEU], [EMP-NEU]",values,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3077,"3. 
The final paragraph of the paper discusses extending the approach to architectures with skip-connections.[paragraph-NEU, approach-NEU, architectures-NEU], [EMP-NEU]",paragraph,approach,architectures,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 3078,"Firstly, it's not clear to me what this would add, since the method is already interpolating in some sense between vanilla and resnets.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3079,"Secondly, why not just do it?[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3081,"The idea of employing ensemble of classifiers is smart and effective.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3082,"I am curious about the efficiency of the method.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3083,"The experimental study is extensive.[experimental study-NEU], [EMP-NEU]",experimental study,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3084,"Results are well discussed with reasonable observations.[Results-POS, observations-POS], [EMP-POS, SUB-POS]",Results,observations,,,,,EMP,SUB,,,,POS,POS,,,,,POS,POS,,, 3085,"In addition to examining the effectiveness, authors also performed experiments to explain why OPTMARGIN is superior.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3086,"Authors are suggested to involve more datasets to validate the effectiveness of the proposed method.[proposed method-NEU], [SUB-NEU, EMP-NEU]",proposed method,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 3087,"Table 5 is not very clear.[Table-NEG], [PNF-NEG]",Table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3088,"Authors are suggested to discuss in more detail. [detail-NEU], [SUB-NEU]",detail,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 3092,"This paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains.[paper-POS, models-POS], [EMP-POS]",paper,models,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3093,"But it is somewhat unfocused and does not seem make a clear contribution to any of these.[contribution-NEG], [EMP-NEG]",contribution,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3094,"The recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g. http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g. 
WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation.[architecture-NEG, architectural innovation-NEG], [NOV-NEG]",architecture,architectural innovation,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 3095,"I also find it difficult to assess whether the proposed model is actually generating reasonable time series.[proposed model-NEG], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3096,"It may be true that one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data because it could not rule out that case that the model has captured the marginal distribution in each dimension but not joint structure.[information-NEG], [EMP-NEG]",information,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3097,"However producing marginal distributions that look reasonable is at least a *necessary* condition and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3101,"But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it's not obvious that the small set of tasks evaluated are representative of all tasks people might care about.[results-NEG, Table-NEG], [EMP-NEG]",results,Table,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3102,"The attempts to demonstrate empirically that the GAN does not memorize training data aren't particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn't reveal private data doesn't imply that a determined attacker wouldn't succeed.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3103,"In this vein, the experiments with DP-SGD are more interesting,[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3104,"although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and and it is not clear how the settings (ε 0.5 and δ ≤ 9.8 × 10⁻³) were selected or whether they provide a useful level of privacy.[comparison-NEG, Tables-NEG], [EMP-NEG]",comparison,Tables,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3105,"That said I agree this is an interesting avenue for future work.[future work-POS], [IMP-POS]",future work,,,,,,IMP,,,,,POS,,,,,,POS,,,, 3106,"Finally it's worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3107,"It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large.[analysis-NEG, paper-NEG], [SUB-NEG, EMP-NEG]",analysis,paper,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3108,"One could imagine coding missing data in various ways (e.g. asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application.
[principled approach-NEG, results-NEG], [SUB-NEG, EMP-NEG]]",principled approach,results,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3113,"While the paper is reasonably clearly written and easy to read[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3114,"I have a number of objections to it.[objections-NEG], [CLA-NEG]",objections,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3115,"First, I did not see any novel idea presented in this paper.[novel idea-NEG, paper-NEG], [NOV-NEG]",novel idea,paper,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 3119,"Unless I have missed something completely, I did not see any novel idea proposed in this paper.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 3120,"Second, the experiments are quite underwhelming and does not fully support the superiority claims of the proposed approach.[experiments-NEG, proposed approach-NEG], [EMP-NEG]",experiments,proposed approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3121,"For example, the authors compare their model against rather weak baselines.[models-NEU, baselines-NEG], [CMP-NEG]",models,baselines,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 3122,"While the approach (as has been shown in the past) is very reasonable,[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3123,"I would have liked the experiments to be more thorough, with comparison to the state of the art models for the two datasets.[experiments-NEG], [EMP-NEG]]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3126,"The paper is mostly clear and well-presented,[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 3127,"except for two issues: 1) there is virtually nothing novel presented in the first half of the paper (before Section 3.3);[first half-NEG, paper-NEG], [NOV-NEG]",first half,paper,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 3128,"and 2) the actual learning step is only presented on page 6, making it hard to understand the motivation behind the guide actor until very late through the paper. The presented method itself seems to be an important contribution, even if the results are not overwhelmingly positive.[page-NEG, motivation-NEG, paper-NEG, presented method-NEG, contribution-NEG, results-NEG], [EMP-NEG]",page,motivation,paper,presented method,contribution,results,EMP,,,,,NEG,NEG,NEG,NEG,NEG,NEG,NEG,,,, 3129,"It'd be interesting to see a more elaborate analysis of why it works well in some domains but not in others.[analysis-NEG], [SUB-NEG]",analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3130,"More trials are also needed to alleviate any suspicion of lucky trials.[trials-NEG], [SUB-NEG]",trials,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3131,"There are some other issues with the presentation of the method, but these don't affect the merit of the method:[presentation-NEG], [PNF-NEG]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3133,"While this makes sense in well-mixing domains, the experiment domains are not well-mixing for most policies during training, for example a fallen humanoid will not get up on its own, and must be reset.[domains-NEG], [EMP-NEG]",domains,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3134,"2. The definition of beta(a|s) as a mixture of past actors is inconsistent with the sampling method, which seems to be a mixture of past trajectories.[definition-NEG], [EMP-NEG]",definition,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3137,"What else does it depend on?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3141,"the action a_0 should be similar to actions sampled from pi_theta(a|s). 
What do you mean should?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 3142,"In order for the Taylor approximation to be good?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3143,"4. The line before (19) is confusing, since (19) is exact and not an approximation.[line-NEG], [CLA-NEG]",line,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3144,"For the approximation (20), it isn't clear if this is a good approximation.[approximation-NEG], [EMP-NEG, CLA-NEG]",approximation,,,,,,EMP,CLA,,,,NEG,,,,,,NEG,NEG,,, 3145,"Why/when is the 2nd term in (19) small?[term-NEU], [EMP-NEU]",term,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3146,"5. The parametrization nu of hat{Q} is never specified in Section 3.6.[Section-NEG], [EMP-NEG, CLA-NEG]",Section,,,,,,EMP,CLA,,,,NEG,,,,,,NEG,NEG,,, 3148,"The authors clearly describe the problem being addressed in the manuscript and motivate their solution very clearly.[problem-POS, solution-POS], [EMP-POS]",problem,solution,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3149,"The proposed solution seems very intuitive and the empirical evaluations demonstrates its utility. [proposed solution-POS, empirical evaluations-POS], [EMP-POS]",proposed solution,empirical evaluations,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3150,"My main concern is the underlying assumption (if I understand correctly) that the adversarial attack technique that the detector has to handle needs to be available at the training time of the detector.[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3151,"Especially since the empirical evaluations are designed in such a way where the training and test data for the detector are perturbed with the same attack technique.[empirical evaluations-NEU], [EMP-NEU]",empirical evaluations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3153,"Specific comments/questions: - (Minor) Page 3, Eq 1: I think the expansion dimension cares more about the probability mass in the volume rather than the volume itself even in the Euclidean setting.[Page-NEG, Eq-NEG], [EMP-NEG]",Page,Eq,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3154,"- Section 4: The different pieces of the problem (estimation, intuition for adversarial subspaces, efficiency) are very well described.[Section-POS], [EMP-POS]",Section,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3155,"- Alg 1, L3: Is this where the normal exmaples are converted to adversarial examples using some attack technique?[Alg-NEG, technique-NEU], [EMP-NEG]",Alg,technique,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 3156,"- Alg 1, L12: Is LID_norm computed using a leave-one-out estimate?[Alg-NEU], [EMP-NEU]",Alg,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3157,"Otherwise, r_1(.) 
for each point is 0, leading to a somewhat under-estimate of the true LID of the normal points in the training set.[training set-NEU], [EMP-NEU]",training set,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3158,"I understand that it is not an issue in the test set.[issue-NEU], [EMP-NEU]",issue,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3159,"- Section 4 and Alg 1: S we do not really care about the labels/targets of the examples.[Section-NEU, Alg-NEU], [EMP-NEU]",Section,Alg,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3160,"All examples in the dataset are considered ormal to start with.[examples-NEU, dataset-NEU], [EMP-NEU]",examples,dataset,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3161,"Is this assuming that the initial training set which is used to obtain the pre-trained DNN free of adversarial examples?[training set-NEU], [EMP-NEU]",training set,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3162,"- Section 5, Experimental Setup: Seems like normal points in the test set would get lesser values if we are not doing the leave-one-out version of the estimation.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3163,"- Section 5: The authors have done a great job at evaluating every aspect of the proposed method. [Section-POS], [EMP-POS]",Section,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3171,"Unfortunately there is minimal quantitative evaluation (visualizing 264 MNIST samples is not enough).[quantitative evaluation-NEG], [SUB-NEG]",quantitative evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3172,"The only quantitative evaluation is in Table 1, and it seems the model is not able to generalize reliably to all rotations and all digits.[quantitative evaluation-NEG, Table-NEG, model-NEG], [EMP-NEG]",quantitative evaluation,Table,model,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 3173,"Clearly, we can't expect perfect performance, but there are some troubling results: 5.2 accuracy on non-rotated 0s, 0.0 accuracy on non-rotated 6s.[results-NEG, accuracy-NEG], [EMP-NEG]",results,accuracy,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3174,"Every digit has at least one rotation that is not well classified, so this section could use more discussion and analysis.[section-NEG, discussion-NEG, analysis-NEG], [EMP-NEG]",section,discussion,analysis,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 3175,"For example, how would this metric classify VAE samples with contexts corresponding only to digit type (no rotations)? 
How would this metric classify vanilla VAE samples that are hand labeled?[metric-NEU], [EMP-NEU]",metric,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3176,"Moreover, the context selection variable a should be considered part of the dataset, and as such the paper should report how a was selected.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3177,"This model is a relatively simple extension of the Neural Statistician, so the novelty of the idea is not enough to counterbalance the lack of quantitative evaluation.[model-NEG, idea-NEG, quantitative evaluation-NEG], [NOV-NEG, SUB-NEG]",model,idea,quantitative evaluation,,,,NOV,SUB,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 3178,"I do think the idea is well-motivated, and represents a promising way to incorporate prior knowledge of concepts into our training of VAEs.[idea-POS], [CMP-POS, EMP-POS]",idea,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 3179,"Still, the paper as it stands is not complete, and I encourage the authors to followup with more thorough quantitative empirical evaluations.[paper-NEG, empirical evaluations-NEG], [SUB-NEG, EMP-NEG]]",paper,empirical evaluations,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3181,"While the motivation of the paper makes sense, the model is not properly justified, and I learned very little after reading the paper.[motivation-POS, model-NEG], [CLA-NEG]",motivation,model,,,,,CLA,,,,,POS,NEG,,,,,NEG,,,, 3183,"For all the experiments, the same set of parameters are used, and it is claimed that ""the method is robust in our experiment and simply works without fine tuning"".[experiments-POS], [EMP-NEU]",experiments,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 3184,"While I agree that a robust and fine-tuning-free model is ideal 1) this has to be justified by experiment. 2) showing the experiment with different parameters will help us understand the role each component plays.[model-NEU, experiment-NEU], [SUB-NEU, EMP-NEU]",model,experiment,,,,,SUB,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 3185,"This is perhaps more important than improving the baseline method by a few point, especially given that the goal of this work is not to beat the state-of-the-art.[work-NEU], [CMP-NEU]",work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3188,"However, the experiments are too weak to demonstrate the effectiveness of using discrete representations.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3189,"The design of the experiments on language model is problematic.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3190,"There are a few interesting points about discretizing the represenations by saturating sigmoid and gumbel-softmax, but the lack of comparisons to benchmarks is a critical defect of this paper.[comparisons-NEG, benchmarks-NEU], [CMP-NEG]",comparisons,benchmarks,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 3191,"Generally, continuous vector representations are more powerful than discrete ones, but discreteness corresponds to some inductive biases that might help the learning of deep neural networks, which is the appealing part of discrete representations, especially the stochastic discrete representations.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3192,"However, I didn't see the intuitions behind the model that would result in its superiority to the continuous counterpart. 
[intuitions-NEU], [EMP-NEU]",intuitions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3193,"The proposal of DSAE might help evaluate the usage of the 'autoencoding function' c(s), but it is certainly not enough to convince people.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3194,"How is the performance if c(s) is replaced with the representations achieved from autoencoder, variational autoencoder or simply the sentence vectors produced by language model?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3195,"The qualitative evaluation on 'Deciperhing the Latent Code' is not enough either.[qualitative evaluation-NEG], [SUB-NEG]",qualitative evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3196,"In addition, the language model part doesn't sound correct, because the model cheated on seeing the further before predicting the words autoregressively.[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3197,"One suggestion is to change the framework to variational auto-encoder, otherwise anything related to perplexity is not correct in this case.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3198,"Overall, this paper is more suitable for the workshop track[paper-NEU], [APR-NEU]",paper,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 3204,"The idea seems interesting.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3205,"However, I think there are several main drawbacks, detailed as follows: 1. The paper lacks a coherent and complete review of the semi-supervised deep learning.[paper-NEG, review-NEG], [SUB-NEG]",paper,review,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3206,"Herewith some important missing papers, which are the previous or current state-of-the-art.[papers-NEG], [SUB-NEG, CMP-NEG]",papers,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 3213,"Besides, some papers should be mentioned in the related work such as Kingma et. al. 2014.[papers-NEG, related work-NEG], [SUB-NEG, CMP-NEG]",papers,related work,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3214,"I'm not an expert of the network inversion and not sure whether the related work of this part is sufficient or not.[related work-NEG], [SUB-NEG, CMP-NEG]",related work,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 3215,"2. 
The motivation is not sufficient and not well supported.[motivation-NEG], [EMP-NEG]",motivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3216,"As stated in the introduction, the authors think there are several drawbacks of existing methods including training instability, lack of topology generalization and computational complexity.[introduction-NEU, drawbacks-NEG, methods-NEU, computational complexity-NEG], [CMP-NEG]",introduction,drawbacks,methods,computational complexity,,,CMP,,,,,NEU,NEG,NEU,NEG,,,NEG,,,, 3220,"mentioned above are efficient and not too sensitive with respect to the network architectures.[approaches-POS], [EMP-POS]",approaches,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3221,"Overall, I think the drawbacks mentioned in the paper are not common in existing methods and I do not see clear benefits of the proposed method.[drawbacks-NEG, method-NEG], [CMP-NEG]",drawbacks,method,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 3222,"Again, I strongly suggest the authors to provide a complete review of the literature.[review-NEG], [SUB-NEG]",review,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3223,"Further, please explicitly support your claim via experiments.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3225,"in terms of the training efficiency.[proposed method-NEG, approaches-NEG], [CMP-NEG, SUB-NEG]",proposed method,approaches,,,,,CMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 3226,"It's not fair to say GAN-based methods require more training time because these methods can do generation and style-class disentanglement while the proposed method cannot.[methods-NEU, proposed method-NEG], [CMP-NEG]",methods,proposed method,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 3227,"3. The experimental results are not so convincing.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3228,"First, please systematically compare your methods with existing methods on the widely adopted benchmarks including MNIST with 20, 100 labels and SVHN with 500, 1000 labels and CIFAR10 with 4000 labels.[methods-NEG, existing methods-NEG], [CMP-NEG]",methods,existing methods,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 3229,"It is not safe to say the proposed method is the state-of-the-art by only showing the results in one setting.[proposed method-NEG, results-NEG], [CMP-NEG]",proposed method,results,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 3230,"Second, please report the results of the proposed method with comparable architectures used in previous methods and state clearly the number of parameters in each model.[results-NEG, proposed method-NEG, previous methods-NEG, model-NEU], [CMP-NEG, SUB-NEG]",results,proposed method,previous methods,model,,,CMP,SUB,,,,NEG,NEG,NEG,NEU,,,NEG,NEG,,, 3231,"Resnet is powerful but previous methods did not use that.[previous methods-NEG], [CMP-NEG]",previous methods,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3232,"Last, show the sensitive results of the proposed method by tuning alpha and beta.[results-NEU, proposed method-NEU], [EMP-NEU]",results,proposed method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3233,"For instance, please show what is the actual contribution of the proposed reconstruction loss to the classification accuracy with the other losses existing or not?[contribution-NEG, classification accuracy-NEU], [SUB-NEU]",contribution,classification accuracy,,,,,SUB,,,,,NEG,NEU,,,,,NEU,,,, 3234,"I think the quality of the paper should be further improved by addressing these problems and currently it should be rejected.[paper-NEG, problems-NEG], [CLA-NEG, PNF-NEG, 
REC-NEG]]",paper,problems,,,,,CLA,PNF,REC,,,NEG,NEG,,,,,NEG,NEG,NEG,, 3237,"There is a nice variety of authors and words, though I question if even with all those books, the corpus is big enough to produce meaningful vectors.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3239,"It is hard to believe that meaningful results are achieved using such a small dataset with random initialization.[results-NEG, dataset-NEG], [EMP-NEG, SUB-NEG]",results,dataset,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 3240,"I think table 5 is also a bit strange.[table-NEG], [PNF-NEG]",table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3241,"If the rank is > 1000 I wonder how meaningful it actually is.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3243,"It seems that table 1 is the only evaluation of the proposed method against any other type of method (glove, which is not a tensor-based method).[table-NEU, evaluation-NEU, proposed method-NEU], [EMP-NEG]",table,evaluation,proposed method,,,,EMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 3244,"I think this is not sufficient.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 3245,"Overall, I believe the idea is nice, and the initial analysis is good,[idea-POS, analysis-POS], [EMP-POS]",idea,analysis,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3246,"but I think the evaluation, especially against other methods, needs to be stronger.[evaluation-POS, methods-POS], [CMP-NEG]",evaluation,methods,,,,,CMP,,,,,POS,POS,,,,,NEG,,,, 3247,"Methods like neelakantan et al's multisense embedding, for example, which the work cites, can be used in some of these evaluations, specifically on those where covariate information clearly contributes (like contextual tasks).[work-NEU, evaluations-NEU], [EMP-NEU]",work,evaluations,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3248,"The addition of one or two tables with either a standard task against reported results or created tasks against downloadable contextual / tensor embeddings would be enough for me to change my vote. [tables-NEU, results-NEU], [REC-NEU]",tables,results,,,,,REC,,,,,NEU,NEU,,,,,NEU,,,, 3252,"The whole model can be seen as an RL agent, trained to do splitting action on a set of instances in such a way, that jointly trained predictor T quality is maximised (and thus its current log prob: log p(Y|P(X)) becomes a reward for an RL agent).[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3253,"Authors claim that model like this (strengthened with pointer networks/graph nets etc. 
depending on the application) leads to empirical improvement on three tasks - convex hull finding, k-means clustering and on TSP.[model-NEU, empirical improvement-NEU, tasks-NEU], [EMP-NEU]",model,empirical improvement,tasks,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 3254,"However, while results on convex hull task are good,[results-NEU], [EMP-POS]",results,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 3255,"k-means ones use a single, artificial problem (and do not test DCN, but rather a part of it), and on TSP DCN performs significantly worse than baselines in-distribution, and is better when tested on bigger problems than it is trained on.[problem-NEG, baselines-NEU], [EMP-NEG]",problem,baselines,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 3256,"However the generalisation scores themselves are pretty bad thus it is not clear if this can be called a success story.[scores-NEG], [EMP-NEG]",scores,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3257,"I will be happy to revisit the rating if the experimental section is enriched.[experimental section-NEG], [REC-NEU]",experimental section,,,,,,REC,,,,,NEG,,,,,,NEU,,,, 3258,"Pros: - very easy to follow idea and model[idea-POS, model-POS], [EMP-POS]",idea,model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3259,"- simple merge or RL and SL in an end-to-end trainable model - improvements over previous solutions[model-NEU, improvements-POS, previous solutions-NEU], [EMP-POS]",model,improvements,previous solutions,,,,EMP,,,,,NEU,POS,NEU,,,,POS,,,, 3260,"Cons: - K-means experiments should not be run on artificial dataset, there are plenty of benchmarking datasets out there.[experiments-NEG, benchmarking datasets-NEU], [EMP-NEG]",experiments,benchmarking datasets,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 3261,"In current form it is just a proof of concept experiment rather than evaluation (+ if is only for splitting, not for the entire architecture proposed).[experiments-NEU, evaluation-NEG, architecture proposed-NEU], [EMP-NEG]",experiments,evaluation,architecture proposed,,,,EMP,,,,,NEU,NEG,NEU,,,,NEG,,,, 3262,"It would be also beneficial to see the score normalised by the cost found by k-means itself (say using Lloyd's method), as otherwise numbers are impossible to interpret.[score-NEU], [EMP-NEU]",score,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3263,"With normalisation, claiming that it finds 20% worse solution than k-means is indeed meaningful.[solution-NEU], [EMP-NEU]",solution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3264,"- TSP experiments show that in distribution DCN perform worse than baselines, and when generalising to bigger problems they fail more gracefully, however the accuracies on higher problem are pretty bad, thus it is not clear if they are significant enough to claim success.[accuracies-NEG, problem-NEU, significant-NEG], [EMP-NEG]",accuracies,problem,significant,,,,EMP,,,,,NEG,NEU,NEG,,,,NEG,,,, 3265,"Maybe TSP is not the best application of this kind of approach (as authors state in the paper - it is not clear how merging would be applied in the first place).[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3266,"- in general - experimental section should be extended, as currently the only convincing success story lies in convex hull experiments[experimental section-NEU], [EMP-POS]",experimental section,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 3267,"Side notes: - DCN is already quite commonly used abbreviation for Deep Classifier Network as well as Dynamic Capacity Network, thus might be a good idea to find different name.[abbreviation-NEU], [EMP-NEU]",abbreviation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3268,"- please fix cite 
calls to citep, when authors name is not used as part of the sentence, for example: Graph Neural Network Nowak et al. (2017) should be Graph Neural Network (Nowak et al. (2017))[cite calls-NEU], [PNF-NEU]",cite calls,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3269,"# After the update Evaluation section has been updated threefold: - TSP experiments are now in the appendix rather than main part of the paper[experiments-NEU, main part-NEU], [PNF-NEG]",experiments,main part,,,,,PNF,,,,,NEU,NEU,,,,,NEG,,,, 3272,"Paper significantly benefited from these changes, however experimental section is still based purely on toy datasets (clustering cifar10 patches is the least toy problem, but if one claims that proposed method is a good clusterer one would have to beat actual clustering techniques to show that), and in both cases simple problem-specific baseline (Lloyd for k-means, greedy knapsack solver) beats proposed method.[Paper-NEU, experimental section-NEG, datasets-NEG, proposed method-NEU, baseline-NEU], [EMP-NEG]",Paper,experimental section,datasets,proposed method,baseline,,EMP,,,,,NEU,NEG,NEG,NEU,NEU,,NEG,,,, 3273,"I can see the benefit of trainable approach here,[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3274,"the fact that one could in principle move towards other objectives, where deriving Lloyd alternative might be hard; however current version of the paper still does not show that.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3275,"I increased rating for the paper,[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 3276,"however in order to put the clear accept mark I would expect to see at least one problem where proposed method beats all basic baselines (thus it has to either be the problem where we do not have simple algorithms for it, and then beating ML baseline is fine; or a problem where one can beat the typical heuristic approaches). [problem-NEU, proposed method-NEU, baselines-NEU, algorithms-NEU], [REC-NEU]",problem,proposed method,baselines,algorithms,,,REC,,,,,NEU,NEU,NEU,NEU,,,NEU,,,, 3279,"The proposed method was evaluated on the SQuAD dataset only, and marginal improvement was observed compared to the baselines.[proposed method-POS, improvement-NEU], [EMP-NEU]",proposed method,improvement,,,,,EMP,,,,,POS,NEU,,,,,NEU,,,, 3280,"(1) One concern I have for this paper is about the evaluation. 
The paper only evaluates the proposed method on the SQuAD data with systems submitted in July 2017, and the improvement is not very large.[evaluation-NEU], [CMP-NEU]",evaluation,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3281,"As a result, the results are not suggesting significance or generalizability of the proposed method.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3282,"(2) The paper gives some ablation tests like reducing the number of layers and removing the gate-specific question embedding, which help a lot for understanding how the proposed methods contribute to the improvement.[proposed methods-POS], [EMP-POS]",proposed methods,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3283,"However, the results show that the deeper self-attention layers are indeed useful (but still not improving a lot, about 0.7-0.8%).[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3284,"The other proposed components contribute less significant.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 3285,"As a result, I suggest the authors add more ablation tests regarding (1) replacing the outer-fusion with simple concatenation (it should work for two attention layers); (2) removing the inner-fusion layer and only use the final layer's output, and using residual connections (like many NLP papers did) instead of the more complicated GRU stuff.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3286,"(3) Regarding the ablation in Table 2, my first concern is that the improvement seems small (~0.5%). [Table-NEU], [IMP-NEU]",Table,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3287,"As a result, I am wondering whether this separated question embedding really brings new information, or the similar improvement can be achieved by increasing the size of LSTM layers.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3288,"For example, if we use the single shared question embeddings, but increase the size from 128 to some larger number like 192, can we observe similar improvement.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3289,"I suggest the authors try this experiment as well and I hope the answer is no, as separated input embeddings for gate functions was verified to be useful in some old works with syntactic features as gate values, like Semantic frame identification with distributed word representations and Learning composition models for phrase embeddings etc.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3290,"(4) Please specify which version of the SQuAD leaderboard is used in Table 3.[Table-NEU], [CLA-NEU]",Table,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3292,"Because this paper is not comparing to the state-of-the-art, no specification of the leaderboard version may confuse the other reviewers and readers. 
[paper-NEG], [CMP-NEG]",paper,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3293,"By the way, it will be better to compare to the snapshot of Oct 2017 as well, indicating the position of this work during the submission deadline.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 3294,"Minor issues: (1) There are typos in Figure 1 regarding the notations of Question Features and Passage Features.[typos-NEG, notations-NEG], [PNF-NEG]",typos,notations,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 3295,"(2) In Figure 1, I suggest adding an N times symbol to the left of the Q-P Attention Layer and remove the current list of such layers, in order to be consistent to the other parts of the figure.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3296,"(3) What is the relation between the PhaseCond, QPAtt+b in Table 2 and the PhaseCond in Table 3?[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3300,"No analysis is reported on how these affect the performance of GTI.[analysis-NEG, performance-NEU], [SUB-NEG]",analysis,performance,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 3302,"How important are these two methods to the success of GTI?[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3303,"Why is it reasonable to restore a k-by-k adjacency matrix from the standard uniform distribution (as stated in Section 2.1)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3304,"Why is the stride for the convolutional/deconvoluational layers set to 2 (as stated in Section 2.1)?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3310,"However, it is not clear how one selects a $re^{i}{G}$ from among the various i values.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3312,"Was the edge-importance reported in Section 2.3 checked against various measures of edge importance such as edge betweenness?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3313,"Table 1 needs more discussion in terms of retained edge percentage for ordered stages.[Table-NEG, discussion-NEG], [SUB-NEG]",Table,discussion,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3314,"Should one expect a certain trend in these sequences?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3315,"Almost all of the experiments are qualitative and can be easily made quantitive by comparing PageRank or degree of nodes.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3316,"The discussion on graph sampling does not include how much of the graph was sampled.[discussion-NEU], [SUB-NEU]",discussion,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 3317,"Thus, the comparisons in Tables 2 and 3 are not fair. [comparisons-NEG, Tables-NEG], [EMP-NEG]",comparisons,Tables,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3318,"The most realistic graph generator is the BTER model. See http://www.sandia.gov/~tgkolda/bter_supplement/ and http://www.sandia.gov/~tgkolda/feastpack/doc_bter_match.html. A minor point: The acronym GTI is never defined.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3323,"(Pros) 1. The citations and related works cover fairly comprehensive and up-to-date literatures on domain adaptation and transfer learning.[citations-POS, related works-POS], [CMP-POS, SUB-POS]",citations,related works,,,,,CMP,SUB,,,,POS,POS,,,,,POS,POS,,, 3324,"2. 
Learning to output the k class membership probability and the loss in eqn 5 seems novel.[eqn-POS], [NOV-POS]",eqn,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3326,"For example, table 2 doesn't compare against two recent methods which report results exactly on the same dataset.[table-NEG], [CMP-NEG, SUB-NEG]",table,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 3327,"I checked the numbers in table 2 and the numbers aren't on par with the recent methods.[table-NEG], [CMP-NEG]",table,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3328,"1) Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks, Bousmalis et al. CVPR17, and 2) Learning Transferrable Representations for Unsupervised Domain Adaptation, Sener et al. NIPS16.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 3329,"Authors selectively cite and compare Sener et al. only in SVHN-MNIST experiment in sec 5.2.3 but not in the Office-31 experiments in sec 5.2.2.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 3330,"2. There are some typos in the related works section and the inferece procedure isn't clearly explained.[typos-NEG, related works section-NEG], [CLA-NEG]",typos,related works section,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 3331,"Perhaps the authors can clear this up in the text after sec 4.3.[typos-NEG, section-NEG], [CLA-NEU]",typos,section,,,,,CLA,,,,,NEG,NEG,,,,,NEU,,,, 3332,"(Assessment) Borderline. Refer to the Cons section above.[null], [REC-NEU]",null,,,,,,REC,,,,,,,,,,,NEU,,,, 3336,"The paper shows that the model achieves state of the art on SQuAD among published papers, and also quantitatively and visually demonstrates that having multiple layers of attention is helpful for context-context attention, while it is not so helpful for context-question attention.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3337,"Note: While I will mostly try to ignore recently archived, non-published papers when evaluating this paper, I would like to mention that the paper's ensemble model currently stands 11th on SQuAD leaderboard.[model-NEU], [IMP-NEU]",model,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3338,"Pros: - The model achieves SOTA on SQuAD among published papers.[model-POS], [IMP-POS]",model,,,,,,IMP,,,,,POS,,,,,,POS,,,, 3339,"- The sequential fusing (GRU-like) of the multiple layers of attention is interesting and novel.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 3340,"Visual analysis of the attention map is convincing.[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3341,"- The paper is overall well-written and clear.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3342,"Cons: - Using different embedding for computing attention weights and getting attended vector is not entirely novel but rather an expected practice for many memory-based models, and should cite relevant papers.[null], [NOV-NEU, EMP-NEU]",null,,,,,,NOV,EMP,,,,,,,,,,NEU,NEU,,, 3344,"uses different embedding for key (computing attention weight) and value (computing attended vector).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3345,"- While ablations for number of attention layers (1 or 2) were visually convincing, numerically there is a very small difference even for selfAtt.[ablations-NEU], [EMP-NEU]",ablations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3346,"For instance, in Table 4, having two layers of selfAtt (with two layers of question-passage) only increases max F1 by 0.34, where the standard deviation is 0.31 for the one layer.[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3347,"While this may be statistically significant, it is a very 
small gain nonetheless.[gain-NEU], [IMP-NEU]",gain,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3348,"- Given the above two cons, the main contribution of the paper is 1.1% improvement over previous state of the art.[contribution-NEU], [IMP-NEU]",contribution,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3350,"but I feel that it is not well-suited / sufficient for ICLR audience.[null], [APR-NEG]",null,,,,,,APR,,,,,,,,,,,NEG,,,, 3352,"Errors: - page 2 last para: gives an concrete -> gives a concrete - page 2 last para: matching -> matched Figure 1: I think passage embedding h and question embedding v boxes should be switched. - page 7 3.3 first para: evidence fully -> evidence to be fully.[Errors-NEG], [CLA-NEG]",Errors,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3361,"The paper has some interesting contributions and ideas, mainly from the point of view of applications, since the basic components (convnets, graph neural networks) are roughly similar to what is already proposed.[contributions-POS, ideas-POS], [EMP-POS, IMP-POS]",contributions,ideas,,,,,EMP,IMP,,,,POS,POS,,,,,POS,POS,,, 3362,"However, the novelty is hurt by the lack of clarity with respect to the model design.[novelty-NEG, clarity-NEG, model design-NEU], [CLA-NEG, NOV-NEG]",novelty,clarity,model design,,,,CLA,NOV,,,,NEG,NEG,NEU,,,,NEG,NEG,,, 3364,"If all nodes are connected to all nodes, what is the different of this model from a fully connected, multi-stream networks composed of S^2 branches?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3365,"To rephrase, what is the benefit of having a graph structure when all nodes are connected with all nodes.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3366,"Besides, what is the effect when having more and more support images?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3367,"Is the generalization hurt?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3368,"Second, it is not clear whether the label used as input in eq. (4) is a model choice or a model requirement.[eq-NEG], [EMP-NEG]",eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3369,"The reason is that the label already appears in the loss of the nodes in 5.1. Isn't using the label also as input redundant?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3370,"Third, the paper is rather vague or imprecise at points.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3371,"In eq. 
(1) many of the notations remain rather unclear until later in the text (and even then they are not entirely clear).[eq-NEU, notations-NEG], [EMP-NEG, PNF-NEG]",eq,notations,,,,,EMP,PNF,,,,NEU,NEG,,,,,NEG,NEG,,, 3372,"For instance, what is s, r, t.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 3373,"The experimental section is also ok, although not perfect.[experimental section-NEU], [EMP-NEU]",experimental section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3374,"The proposed method appears to have a modest improvement for few-shot learning.[proposed method-POS, improvement-POS], [EMP-POS]",proposed method,improvement,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3375,"However, in the case of active learning and semi-supervised learning the method is not compared to any baselines (other than the random one), which makes conclusions hard to reach.[method-NEU, baselines-NEU], [CMP-NEG]",method,baselines,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 3376,"In general, I tend to be in favor of accepting the paper if the authors have persuasive answers and provide the clarifications required.[paper-NEU], [REC-NEU]",paper,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 3379,"Pros: - PLAID masters several distinct tasks in sequence, building up ""skills"" by learning ""related"" tasks of increasing difficulty.[tasks-NEU], [EMP-POS]",tasks,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 3380,"- Although the main focus of this paper is on continual learning of ""related"" tasks, the authors acknowledge this limitation and convincingly argue for the chosen task domain.[paper-NEU, limitation-POS], [EMP-POS]",paper,limitation,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 3381,"Cons: - PLAID seems designed to work with task curricula, or sequences of deeply related tasks; for this regime, classical transfer learning approaches are known to work well (e.g finetunning), and it is not clear whether the method is applicable beyond this well understood case.[method-NEU], [EMP-NEG]",method,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3382,"- Are the experiments single runs?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3383,"Due to the high amount of variance in single RL experiments it is recommended to perform several re-runs and argue about mean behaviour.[experiments-NEU], [SUB-NEU, EMP-NEU]",experiments,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 3384,"Clarifications: - What is the zero-shot performance of policies learned on the first few tasks, when tested directly on subsequent tasks?[performance-NEU, tasks-NEU], [EMP-NEU]",performance,tasks,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3385,"- How were the network architecture and network size chosen, especially for the multitasker?[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3386,"Would policies generalize to later tasks better with larger, or smaller networks?[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3387,"- Was any kind of regularization used, how does it influence task performance vs. transfer?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3388,"- I find figure 1 (c) somewhat confusing.[figure-NEG], [PNF-NEG]",figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3389,"Is performance maintained only on the last 2 tasks, or all previously seen tasks?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3390,"That's what the figure suggests at first glance, but that's a different goal compared to the learning strategies described in figures 1 (a) and (b). 
[figures-NEU], [EMP-NEU]",figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3396,"I think that defining metrics for evaluating the degree of disentanglement in representations is a great problem to look at.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3397,"Overall, the metrics approached by the authors are reasonable, though the way the pseudo-distribution is defined in terms of normalized weight magnitudes seems a little ad hoc to me.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3398,"A second limitation of the work is the reliance on a true set of disentangled factors.[limitation-NEU], [IMP-NEU]",limitation,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3400,"Could the authors perhaps comment on how well these metrics would work in the semi-supervised case?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3401,"Overall, I would say this is somewhat borderline, but I could be convinced to argue for acceptance based on the other reviews and the author response.[acceptance-NEU], [REC-NEU]",acceptance,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 3402,"Minor Comments: - Tables 1 and 2 would be easier to unpack if the authors were to list the names of the variables (i.e. azimuth instead of z_0) or at least list what each variable is in the caption.[Tables-NEU], [PNF-NEU]",Tables,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3403,"- It is not entirely clear to me how the proposed metrics, whose definitions all reference magnitudes of weights, generalize to the case of random forests. [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3412,"The approach is evaluated on an in-house dataset and a public NIH dataset, demonstrating good performance, and illustrative visual rationales are also given for MNIST.[approach-POS, performance-POS], [EMP-POS]",approach,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3413,"The idea in the paper is, to my knowledge, novel, and represents a good step toward the important task of generating interpretable visual rationales.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3414,"There are a few limitations, e.g. the difficulty of evaluating the rationales, and the fact that the resolution is fixed to 128x128 (which means discarding many pixels collected via ionizing radiation), but these are readily acknowledged by the authors in the conclusion.[limitations-NEG], [EMP-NEG]",limitations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3415,"Comments: 1) There are a few details missing, like the batch sizes used for training (it is difficult to relate epochs to iterations without this).[details-NEG], [SUB-NEG]",details,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3416,"Also, the number of hidden units in the 2 layer MLP from para 5 in Sec 2.[para-NEU, Sec-NEU], [SUB-NEU]",para,Sec,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 3417,"2) It would be good to include PSNR/MSE figures for the reconstruction task (fig 2) to have an objective measure of error.[error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3418,"3) Sec 2 para 4: the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set -- perhaps you could be a little more precise here.[Sec-NEU], [SUB-NEG, CLA-NEU]",Sec,,,,,,SUB,CLA,,,,NEU,,,,,,NEG,NEU,,, 3419,"E.g. learning curves would be useful. 
4) Sec 2 para 5: paired with a BNP blood test that is correlated with heart failure I suspect many readers of ICLR, like myself, will not be well versed in this test, correlation with HF, diagnostic capacity, etc., so a little further explanation would be helpful here.[Sec-NEU, para-NEU], [SUB-NEG]",Sec,para,,,,,SUB,,,,,NEU,NEU,,,,,NEG,,,, 3420,"The term correlated is a bit too broad, and it is difficult for a non-expert to know exactly how correlated this is.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3421,"It is also a little confusing that you begin this paragraph saying that you are doing a classification task, but then it seems like a regression task which may be postprocessed to give a classification.[task-NEU], [EMP-NEG]",task,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3422,"Anyway, a clearer explanation would be helpful.[explanation-NEU], [CLA-NEU]",explanation,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3423,"Also, if this test is diagnostic, why use X-rays for diagnosis in the first place?[test-NEU], [EMP-NEU]",test,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3424,"5) I would have liked to have seen some indicative times on how long the optimization takes to generate a visual rationale, as this would have practical implications..[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3425,"6) Sec 2 para 7: L_target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level -- for predicted BNP level, are you treating this as a probability and using cross entropy here, or mean squared error?.[Sec-NEU, para-NEU], [EMP-NEU]",Sec,para,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3426,"7) As always, it would be illustrative if you could include some examples of failure cases, which would be helpful both in suggesting ways of improving the proposed technique, and in providing insight into where it may fail in practical situations.[examples-NEG, proposed technique-NEU], [SUB-NEG]",examples,proposed technique,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 3432,"However one of the problems of this paper is clarity.[clarity-NEG], [CLA-NEG]",clarity,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3433,"- The notion of cluster is still unclear and it took me long to understand it probably because it might be easily confused with other terminology, e.g., clustering.[terminology-NEG], [PNF-NEG]",terminology,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3434,"Also, cluster-to-cluster might not fit well.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3435,"- It is hard to map System-{ABCD} to the underlying proposed methods described in Table 2.[Table-NEG], [PNF-NEG]",Table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3436,"Also, I feel algorithm 1 is spurious given that it merely switch by systems.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3437,"Probably better to introduce branch for key methods, parallel sampling/ translation broadcasting and inadaptive or adaptive model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3441,"The proposed method seems effective, and the proposed DSAE metric is nice, though it's surprising if previous papers have not used metrics similar to normalized reduction in log-ppl[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3442,". 
The datasets considered in the experiments are also large, another plus.[datasets-POS, experiments-NEU], [SUB-POS]",datasets,experiments,,,,,SUB,,,,,POS,NEU,,,,,POS,,,, 3443,"However, overall, the paper is difficult to read and parse, especially since low-level details are weaved together with higher-level points throughout, and are often not motivated.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3445,""" These two sections are simply too anecdotal, although it is nice being stepped through the reasoning for the single example considered in Section 3.3.[Section-NEU], [EMP-POS]",Section,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 3446,"Some quantitative or aggregate results are needed, and it should at least be straightforward to do so using human evaluation for a subset of examples for diverse decoding.[aggregate results-NEU], [EMP-NEU]",aggregate results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3450,"Although, the observations are interesting, especially the one on MNIST where the network performs well even with correct labels slightly above chance, the overall contributions are incremental.[observations-POS, contributions-POS], [EMP-POS]",observations,contributions,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3452,"Agreed that the authors do a more detailed study on simple MNIST classification, but these insights are not transferable to more challenging domains.[study-POS, insights-NEG], [EMP-NEG]",study,insights,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 3453,"The main limitation of the paper is proposing a principled way to mitigate noise as done in Sukhbataar et.al. (2014), or an actionable trade-off between data acquisition and training schedules.[limitation-NEG], [EMP-NEG]",limitation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3454,"The authors contend that the way they deal with noise (keeping number of training samples constant) is different from previous setting which use label flips.[setting-NEU], [EMP-NEU]",setting,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3455,"However, the previous settings can be reinterpreted in the authors setting.[settings-NEU], [EMP-NEU]",settings,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3456,"I found the formulation of the alpha to be non-intuitive and confusing at times.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3459,"This can be improved to help readers understand better.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3460,"There are several unanswered questions as to how this observation transfers to a semi-supervised or unsupervised setting, and also devise architectures depending on the level of expected noise in the labels.[settings-NEG, architectures-NEU], [SUB-NEG]",settings,architectures,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 3461,"Overall, I feel the paper is not up to mark and suggest the authors devote using these insights in a more actionable setting.[paper-NEG, setting-NEU], [REC-NEG]",paper,setting,,,,,REC,,,,,NEG,NEU,,,,,NEG,,,, 3462,"Missing citation: Training Deep Neural Networks on Noisy Labels with Bootstrapping, Reed et al. 
[citation-NEG], [SUB-NEG]",citation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3467,"While the paper reports superior performance, the empirical claims are not well substantiated.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3468,"It is *not* true that given CBOW, it's not important to compare with SGNS and GloVe.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 3469,"In fact, in certain cases such as unsupervised word analogy, SGNS is clearly and vastly superior to other techniques (Stratos et al., 2015).[techniques-NEG], [CMP-NEG]",techniques,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3470,"The word similarity scores are also generally low: it's easy to achieve >0.76 on MEN using the plain PPMI matrix factorization on Wikipedia.[scores-NEG], [EMP-NEG]",scores,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3471,"So it's hard to tell if it's real improvement.[improvement-NEG], [EMP-NEG]",improvement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3473,"The proposed approach is simple and has an appealing compositional feature,[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3474,"but the work is not adequately validated and the novelty is somewhat limited.[work-NEG], [NOV-NEG]",work,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 3476,"Originality: Low-rank tensors have been used to derive features in many prior works in NLP (e.g., Lei et al., 2014).[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 3477,"The paper's particular application to learning word embeddings (PPMI factorization), however, is new although perhaps not particularly original.[application-NEG], [NOV-NEG]",application,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 3478,"The observation on multiplicative compositionality is the main strength of the paper.[observation-POS, paper-POS], [EMP-POS]",observation,paper,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3480,"For those interested in word embeddings, this work suggests an alternative training technique, but it has some issues (described above).[work-NEU, technique-NEG], [EMP-NEG]",work,technique,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 3484,"They conduct experiments on ImageNet-1k with variants of ResNets and multiple low precision regimes and compare performance with previous works [variants-NEU, previous works-NEU], [CMP-NEU]",variants,previous works,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 3485,"Pros: (+) The paper is well written, the schemes are well explained (+) Ablations are thorough and comparisons are fair [Ablations-POS, comparisons-POS, thorough-POS], [CLA-POS, CMP-POS, EMP-POS]",Ablations,comparisons,thorough,,,,CLA,CMP,EMP,,,POS,POS,POS,,,,POS,POS,POS,, 3486,"Cons: (-) The gap with full precision models is still large (-)[gap-NEG, models-NEG], [EMP-NEG]",gap,models,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3487,"Transferability of the learned low precision models to other tasks is not discussed [models-NEG], [SUB-NEG, EMP-NEG]",models,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 3488,"The authors tackle a very important problem, the one of learning low precision models without compromising performance.[problem-POS, performance-NEU], [NOV-POS]",problem,performance,,,,,NOV,,,,,POS,NEU,,,,,POS,,,, 3490,"One observation not discussed by the authors is that the performance of the student network under each low precision regime doesn't improve with deeper teacher networks (see Table 1, 2 & 3).[performance-NEG, observation-NEG], [SUB-NEG, EMP-NEG]",performance,observation,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3491,"As a matter of fact, under some scenarios performance even decreases.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 
3492,"The authors do not discuss the gains of their best low-precision regime in terms of computation and memory.[gains-NEG], [SUB-NEG]",gains,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3493,"Finally, the true applications for models with a low memory footprint are not necessarily related to image classification models (e.g. ImageNet-1k).[applications-NEU], [EMP-NEU]",applications,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3494,"How good are the low-precision models trained by the authors at transferring to other tasks?[models-NEU], [SUB-NEU, EMP-NEU]",models,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 3495,"Is it possible to transfer student-teacher training practices to other tasks?[null], [SUB-NEU]]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 3499,"The presentation of the paper is unnecessarily complex.[presentation-NEG], [PNF-NEG]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3500,"It seems that authors spend extra space creating problems and then solving them.[problems-NEG], [EMP-NEG]",problems,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3501,"Although some of the derivations in Section 3.2.2 are a bit involved, most of the derivations up to that point (which is already in page 6) follow preexisting literature.[Section-NEU, derivations-NEU], [NOV-NEU]",Section,derivations,,,,,NOV,,,,,NEU,NEU,,,,,NEU,,,, 3502,"For instance, eq. (3) proposes one model for p(F|X). Eq. (8) proposes a different model for p(F|X), which is an approximation to the previous one.[eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3503,"Instead, the second model could have been proposed directly, with the appropriate citation from the literature, since it isn't new.[citation-NEU], [CMP-NEU]",citation,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3504,"Eq. (13) is introduced as a solution to a non-existent problem, because the virtual observations are drawn from the same prior as the real ones, so it is not that we are coming up with a convenient GP prior that turns out to produce a computationally tractable solution, we are just using the prior on the observations consistently. In general, the authors seem to use approximately equal and equal interchangeably, which is incorrect.[Eq-NEU], [EMP-NEG]",Eq,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3505,"There should be a single definition for p(F|X).[definition-NEU], [EMP-NEU]",definition,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3506,"And there should be a single definition for L_pred.[definition-NEU], [EMP-NEU]",definition,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3507,"The expression for L_pred given in eq. (20) (exact) and eq. (41) (approximate) do not match and yet both are connected with an equality (or proportionality), which they shouldn't.[eq-NEG], [EMP-NEG]",eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3508,"Q(A) is sometimes taken to mean the true posterior (i.e., eq. (31)), sometimes a Gaussian approximation (i.e., eq (32) inside the integral), and both are used interchangeably.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3509,"- Incorrect references to the literature[literature-NEG], [CMP-NEG]",literature,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3510,"Page 3: using virtual observations (originally proposed by Quiñonero-Candela & Rasmussen (2005) for sparse approximations of GPs) The authors are citing as the origin of virtual observations a survey paper on the topic.[topic-NEU], [NOV-NEU]",topic,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 3516,"Can the authors guarantee that the variational bound that they are introducing (as defined in eqs. 
(19) and (41)) is actually a variational bound?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3517,"It seems to me that the approximations made to Q(A) to propagate the uncertainty are breaking the bounding guarantee.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3518,"If it is no longer a lower bound, what is the rationale behind maximizing it?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3522,"the authors manage to avoid the additional Q(A) approximation that breaks the variational bound.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3524,"and discuss if and why that additional central limit theorem application is necessary.[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3526,"The use of a non-parametric definition for the activation function should be contrasted with the use of a parametric one.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 3527,"With enough data, both might produce similar results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3528,"And the parameter sharing in the parametric one might actually be beneficial.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3529,"With no experiments at all showing the benefit of this proposal, this paper cannot be considered complete.[experiments-NEG, paper-NEG], [SUB-NEG, IMP-NEG]",experiments,paper,,,,,SUB,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3530,"- Minor errors: Eq. (4), for consistency, should use the identity matrix for the covariance matrix definition.[Eq-NEU], [EMP-NEU]",Eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3531,"Eq. (10) uses subscript d where it should be using subscript n Eq.[Eq-NEU], [PNF-NEG]",Eq,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 3532,"(17) includes p(X^L|F^L) in the definition of Q(...), but it shouldn't.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3533,"That was particularly misleading, since if we take eq. (17) to be correct (which I did at first), then p(X^L|F^L) cancels out and should not appear in eq. (20).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3534,"Eq. (23) uses Q(F|A) to mean the same as P(F|A) as far as I understand. Then why use Q?[Eq-NEU], [EMP-NEU]",Eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3545,"In this paper the authors give a nice review of clustering methods with deep learning and a systematic taxonomy for existing methods.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3546,"Finally, the authors propose a new method by using one unexplored combination of taxonomy features.[method-POS], [NOV-POS]",method,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3547,"The paper is well-written and easy to follow[paper-POS], [CLA-POS]. 
The proposed combination is straightforward,[null], [PNF-POS, EMP-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3548,"but lack of novelty.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 3549,"From table 1, it seems that the only differences between the proposed method and DEPICK is whether the method uses balanced assignment and pretraining.[proposed method-NEU, method-NEU], [CMP-NEG, PNF-NEG, EMP-NEG]",proposed method,method,,,,,CMP,PNF,EMP,,,NEU,NEU,,,,,NEG,NEG,NEG,, 3550,"I am not convinced that these changes will lead to a significant difference.[changes-NEG], [EMP-NEG]",changes,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3551,"The performance of the proposed method and DEPICK are also similar in table 1.[proposed method-NEG], [NOV-NEG, EMP-NEG]",proposed method,,,,,,NOV,EMP,,,,NEG,,,,,,NEG,NEG,,, 3552,"In addition, the experiments section is not comprehensive enough as well the author only tested on two datasets.[section-NEG, datasets-NEG], [SUB-NEG, EMP-NEG]",section,datasets,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3553,"More datasets should be tested for evaluation.[datasets-NEG], [SUB-NEG]",datasets,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3554,"In addition, It seems that nearly all the experiments results from comparison methods are borrowed from the original publications.[results-NEG, methods-NEG], [NOV-NEG, CMP-NEG, EMP-NEG]",results,methods,,,,,NOV,CMP,EMP,,,NEG,NEG,,,,,NEG,NEG,NEG,, 3555,"The authors should finish the experiments on comparison methods and fill the entries in Table 1. [experiments-NEG], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 3556,"In summary, the proposed method is lack of novelty compare to existing methods.[novelty-NEG, existing methods-NEG], [NOV-NEG, CMP-NEG, EMP-NEG]",novelty,existing methods,,,,,NOV,CMP,EMP,,,NEG,NEG,,,,,NEG,NEG,NEG,, 3558,"however extensive experiments should be conducted by running existing methods on different datasets and analyzing the pros and cons of the methods and their application scenarios. [experiments-NEG, datasets-NEG], [SUB-NEG, EMP-NEG]",experiments,datasets,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3559,"Therefore, I think the paper cannot be accepted at this stage. 
[paper-NEG], [REC-NEG]]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 3561,"The experimental results are similar to previously proposed methods.[experimental results-NEU], [EMP-NEU]",experimental results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3562,"The paper is fairly well-written, provides proofs of detailed properties of the algorithm, and has decent experimental results.[paper-POS, experimental results-POS], [CLA-POS, EMP-POS, SUB-POS]",paper,experimental results,,,,,CLA,EMP,SUB,,,POS,POS,,,,,POS,POS,POS,, 3563,"However, the method is not properly motivated.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3564,"As far as I can tell, the paper never answers the questions: Why do we need a guide actor?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3565,"What problem does the guide actor solve?[problem-NEG], [EMP-NEG]",problem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3566,"The paper argues that the guide actor allows to introduce second order methods, but (1) there are other ways of doing so and[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3567,"(2) it's not clear why we should want to use second-order methods in reinforcement learning in the first place.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3568,"Using second order methods is not an end in itself.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3569,"The experimental results show the authors have found a way to use second order methods without making performance *worse*.[experimental results-NEU, performance-NEU], [EMP-NEU]",experimental results,performance,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3570,"Given the high variability of deep RL, they have not convincingly shown it performs better.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3571,"The paper does not discuss the computational cost of the method.[method-NEU], [SUB-NEG]",method,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 3572,"How does it compare to other methods?[other methods-NEU], [CMP-NEG]",other methods,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 3573,"My worry is that the method is more complicated and slower than existing methods, without significantly improved performance.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3574,"I recommend the authors take the time to make a much stronger conceptual and empirical case for their algorithm. 
[algorithm-NEU], [EMP-NEU, REC-NEU]",algorithm,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEU,,, 3576,"This paper proposes a novel regularization scheme for Wasserstein GAN based on a relaxation of the constraints on the Lipschitz constant of 1.[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3579,"Numerical experiments suggests that the proposed regularization leads to better posed optimization problem and even a slight advantage in terms of inception score on the CIFAR-10 dataset.[experiments-POS, proposed-POS, score-POS], [EMP-POS]",experiments,proposed,score,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 3580,"The paper is interesting and well written, the proposed regularization makes sens since it is basically a relaxation of the constraints and the numerical experiments also suggest it's a good idea.[paper-POS, proposed-POS, numerical experiments-POS], [CLA-POS, EMP-POS]",paper,proposed,numerical experiments,,,,CLA,EMP,,,,POS,POS,POS,,,,POS,POS,,, 3581,"Still as discussed below the justification do not address a lots of interesting developments and implications of the method and should better discuss the relation with regularized optimal transport.[justification-NEG, method-NEG], [SUB-NEG]",justification,method,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3582,"Discussion: + The paper spends a lot of time justifying the proposed method by discussing the limits of the Improved training of Wasserstein GAN from Gulrajani et al. (2017).[proposed method-NEU], [CMP-NEU]",proposed method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3583,"The two limits (sampling from marginals instead of optimal coupling and differentiability of the critic) are interesting and indeed suggest that one can do better but the examples and observations are well known in OT and do not require proof in appendix.[examples-NEU, observations-NEU, limits-POS], [EMP-POS]",examples,observations,limits,,,,EMP,,,,,NEU,NEU,POS,,,,POS,,,, 3584,"The reviewer believes that this space could be better spend discussing the theoretical implication of the proposed regularization (see next).[proposed regularization-NEU], [NOV-NEU]",proposed regularization,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 3585,"+ The proposed approach is a relaxation of the constraints on the dual variable for the OT problem.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3586,"As a matter of fact we can clearly recognize a squared hinge loss is the proposed loss.[proposed loss-POS], [EMP-POS]",proposed loss,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3587,"This approach (relaxing a strong constraint) has been used for years when learning support vector machines and ranking and a small discussion or at least reference to those venerable methods would position the paper on a bigger picture.[approach-POS, discussion-NEU, reference-NEU], [IMP-POS, EMP-POS]",approach,discussion,reference,,,,IMP,EMP,,,,POS,NEU,NEU,,,,POS,POS,,, 3588,"+ The paper is rather vague on the reason to go from Eq. (6) to Eq. (7). 
(gradient approximation between samples to gradient on samples).[paper-NEU, reason-NEG], [EMP-NEG]",paper,reason,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 3591,"recent NN toolbox can easily compute the exact gradient and use it for the penalization but this is not clearly discussed even in appendix.[appendix-NEG], [SUB-NEG]",appendix,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3592,"Numerical experiments comparing the two implementation or at least a discussion is necessary.[experiments-NEU, discussion-NEU], [SUB-NEU]",experiments,discussion,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 3594,"for a long list of regularizations) and more precisely to the euclidean regularization.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3595,"I understand that GANS (and Wasserstein GAN) is a relatively young community and that references list can be short but their is a large number of papers discussing regularized optimal transport and how the resulting problems are easier to solve.[references list-NEU, papers-NEU], [CMP-NEU]",references list,papers,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 3596,"A discussion of the links is necessary and will clearly bring more theoretical ground to the method.[links-NEU], [CMP-NEU]",links,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3597,"Note that a square euclidean regularization leads to a regularization term in the dual of the form max(0,f(x)+f(y)-|x-y|)^2 that is very similar to the proposed regularization.[proposed regularization-NEU], [CMP-NEU]",proposed regularization,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3599,"+ The numerical experiments are encouraging but a bit short.[numerical experiments-POS], [EMP-POS, SUB-NEG]",numerical experiments,,,,,,EMP,SUB,,,,POS,,,,,,POS,NEG,,, 3600,"The 2D example seem to work very well and the convergence curves are far better with the proposed regularization.[example-POS, proposed regularization-POS], [EMP-POS]",example,proposed regularization,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3601,"But the real data CIFAR experiments are much less detailed with only a final inception score (very similar to the competing method) and no images even in appendix.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3602,"The authors should also define (maybe in appendix) the conditional and unconditional inception scores and why they are important (and why only some of them are computed in Table 1).[scores-NEG, Table-NEG], [SUB-NEG]",scores,Table,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3604,"The comparison of the dual critic to the true Wasserstein distance is very interesting.[comparison-POS], [CMP-POS]",comparison,,,,,,CMP,,,,,POS,,,,,,POS,,,, 3605,"It would be nice to see the behavior for different values of lambda.[behavior-NEG], [SUB-NEG]",behavior,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3608,"Review update after reply: The authors have responded to most of my concerns and I think the paper is much stronger now and discuss the relation with regularized OT. I change the rating to Accept. 
001 101[concerns-POS, paper-POS, rating-POS], [REC-POS]]",concerns,paper,rating,,,,REC,,,,,POS,POS,POS,,,,POS,,,, 3612,"Overall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays.[paper-POS, contribution-POS, analysis-POS, task-POS], [NOV-POS, EMP-POS, IMP-POS]",paper,contribution,analysis,task,,,NOV,EMP,IMP,,,POS,POS,POS,POS,,,POS,POS,POS,, 3613,"The analysis is very thorough and the methods described may find use in analyzing other tasks.[analysis-POS, methods-POS], [SUB-POS]",analysis,methods,,,,,SUB,,,,,POS,POS,,,,,POS,,,, 3614,"In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc) which try to learn reasoning tasks or algorithms.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 3615,"It is well-known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3618,"With that being said, there is some work that needs to be done to make the paper clearer.[paper-NEU], [CLA-NEU]",paper,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3619,"In particular, many parts are quite technical and may not be accessible to a broader machine learning audience.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 3620,"It would be good if the authors spent more time developing intuition (through visualization for example) and move some of the more technical proofs to the appendix.[proofs-NEU, appendix-NEU], [PNF-NEU]",proofs,appendix,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 3621,"Specifically: - I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3622,"- Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere, it could be in the Appendix.[page-NEU, Appendix-NEU], [PNF-NEU]",page,Appendix,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 3623,"- Section 6: there is lots of math here, but the main results don't obviously stand out.[Section-NEU], [PNF-NEU]",Section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3624,"I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are.[equations-NEU, main results-NEU], [PNF-NEU]",equations,main results,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 3626,"Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful.[Equations-NEU], [EMP-NEU]",Equations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3627,"Also, although I found their work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading.[work-NEU], [PNF-NEU]",work,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3630,"Here the assumptions of locality and stationarity underlying CNNs are sensible and I don't think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary.[paragraph-NEG, Section-NEG], [EMP-NEG]",paragraph,Section,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3631,"However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural 
Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks).[work-NEU], [CMP-NEU]",work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3632,"There are lots of small typos, please fix them.[typos-NEG], [PNF-NEG]",typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3633,"Here are a few: - For L 16, batch size of 20, ...: not a complete sentence. [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3634,"- Right before 6.1.1: when the these such -> when such[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3635,"- Top of page 8: it also have a -> it also has a, when encountering larger dataset[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3636,"-> ...datasets - First sentence of 6.2: we turn to the discuss a second -> [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3637,"we turn to the discussion of a second - etc.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 3638,"Quality: High Clarity: medium-low Originality: high[Clarity-NEU, Originality-POS], [CLA-NEU, NOV-POS]",Clarity,Originality,,,,,CLA,NOV,,,,NEU,POS,,,,,NEU,POS,,, 3642,"https://arxiv.org/pdf/1707.03497.pdf [Significance-NEU, References-NEU], [IMP-NEU, CMP-NEU]",Significance,References,,,,,IMP,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 3644,"pros: This is a great paper - I enjoyed reading it.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3645,"The authors lay down a general method for addressing various transfer learning problems: transferring across domains and tasks and in a unsupervised fashion.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3646,"The paper is clearly written and easy to understand[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3647,". Even though the method combines the previous general learning frameworks, the proposed algorithm for LEARNABLE CLUSTERING OBJECTIVE (LCO) is novel, and fits very well in this framework.[proposed algorithm-POS], [NOV-POS]",proposed algorithm,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3648,"Experimental evaluation is performed on several benchmark datasets - the proposed approach outperforms state-of-the-art for specific tasks in most cases.[Experimental evaluation-POS, benchmark datasets-POS], [EMP-POS]",Experimental evaluation,benchmark datasets,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3649,"cons/suggestions: - the authors should discuss in more detail the limitations of their approach: it is clear that when there is a high discrepancy between source and target domains, that the similarity prediction network can fail.[limitations-NEU], [EMP-NEU]",limitations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3654,"The proposed methods and experiments are not understandable in the current way the paper is written: there is not a single equation, pseudo-code algorithm or pointer to real code to enable the reader to get a detailed understanding of the process.[proposed methods-NEG, experiments-NEG, equation-NEG], [EMP-NEG]",proposed methods,experiments,equation,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 3655,"All we have a besides text is a small figure (figure 1).[text-NEU, figure-NEU], [PNF-NEU]",text,figure,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 3656,"Then we have to trust the authors that on their modified dataset, the accuracies of the proposed method is around 100% while not using this method yields 0% accuracies?[accuracies-NEU], [EMP-NEU]",accuracies,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3657,"The initial description (section 2) leaves way too many unanswered questions: - What embeddings are used for words detected as NE?[description-NEU], 
[EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3661,"(a) To retrieve the key (a vector) given the value (a string) as the encoder input.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3662,"(b) To find the value that best matches a key at the decoder stage?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3663,"- Exact description of the column attention mechanism: some similarity between a key embedding and embeddings representing each column? Multiplicative? Additive?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3665,"Do we need to give the name of the column the Attention-Column-Query attention should focus on?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3666,"Because of this unknown, I could not understand the experiment setup and data formatting![experiment setup-NEG], [EMP-NEG]",experiment setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3667,"The list goes on... For such a complex architecture, the authors must try to analyze separate modules as much as possible.[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3668,"As neither the QA and the Babi tasks use the RNN dialog manager, while not start with something that only works at the sentence level.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3669,"The Q&A task could be used to describe a simpler system with only a decoder accessing the DB table.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3670,"Complexity for solving the Babi tasks could be added later.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3672,"This paper proposes to jointly learning a semantic objective and inducing a binary tree structure for word composition, which is similar to (Yogatama et al, 2017).[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3673,"Differently from (Yogatama et al, 2017), this paper doesn't use reinforcement learning to induce a hard structure, but adopts a chart parser manner and basically learns all the possible binary parse trees in a soft way.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3674,"Overall, I think it is really an interesting direction and the proposed method sounds reasonable.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3675,"However, I am concerned about the following points: - The improvements are really limited on both the SNLI and the Reverse Dictionary tasks.[tasks-NEG], [EMP-NEG]",tasks,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3676,"(Yogatama et al, 2017) demonstrate results on 5 tasks and I think it'd be helpful to present results on a diverse set of tasks and see if conclusions can generally hold.[results-NEU, conclusions-NEG], [EMP-NEG]",results,conclusions,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 3677,"Also, it would be much better to have a direct comparison to (Yogatama et al, 2017), including the performance and also the induced tree structures.[direct comparison-NEG, performance-NEG], [SUB-NEG, CMP-NEG]",direct comparison,performance,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3678,"- The computational complexity of this model shouldn't be neglected.[computational complexity-NEG], [EMP-NEG]",computational complexity,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3680,"This should be at least discussed in the paper.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3681,"And I am not also sure how hard this model is being converged in all experiments (compared to LSTM or supervised tree-LSTM).[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3682,"n - I am wondering about the effects of the temperature parameter t. 
Is that important for training?[parameter-NEG], [CLA-NEG]",parameter,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3683,"Minor: - What is the difference between LSTM and left-branching LSTM?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 3684,"- I am not sure if the attention overt chart is a highlight of the paper or not.[chart-NEG, highlight-NEG], [PNF-NEU]",chart,highlight,,,,,PNF,,,,,NEG,NEG,,,,,NEU,,,, 3685,"If so, better move that part to the models section instead of mention it briefly in the experiments section.[section-NEU], [PNF-NEU]",section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3686,"Also, if any visualization (over the chart) can be provided, that'd be helpful to understand what is going on.[visualization-NEG], [SUB-NEG]]",visualization,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3688,"The proposed framework, task graph solver (NTS), consists of many approximation steps and representations: CNN to capture environment states, task graph parameterization, logical operator approximation; the idea of reward-propagation policy helps pre-training.[proposed framework-POS], [EMP-POS]",proposed framework,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3689,"The framework is evaluated on a relevant multi-task problem.[framework-POS], [EMP-POS]",framework,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3690,"In general, the paper proposes an idea to tackle an interesting problem.[paper-POS, problem-POS], [EMP-POS]",paper,problem,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3691,"It is well written, the idea is well articulated and presented.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3692,"The idea to represent task graphs are quite interesting.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3693,"However it looks like the task graph itself is still simple and has limited representation power.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3694,"Specifically, it poses just little constraints and presents no stochasticity (options result in stochastic outcomes).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3696,"The task itself is not too complex which involves 10 objects, and a small set of deterministic options[task-NEU], [EMP-NEU]. It might be only complex when the number of dependency layer is large.[null], [EMP-NEU]",task,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3697,"However, it's still more convinced if the paper method is demonstrated in more domains.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3699,"- How the MDP M and options are defined, e.g. transition functions, are tochastic?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3700,"- What is the objective of the problem in section 3[objective-NEG, problem-NEU, Section-NEG], [CLA-NEG]",objective,problem,Section,,,,CLA,,,,,NEG,NEU,NEG,,,,NEG,,,, 3701,"Related work: many related work in robotics community on the topic of task and motion planning (checkout papers in RSS, ICRA, IJRR, etc.) should also be discussed.[related work-NEG], [CMP-NEG]",related work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3705,"The proposed method is then tested on two image data sets. 
[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3706,"The claimed main contribution of the paper is the taxonomy.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3707,"There are no new things in such kind of reviews.[reviews-NEG], [EMP-NEG, NOV-NEG]",reviews,,,,,,EMP,NOV,,,,NEG,,,,,,NEG,NEG,,, 3708,"The taxonomy gives no scientific axioms.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3709,"Therefore the impact or actual contribution to the ICLR community is very limited. [contribution-NEG], [APR-NEG, IMP-NEG]",contribution,,,,,,APR,IMP,,,,NEG,,,,,,NEG,NEG,,, 3710,"The proposed clustering method is problematic.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3711,"It is hard to set the paramter alpha.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3712,"The experimental results are also disappointing.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3713,"For example, the COIL20 accuracy is only 0.762, much worse than the state of the art.[accuracy-NEG], [EMP-NEG]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3714,"Moreover, results on only two image data sets are not sufficient for convincing.[results-NEG, data sets-NEG], [SUB-NEG]",results,data sets,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 3717,"The tackled problem is a hard task in computational biology, and the proposed solution Kittyhawk, although designed with very standard ingredients (several layers of CNN inspired to the VGG structure), seems to be very effective on both the shown datasets.[problem-NEG], [EMP-POS]",problem,,,,,,EMP,,,,,NEG,,,,,,POS,,,, 3718,"The paper is well written (up to a few misprints), the introduction and the biological background very accurate (although a bit technical for the broader audience) and the bibliography reasonably complete.[paper-POS, introduction-POS], [CLA-POS]",paper,introduction,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 3719,"Maybe the manuscript part with the definition of the accuracy measures may be skipped.[manuscript-NEU], [PNF-NEU]",manuscript,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 3721,"I would only suggest to expand the experimental section with further (real) examples to strengthen the claim.[experimental section-NEU], [SUB-NEU]",experimental section,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 3722,"Overall, I rate this manuscript in the top 50% of the accepted papers.[manuscript-POS], [REC-POS]",manuscript,,,,,,REC,,,,,POS,,,,,,POS,,,, 3730,"Originality - I find the paper to be very incremental in terms of originality of the method.[method-POS], [NOV-POS]",method,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3731,"Quality and Significance - Due to small size of the cohort and lack of additional dataset, it is difficult to reliably access quality of experiments.[experiments-NEU], [EMP-NEG, SUB-NEG]",experiments,,,,,,EMP,SUB,,,,NEU,,,,,,NEG,NEG,,, 3732,"Given that results are reported via cross-validation and without a true held-out dataset, and given that a number of hyperparameters are not even tuned, it is difficult to be confident that the differences of all the methods reported are significant.[results-NEG], [IMP-NEG, EMP-NEG]",results,,,,,,IMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 3733,"Clarity - The writing has good clarity.[writing-POS], [CLA-POS]",writing,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3734,"Major issues with the paper: - Lack of reliable experiment section.[experiment section-NEG], [SUB-NEG, EMP-NEG]",experiment section,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 3735,"Dataset is too small (2000 total samples), and model training is not 
described with enough details in terms of hyper-parameters tuned. [Dataset-NEG], [SUB-NEG]",Dataset,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3739,"Maybe I misunderstood something, but one big problem I have with the paper is that for a ""causalGAN"" approach it doesn't seem to do much causality.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3740,"The (known) causal graph is only used to model the dependencies of the labels, which the authors call the ""Causal Controller"".[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3741,"On this graph, one can perform interventions and get a different distribution of labels from the original causal graph (e.g. a distribution of labels in which women have the same probability as men of having moustaches).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3742,"Given the labels, the rest of the architecture are extensions of conditional GANs, a causalGAN with a Labeller and an Anti-Labeller (of which I'm not completely sure I understand the necessity) and an extension of a BEGAN.[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3743,"The results are not particularly impressive, but that is not an issue for me.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3744,"Moreover sometimes the descriptions are a bit imprecise and unstructured.[descriptions-NEG], [PNF-NEG]",descriptions,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3745,"For example, Theorem 1 is more like a list of desiderata and it already contains a forward reference to page 7.[Theorem-NEG, page-NEU], [PNF-NEG]",Theorem,page,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 3746,"The definition of intervention in the Background applies only to do-interventions (Pearl 2009) and not to general interventions (e.g. consider soft, uncertain or fat-hand interventions).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3747,"Overall, I think the paper proposes some interesting ideas,[paper-POS, ideas-POS], [EMP-POS]",paper,ideas,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3748,"but it doesn't explore them yet in detail.[detail-NEG], [SUB-NEG]",detail,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3750,"Moreover, I would be very curious about ways to better integrate causality and generative models, that don't focus only on the label space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3751,"Minor details: Personally I'm not a big fan of abusing colons ("":"") instead of points (""."").[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 3753,"EDIT: I read the author's rebuttal, but it has not completely addressed my concerns, so my rating has not changed.[concerns-NEG], [REC-NEG]",concerns,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 3757,"To this reviewer's understanding, the proposed method can be regarded as the extension of the previous work of LAB and TWN, which can be the main contribution of the work.[proposed method-NEU, main contribution-NEU], [IMP-NEU, EMP-NEU]",proposed method,main contribution,,,,,IMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 3758,"While the proposed method achieved promising results compared to the competing methods, it is still necessary to compare their computational complexity, which is one of the main concerns in network compression.[proposed method-POS, results-POS], [CMP-POS, EMP-NEU]",proposed method,results,,,,,CMP,EMP,,,,POS,POS,,,,,POS,NEU,,, 3759,"It would be appreciated to have discussion on the results in Table 2, which tells that the performance of quantized networks is better than the full-precision network.[discussion-NEU, results-NEU, Table-NEU, performance-NEU], 
[SUB-NEU]",discussion,results,Table,performance,,,SUB,,,,,NEU,NEU,NEU,NEU,,,NEU,,,, 3763,"They evaluate the suggested model on synthetic data and outperform the current state of the art in terms of accuracy.[model-NEU, accuracy-NEU], [EMP-NEU, CMP-NEU]",model,accuracy,,,,,EMP,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 3764,"pros - the paper is written in a clear and concise manner[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 3765,"- it suggests an interesting connection between a traditional model and Deep Learning techniques[null], [CMP-POS]",null,,,,,,CMP,,,,,,,,,,,POS,,,, 3767,"cons - please provide the value of the diffusion coefficient for the sake of reproducibility[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 3768,"- medium resolution of the resulting prediction[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3769,"I enjoyed reading this paper and would like it to be accepted.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 3770,"minor comments: - on page five in the last paragraph there is a left parenthesis missing in the inline formula nabla dot w_t(x))^2.[page-NEG, paragraph-NEG], [PNF-NEG]",page,paragraph,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 3774,"- in the introduction (page two) the authors refer to SST prediction as a 'relatively complex physical modeling problem', whereas in the conclusion (page ten) it is referred to as 'a problem of intermediate complexity'. This seems to be inconsistent.[introduction-NEG, conclusion-NEG], [CLA-NEG]",introduction,conclusion,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 3776,"After reading the rebuttal: This paper does have encouraging results.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3777,"But as mentioned earlier, it still lacks systematic comparisons with existing (and strongest) baselines, and perhaps a better understanding the differences between approaches and the pros and cons.[comparisons-NEG, differences-NEG], [CMP-NEG]",comparisons,differences,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 3778,"The writing also needs to be improved.[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3779,"So I think the paper is not ready for publication and my opinion remains.[paper-NEG, publication-NEG], [REC-NEG]",paper,publication,,,,,REC,,,,,NEG,NEG,,,,,NEG,,,, 3782,"I think the idea of representation learning using a somewhat artificial task makes sense in this setting. I have several concerns for this submission.[idea-NEU, concerns-NEG], [EMP-NEG]",idea,concerns,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 3784,"I think a very related approach that learns the representation using pretty much the same information is the contrastive loss: -- Hermann and Blunsom.[related approach-NEU], [CMP-NEU]",related approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3786,"The intuition is similar: similar pairs shall have higher similarity in the learned representation, than dissimilar pairs, by a large margin.[similar-NEU], [CMP-NEU]",similar,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3787,"This approach is useful even when there is only weak supervision to provide the similarity/dissimilarity information.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3788,"I wonder how does this approach compare with the proposed method.[proposed method-NEU], [CMP-NEU]",proposed method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3789,"2. 
The experiments are conducted on a small dataset OMNIGLOT and TIMIT.[dataset-NEU], [SUB-NEU, EMP-NEU]",dataset,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 3790,"I do not understand why the compared methods are not consistently used in both experiments.[compared methods-NEG], [EMP-NEG]",compared methods,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3791,"Also, the experiment of speaker classification on TIMIT (where the inputs are audio segments with different durations and sampling frequency) is a quite nonstandard task; I do not have a sense of how challenging it is.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3792,"It is not clear why CNN transfer learning (the authors did not give details about how it works) performs even worse than the non-deep baseline, yet the proposed method achieves very high accuracy.[baseline-NEU, proposed method-NEU, accuracy-POS], [EMP-POS, CLA-NEG]",baseline,proposed method,accuracy,,,,EMP,CLA,,,,NEU,NEU,POS,,,,POS,NEG,,, 3793,"It would be nice to understand/visualize what information have been extracted in the representation learning phase.[information-NEG], [SUB-NEU]",information,,,,,,SUB,,,,,NEG,,,,,,NEU,,,, 3794,"3. Relatively minor: The writing of this paper is readable, but could be improved.[paper-NEU], [CLA-NEU]",paper,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3795,"It sometimes uses vague/nonstandard terminology (parameterless) and statement.[terminology-NEG, statement-NEG], [CLA-NEG]",terminology,statement,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 3796,"The term siamese kernel is not very informative: yes, you are learning new representations of data using DNNs, but this feature mapping does not have the properties of RKHS; also you are not solving the SVM dual problem as one typically does for kernel SVMs.[term-NEG, problem-NEG], [EMP-NEG]",term,problem,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3797,"In my opinion the introduction of SVM can be shortened, and more focuses can be put on related deep learning methods and few shot learning.[introduction-NEU, methods-NEU], [SUB-NEU]]",introduction,methods,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 3800,"Some theoretical guarantees for the efficiency of reservoir sampling are provided.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3802,"The comparisons between this episodic approach and recurrent neural net with basic GRU memory show the advantage of proposed algorithm.[comparisons-POS], [CMP-POS]",comparisons,,,,,,CMP,,,,,POS,,,,,,POS,,,, 3803,"The paper is well written and easy to understand.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3804,"Typos didn't influence reading.[Typos-NEU], [CLA-NEU]",Typos,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 3805,"It is a novel setup to consider reservoir sampling for episodic memory.[setup-POS], [NOV-POS]",setup,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3807,"Physical meanings of Theorem 1 are not well represented.[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3808,"What are the theoretical advantages of using reservoir sampling?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3810,"The proposed architecture is only compared with a recurrent baseline with 10-unit GRU network.[proposed architecture-NEU], [SUB-NEU, CMP-NEU]",proposed architecture,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 3811,"It is not clear the better performance comes from reservoir sampling or other differences.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3812,"Moreover, the hyperparameters are not optimized on different architectures. 
It is hard to justify the empirically better performance without hyperparameter tuning.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3813,"The authors mentioned that the experiments are done on a toy problem, only three repeats for each experiment. The technical soundness of this work is weakened by the experiments.[experiments-NEG], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 3816,"The paper is very well written and quite clear.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3817,"It does a good job of contrasting parameter space noise to action space noise and evolutionary strategies.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3818,"However, the results are weak.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3819,"Parameter noise does better in some Atari + Mujoco domains, but shows little difference in most domains.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3820,"The domains where parameter noise (as well as evolutionary strategies) does really well are Enduro and the Chain environment, in which a policy that repeatedly chooses a particular action will do very well.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3823,"Similarly for the continuous control with sparse rewards environments – if you can construct an environment with sparse enough reward that action-space noise results in zero rewards, then clearly parameter space noise will have a better shot at learning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3824,"However, for complex domains with sparse reward (e.g. Montezuma's Revenge) parameter space noise is just not going to get you very far.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3825,"Overall, I think parameter space noise is a worthy technique to have analyzed and this paper does a good job doing just that.[paper-POS], [EMP-POS, IMP-POS]",paper,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 3826,"However, I don't expect this technique to make a large splash in the Deep RL community, mainly because simply adding noise to the parameter space doesn't really gain you much more than policies that are biased towards particular actions.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 3827,"Parameter noise is not a very smart form of exploration, but it should be acknowledged as a valid alternative to action-space noise.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3834,"Strengths - The proposed model begins with reasonable motivation and shows its effectiveness in experiments clearly.[proposed model-POS, motivation-POS, experiments-POS], [EMP-POS]",proposed model,motivation,experiments,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 3835,"- The architecture of the proposed model looks natural and all components seem to have clear contribution to the model.[architecture-POS], [EMP-POS]",architecture,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3836,"- The proposed model can be easily applied to any VQA model using soft attention.[proposed model-POS], [EMP-POS]",proposed model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3837,"- The paper is well written and the contribution is clear.[paper-POS, contribution-POS], [CLA-POS]",paper,contribution,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 3838,"Weaknesses - Although the proposed model is helpful to model counting information in VQA, it fails to show improvement with respect to a couple of important baselines: prediction from image representation only and from the combination of image representation and attention weights.[proposed model-NEG, improvement-NEG, baselines-NEU], 
[EMP-NEG]",proposed model,improvement,baselines,,,,EMP,,,,,NEG,NEG,NEU,,,,NEG,,,, 3839,"- Qualitative examples of intermediate values in counting component--adjacency matrix (A), distance matrix (D) and count matrix (C)--need to be presented to show the contribution of each part, especially in the real examples that are not compatible with the strong assumptions in modeling counting component.[assumptions-NEG], [SUB-NEG]",assumptions,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3840,"Comments - It is not clear if the value of count c is same with the final answer in counting questions. [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3843,"The paper is rigorous and ideas are clearly stated.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3844,"The idea to constraint the dimension reduction to fit a certain model, here a GMM, is relevant, and the paper provides a thorough comparison with recent state-of-the-art methods.[idea-POS, comparison-POS], [CMP-POS, EMP-POS]",idea,comparison,,,,,CMP,EMP,,,,POS,POS,,,,,POS,POS,,, 3845,"My main concern is that the method is called unsupervised, but it uses the class information in the training, and also evaluation.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3846,"I'm also not convinced of how well the Gaussian model fits the low-dimensional representation and how well can a neural network compute the GMM mixture memberships.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3847,"1. The framework uses the class information, i.e., ""only data samples from the normal class are used for training"", but it is still considered unsupervised.[framework-NEU], [EMP-NEG]",framework,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3848,"Also, the anomaly detection in the evaluation step is based on a threshold which depends on the percentage of known anomalies, i.e., a priori information.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3849,"I would like to see a plot of the sample energy as a function of the number of data points.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3850,"Is there an elbow that indicates the threshold cut?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3851,"Better yet it would be to use methods like Local Outlier Factor (LOF) (Breunig et al., 2000 u2013 LOF:Identifying Density-based local outliers) to detect the outliers (these methods also have parameters to tune, sure, but using the known percentage of anomalies to find the threshold is not relevant in a purely unsupervised context when we don't know how many anomalies are in the data).[null], [CMP-NEU, EMP-NEU]",null,,,,,,CMP,EMP,,,,,,,,,,NEU,NEU,,, 3852,"2. Is there a theoretical justification for computing the mixture memberships for the GMM using a neural network?[theoretical justification-NEU], [EMP-NEU]",theoretical justification,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3853,"3. How do the regularization parameters lambda_1 and lambda_2 influence the results?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3854,"4. The idea to jointly optimize the dimension reduction and the clustering steps was used before neural nets (e.g., Yang et al., 2014 - Unsupervised dimensionality reduction for Gaussian mixture model).[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3855,"Those approaches should at least be discussed in the related work, if not compared against.[approaches-NEG], [SUB-NEG, CMP-NEG]",approaches,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 3856,"5. 
The authors state that estimating the mixture memberships with a neural network for GMM in the estimation network instead of the standard EM algorithm works better.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3857,"Could you provide a comparison with EM?[comparison-NEU], [CMP-NEU]",comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 3858,"6. In the newly constructed space that consists of both the extracted features and the representation error, is a Gaussian model truly relevant? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3859,"Does it well describe the new space?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3860,"Do you normalize the features (the output of the dimension reduction and the representation error are quite different)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3861,"Fig. 3a doesn't seem to show that the output is a clear mixture of Gaussians.[Fig-NEU], [EMP-NEG]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3862,"7. The setup of the KDDCup seems a little bit weird, where the normal samples and anomalies are reversed (because of percentage), where the model is trained only on anomalies, and it detects normal samples as anomalies[setup-NEG], [EMP-NEG]",setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3863,"... I'm not convinced that it is the best example, especially that is it the one having significantly better results, i.e. scores ~ 0.9 vs. scores ~0.4/0.5 score for the other datasets.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3865,"..."" - it is not clear to me, it does look better than the other ones, but not clear.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 3866,"If there is a clear separation from a different view, show that one instead.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 3867,"We don't need the same view for all methods.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3868,"9. In the experiments the reduced dimension used is equal to 1 for two of the experiments and 2 for one of them. This seems very drastic![experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3869,"Minor comments: 1. Fig.1: what dimension reduction did you use? Add axis labels. 2.[Fig-NEU], [EMP-NEU]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3870,"""DAGMM preserves the key information of an input sample"" - what does key information mean?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3871,"3. In Fig. 3 when plotting the results for KDDCup, I would have liked to see results for the best 4 methods from Table 1, OC-SVM performs better than PAE.[Fig-NEU, results-NEU], [SUB-NEU]",Fig,results,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 3872,"Also DSEBM-e and DSEBM-r seems to perform very well when looking at the three measures combined.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3873,"They are the best in terms of precision.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 3874,"4. Is the error in Table 2 averaged over multiple runs? 
If yes, how many?[error-NEU, Table-NEU], [EMP-NEU]",error,Table,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3875,"Quality – The paper is thoroughly written, and the ideas are clearly presented.[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 3876,"It can be further improved as mentioned in the comments.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 3877,"Clarity – The paper is very well written with clear statements, a pleasure to read.[paper-POS, statements-POS], [CLA-POS, PNF-POS]",paper,statements,,,,,CLA,PNF,,,,POS,POS,,,,,POS,POS,,, 3878,"Originality – Fairly original, but it still needs some work to justify it better.[original-NEU], [NOV-NEU, EMP-NEU]",original,,,,,,NOV,EMP,,,,NEU,,,,,,NEU,NEU,,, 3879,"Significance – Constraining the dimension reduction to fit a certain model is a relevant topic, but I'm not convinced of how well the Gaussian model fits the low-dimensional representation and how well can a neural network compute the GMM mixture memberships. [Significance-NEU], [IMP-NEU]",Significance,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 3882,"Comments: 1. Using expectation to explain why DReLU works well is not sufficient and convincing.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3883,"Although DReLU's expectation is smaller than expectation of ReLU, but it doesn't explain why DReLU is better than very leaky ReLU, ELU etc.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3884,"2. CIFAR-10/100 is a saturated dataset and it is not convincing DReLU will perform well on complex tasks, such as ImageNet, object detection, etc.[dataset-NEG], [EMP-NEG]",dataset,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3885,"3. In all experiments, ELU/LReLU are worse than ReLU, which is suspicious.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3887,"Overall, I don't think this paper meet ICLR's novelty standard, although the authors present some good numbers, but they are not convincing. [paper-NEG], [APR-NEG, NOV-NEG]",paper,,,,,,APR,NOV,,,,NEG,,,,,,NEG,NEG,,, 3894,"---------- OVERALL JUDGMENT The paper presents a clever use of VAEs for generating entity pairs conditioning on relations.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3895,"My main concern about the paper is that it seems that the authors have tuned the hyperparameters and tested on the same validation set.[paper-NEU], [EMP-NEG]",paper,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3896,"If this is the case, all the analysis and results obtained are almost meaningless.[analysis-NEG, results-NEG], [EMP-NEG]",analysis,results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 3897,"I suggest the authors make clear if they used the split training, validation, test.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3898,"Until then it is not possible to draw any conclusion from this work.[conclusion-NEG, work-NEU], [EMP-NEG]",conclusion,work,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 3899,"Assuming the experimental setting is correct, it is not clear to me the reason of having the representation of r (one-hot-vector of the relation) also in the decoding/generation part.[experimental setting-NEU], [EMP-NEG]",experimental setting,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 3900,"The hidden representation obtained by the encoder should already capture information about the relation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3901,"Is there a specific reason for doing so? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3904,"The basic observation (for SGD) is that if theta_{t+1} theta_t - alpha abla f(theta_t), then partial/partialalpha f(theta_{t+1}) -< abla f(theta_t), abla f(theta_{t+1})>, i.e. that the negative inner product of two successive stochastic gradients is equal in expectation to the derivative of the tth update w.r.t. the learning rate alpha.[observation-NEU], [EMP-NEU]",observation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3905,"I have seen this before for SGD (the authors do not claim that the basic idea is novel), but I believe that the application to other algorithms (the authors explicitly consider Nesterov momentum and ADAM) are novel, as is the use of the multiplicative and normalized update of equation 8 (particularly the normalization).[application-POS], [NOV-POS]",application,,,,,,NOV,,,,,POS,,,,,,POS,,,, 3906,"The experiments are well-presented, and appear to convincingly show a benefit.[experiments-POS], [PNF-POS, EMP-POS]",experiments,,,,,,PNF,EMP,,,,POS,,,,,,POS,POS,,, 3907,"Figure 3, which explores the robustness of the algorithms to the choice of alpha_0 and beta, is particularly nicely-done, and addresses the most natural criticism of this approach (that it replaces one hyperparameter with two).[Figure-POS, algorithms-NEU], [PNF-POS, EMP-POS]",Figure,algorithms,,,,,PNF,EMP,,,,POS,NEU,,,,,POS,POS,,, 3908,"The authors highlight theoretical convergence guarantees as an important future work item, and the lack of them here (aside from Theorem 5.1, which just shows asymptotic convergence if the learning rates become sufficiently small) is a weakness, but not, I think, a critical one.[future work-NEU], [IMP-POS, EMP-NEU]",future work,,,,,,IMP,EMP,,,,NEU,,,,,,POS,NEU,,, 3909,"This appears to be a promising approach, and bringing it back to the attention of the machine learning community is valuable.[approach-POS], [IMP-POS]",approach,,,,,,IMP,,,,,POS,,,,,,POS,,,, 3912,"The paper derives efficient approximations for the spectral norm, as well as an analysis of its gradient.[paper-POS, analysis-POS], [EMP-POS]",paper,analysis,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3913,"Experimental results on CIFAR-10 and STL-10 show improved Inception scores and FID scores using this method compared to other baselines and other weight normalization methods.[Experimental results-POS, method-POS, baselines-NEU], [CMP-POS, EMP-NEU]",Experimental results,method,baselines,,,,CMP,EMP,,,,POS,POS,NEU,,,,POS,NEU,,, 3914,"Overall, this is a well-written paper that tackles an important open problem in training GANs using a well-motivated and relatively simple approach.[paper-POS, problem-NEU, approach-POS], [CLA-POS, EMP-POS]",paper,problem,approach,,,,CLA,EMP,,,,POS,NEU,POS,,,,POS,POS,,, 3915,"The experimental results seem solid and seem to support the authors' claims.[experimental results-POS, claims-POS], [EMP-POS]",experimental results,claims,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 3916,"I agree with the anonymous reviewer that connections (and differences) to related work should be made clearer.[related work-NEU], [CMP-NEG]",related work,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 3917,"Like the anonymous commenter, I also initially thought that the proposed spectral normalization is basically the same as spectral norm regularization, but given the authors' feedback on this I think the differences should be made more explicit in the paper.[differences-NEG], [CMP-NEG]",differences,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3918,"Overall this seems to represent a strong step forward in improving the 
training of GANs, and I strongly recommend this paper for publication.[paper-POS], [REC-POS, IMP-POS]",paper,,,,,,REC,IMP,,,,POS,,,,,,POS,POS,,, 3919,"Small Nits: Section 4: In order to evaluate the efficacy of our experiment: I think you mean approach.[Section-NEU], [CLA-NEG]",Section,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 3920,"There are a few colloquial English usages which made me smile, e.g. * Sec 4.1.1. As we prophesied ..., and in the paragraph below * ... is a tad slower ....[Sec-NEG], [CLA-NEG]",Sec,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3924,". Pros: - The model achieves the state of the art in bAbI QA and dialog. I think this is a significant achievement given the simplicity of the model.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3925,"- The paper is clearly written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 3926,"Cons: - I am not sure what is novel in the proposed model.[proposed model-NEU], [NOV-NEU]",proposed model,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 3927,"While the authors use notations used in Relation Network (e.g. 'g'), I don't see any relevance to Relation Network.[notations-NEG], [PNF-NEG]",notations,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 3928,"Rather, this exactly resembles End-to-end memory network (MemN2N) and GMemN2N.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3929,"Please tell me if I am missing something, but I am not sure of the contribution of the paper.[contribution-NEG], [IMP-NEG]",contribution,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 3930,"Of course, I notice that there are small architectural differences, but if these are responsible for the improvements, I believe the authors should have conducted ablation study or qualitative analysis that show that the small tweaks are meaningful.[qualitative analysis-NEU], [EMP-NEU, SUB-NEU]",qualitative analysis,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 3931,"Question: - What is the exact contribution of the paper with respect to MemN2N and GMemN2N?[contribution-NEU], [NOV-NEU, CMP-NEU]",contribution,,,,,,NOV,CMP,,,,NEU,,,,,,NEU,NEU,,, 3936,"Planning lane-change maneuvers is an interesting, important problem for self-driving vehicles.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3937,"What makes this problem particularly challenging is the need to predict/respond to the actions of other drivers.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3938,"However, these issues are ignored here, and it is is unclear why existing optimization/planning approaches are poorly suited to this problem, which is a fundamental assumption being made here. 
[issues-NEG, assumption-NEG], [SUB-NEG, CMP-NEG]",issues,assumption,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3940,"However, the related work discussion is significantly lacking.[related work discussion-NEG], [SUB-NEG]",related work discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3941,"The paper does an insufficient job describing why deep RL is the right way to formulate this problem.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3942,"There are vague references to the policy being difficult to define, but that motivates the importance of learning in general, not deep RL.[references-NEG], [CMP-NEG]",references,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 3944,"(ii) the large amount of data required to learn the policy; [data-POS], [EMP-NEG]",data,,,,,,EMP,,,,,POS,,,,,,NEG,,,, 3946,"One can see the merits in employing a hierarchical action space, whereby decision making operates over high-level actions, each associated with low-level controllers, but that the adopted formulation is not fundamental to this abstraction.[hierarchical action space-NEU], [EMP-NEU]",hierarchical action space,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3947,"Indeed, this largely regulates the hard problems (i.e., controlling the low-level actions of the vehicle while avoiding collisions) to a separate controller.[problems-NEU], [EMP-NEU]",problems,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3948,"Further, Q-masking largely amounts to simply removing actions that are infeasible (e.g., changing lanes to the left when in the left-most lane), but is seems to be no more than a heuristic, the advantages of which are not evaluated.[advantages-NEG], [EMP-NEU]",advantages,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 3949,"The method is evaluated in simulation with comparisons to a simple baseline that tries to get over to the right lane as well as human performance.[method-NEU, baseline-NEU], [EMP-NEU]",method,baseline,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3950,"In the runs that reach the goal, the proposed method is about 20% faster than the simple baseline, though it does not reach the goal every time.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3951,"Given the claim that not reaching the goal is considered a failure, it isn't clear which performance is preferred.[claim-NEU, performance-NEG], [EMP-NEG]",claim,performance,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 3952,"Meanwhile, the evaluation could be improved with the use of a better baseline (e.g., using an existing planning framework such as a predictive RRT that plans to the goal).[evaluation-NEU, baseline-NEU], [CMP-NEG]",evaluation,baseline,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 3953,"Additional comments/questions: * The description of the Q-learning implementation is unclear.[description-NEG], [EMP-NEG]",description,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3954,"How is the terminal time known a priori?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3955,"Why are two buffers necessary?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3956,"* The paper claims that the method permits training without any collisions, even for real training runs (strong claim), however it isn't clear how this is guaranteed beyond the assumption that you have a low-level controller that can ensure collisions are avoided.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3957,"This is secondary to the proposed framework.[proposed framework-NEU], [EMP-NEU]",proposed framework,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3959,"The authors should validate these claims with an ablation study that compares performance with and without 
masking.[ablation study-NEG], [SUB-NEG]",ablation study,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 3962,"How sensitive is the network to errors in this model?[errors-NEU, model-NEU], [EMP-NEU]",errors,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 3963,"Does the occupancy grid account for sensing limitations (e.g., occlusions)? [limitations-NEU], [EMP-NEU]",limitations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3969,"The paper demonstrates improvements in a number of public datasets.[paper-POS, improvements-POS, public datasets-NEU], [EMP-POS]",paper,improvements,public datasets,,,,EMP,,,,,POS,POS,NEU,,,,POS,,,, 3970,"Careful reporting of the tuning and hyperparameter choices renders these experiments repeatable, and hence a suitable improvement in the field.[experiments-POS], [EMP-POS, IMP-POS]",experiments,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 3971,"Well-designed ablation studies demonstrate the importance of the architectural choices made, which are generally well-motivated in intuitions about the nature of anomaly detection.[ablation studies-POS], [EMP-POS]",ablation studies,,,,,,EMP,,,,,POS,,,,,,POS,,,, 3972,"Criticisms Based on the performance of GMM-EN, the reconstruction error features are crucial to the success of this method.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 3973,"Little to no detail about these features is included.[detail-NEG, features-NEU], [SUB-NEG]",detail,features,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 3974,"Intuitively, the estimation network is given the latent code conditioned and some (probably highly redundant) information about the residual structure remaining to be modeled.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3975,"Since this is so important to the results, more analysis would be helpful.[results-NEU, analysis-NEU], [SUB-NEU]",results,analysis,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 3976,"Why did the choices that were made in the paper yield this success?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3977,"How do you recommend other researchers or practitioners selected from the large possible space of reconstruction features to get the best results?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3978,"Quality This paper does not set out to produce a novel network architecture.[paper-NEG, network architecture-NEG], [NOV-NEG]",paper,network architecture,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 3979,"Perhaps the biggest innovation is the use of reconstruction error features as input to a subnetwork that predicts the E-step output in EM for a GMM.[innovation-POS], [EMP-NEU, NOV-POS]",innovation,,,,,,EMP,NOV,,,,POS,,,,,,NEU,POS,,, 3980,"This is interesting and novel enough in my opinion to warrant publication at ICLR, along with the strong performance and careful reporting of experimental design. [performance-POS, experimental design-POS], [NOV-POS, EMP-POS, APR-POS]",performance,experimental design,,,,,NOV,EMP,APR,,,POS,POS,,,,,POS,POS,POS,, 3985,"The proposed assumptions are not well motivated and seem arbitrary.[assumptions-NEG], [EMP-NEG]",assumptions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3986,"Why is using a permutation of each pixels' color a good idea?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 3987,"The paper is very hard to read.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 3988,"The message is unclear and the experiments to prove it are of very limited scope,;[experiments-NEG], [EMP-NEG, IMP-NEG]",experiments,,,,,,EMP,IMP,,,,NEG,,,,,,NEG,NEG,,, 3989,"i.e. 
one small dataset with the only experiment purportedly showing generalization to red cars.[dataset-NEG, experiment-NEG], [SUB-NEG, EMP-NEG]",dataset,experiment,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3990,"Some examples of specific issues: - the abstract is almost incomprehensible and it is not clear what the contributions are[abstract-NEG, contributions-NEG], [CLA-NEG, IMP-NEG]",abstract,contributions,,,,,CLA,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 3991,"- Some references to Figures are missing the figure number, eg. 3.2 first paragraph,[references-NEG, Figures-NEG], [PNF-NEG]",references,Figures,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 3992,"- It is not clear how many input channels the color invariant functions use, eg. p1 does it use only one channel and hence has fewer parameters?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3993,"- are the training and testing sets all disjoint (sec 4.3)? - at random points figures are put in the appendix, even though they are described in the paper and seem to show key results (eg tested on nored-test)[figures-NEG], [EMP-NEU, PNF-NEU]",figures,,,,,,EMP,PNF,,,,NEG,,,,,,NEU,NEU,,, 3994,"- Sec 4.6: The explanation for why the accuracy drops for all models is not clear.[accuracy-NEG], [EMP-NEG]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3996,"If that's the case the whole experimental setup seems flawed.[experimental setup-NEG], [EMP-NEG]",experimental setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 3997,"- Sec 4.6: the authors refer to the order net beating the baseline, however, from Fig 8 (right most) it appears as if all models beat the baseline.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 3998,"In the conclusion they say that weighted order net beats the baseline on all three test sets w/o red cars in the training set. [conclusion-NEU], [EMP-NEU]",conclusion,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4000,"The baseline seems to be best performing on all cars and on-red cars In order to be at an appropriate level for any publication the experiments need to be much more general in scope. [experiments-NEU], [EMP-NEU, REC-NEG]",experiments,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEG,,, 4004,"The results show that this PLAID algorithm outperforms a network trained on all tasks simultaneously.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4005,"Questions: - When distilling the policies, do you start from a randomly initialized policy, or do you start from the expert policy network?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4006,"- What data do you use for the distillation?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4007,"Section 4.1 statesWe use a method similar to the DAGGER algorithm, but what is your method.[Section-NEU, method-NEU], [EMP-NEU]",Section,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4008,"If you generate trajectories form the student network, and label them with the expert actions, does that mean all previous expert policies need to be kept in memory?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4009,"- I do not understand the purpose of input injection nor where it is used in the paper. 
[paper-NEU], [EMP-NEG]",paper,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 4010,"Strengths: - The method is simple but novel.[method-POS], [NOV-POS, EMP-POS]",method,,,,,,NOV,EMP,,,,POS,,,,,,POS,POS,,, 4011,"The results support the method's utility.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4012,"- The testbed is nice; the tasks seem significantly different from each other.[tasks-NEU], [EMP-POS]",tasks,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 4013,"It seems that no reward shaping is used.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4014,"- Figure 3 is helpful for understanding the advantage of PLAID vs MultiTasker.[Figure -POS], [EMP-POS, PNF-POS]",Figure,,,,,,EMP,PNF,,,,POS,,,,,,POS,POS,,, 4015,"Weaknesses: - Figure 2: the plots are too small.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4016,"- Distilling may hurt performance ( Figure 2.d)[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4017,"- The method lacks details (see Questions above)[method -NEU, details-NEG], [SUB-NEG]",method,details,,,,,SUB,,,,,NEU,NEG,,,,,NEG,,,, 4018,"- No comparisons with prior work are provided.[comparisons-NEG], [SUB-NEG, CMP-NEG]",comparisons,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 4019,"The paper cites many previous approaches to this but does not compare against any of them.[paper-NEU, approaches-NEU], [SUB-NEG, CMP-NEG]",paper,approaches,,,,,SUB,CMP,,,,NEU,NEU,,,,,NEG,NEG,,, 4020,"- A second testbed (such as navigation or manipulation) would bring the paper up a notch.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4021,"In conclusion, the paper's approach to multitask learning is a clever combination of prior work.[approach-POS], [EMP-POS, CMP-POS]",approach,,,,,,EMP,CMP,,,,POS,,,,,,POS,POS,,, 4022,"The method is clear[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4023,"but not precisely described.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4024,"The results are promising.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4025,"I think that this is a good approach to the problem that could be used in real-world scenarios.[approach-POS, problem-NEU], [EMP-POS]",approach,problem,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 4026,"With some filling out, this could be a great paper.[paper-NEU], [REC-POS]",paper,,,,,,REC,,,,,NEU,,,,,,POS,,,, 4033,"This article compares their proposed architecture with RNN (GRU with 10 hidden unit) in few toy tasks.[proposed architecture-NEU], [CMP-NEU]",proposed architecture,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4034,"They demonstrate that proposed model could work better and rational of write network could be observed.[proposed model-NEU], [EMP-NEU]",proposed model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4035,"However, it seems that hyper-parameters for RNN haven't been tuned enough.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4036,"It is because the toy task author demonstrates is actually quite similar to copy tasks, that previous state should be remembered.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4037,"To my knowledge, copy task could be solved easily for super long sequence through RNN model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4038,"Therefore, empirically, it is really hard to justify whether this proposed method could work better.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4039,"Also, intuitively, this episodic memory method should work better on long-term dependencies task, while this article only shows the task with 10 timesteps.[method-NEU, 
article-NEU], [EMP-NEU, SUB-NEU]",method,article,,,,,EMP,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 4040,"According to that, the experiments they demonstrated in this article are not well designed so that the conclusion they made in this article is not robust enough. [experiments-NEG, conclusion-NEG], [EMP-NEG]",experiments,conclusion,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 4044,"This approach is quite standard in learning theory, I am not aware of how original this point of view is within the deep learning community.[approach-NEU], [NOV-NEU, IMP-NEU]",approach,,,,,,NOV,IMP,,,,NEU,,,,,,NEU,NEU,,, 4045,"This is estimated by obtaining values of the norm of the gradient (also naturally linked to the Lipschitz properties of the function) by backpropagation. This is again a natural idea.[idea-NEU], [NOV-NEU]",idea,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4055,"The paper is overall well-written, and the proposed idea seems interesting.[paper-POS, proposed idea-POS], [CLA-POS, EMP-POS]",paper,proposed idea,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 4056,"However, there are rather little explanations provided to argue for the different modeling choices made, and the intuition behind them.[explanations-NEG], [SUB-NEG]",explanations,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4057,"From my understanding, the idea of subgoal learning boils down to a non-parametric (or kernel) regression where each state is mapped to a subgoal based on its closeness to different states in the expert's demonstration.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4058,"It is not clear how this method would generalize to new situations.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4059,"There is also the issue of keeping tracking of a large number of demonstration states in memory.[issue-NEG], [EMP-NEG]",issue,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4060,"This technique reminds me of some common methods in learning from demonstrations, such as those using GPs or GMMs, but the novelty of this technique is the fact that the subgoal mapping function is learned in an IRL fashion, by tacking into account the sum of surrogate rewards in the expert's demonstration.[novelty-POS], [NOV-POS]",novelty,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4061,"The architecture of the action value estimator does not seem novel, it's basically just an extension of DQN with an extra parameter (subgoal g).[architecture-NEG], [NOV-NEG]",architecture,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4062,"The empirical evaluation seems rather mixed.[empirical evaluation-NEG], [EMP-NEG]",empirical evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4063,"Figure 3 shows that the proposed method learns faster than DQN,[Figure-NEU, proposed method-POS], [CMP-POS]",Figure,proposed method,,,,,CMP,,,,,NEU,POS,,,,,POS,,,, 4064,"but Table I shows that the improvement is not statistically significant, except in two games, DefendCenter and PredictPosition.[Table-NEU, improvement-NEG], [EMP-NEG]",Table,improvement,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 4065,"Are these the results after all agents had converged?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4067,"but focusing on only a single game (Doom) is a weakness that needs to be addressed because one cannot tell if the choices were tailored to make the method work well for this game.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4068,"Since the paper does not provide significant theoretical or algorithmic contribution, at least more realistic and diverse experiments should be performed. 
[contribution-NEG, experiments-NEU], [SUB-NEG, EMP-NEG]",contribution,experiments,,,,,SUB,EMP,,,,NEG,NEU,,,,,NEG,NEG,,, 4073,"Importantly, the model achieves state-of-the-art performance of the SQuAD dataset.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4074,"The paper is very well-written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4075,"I found the architecture very intuitively laid out, even though this is not my area of expertise.[architecture-POS], [PNF-POS, EMP-POS]",architecture,,,,,,PNF,EMP,,,,POS,,,,,,POS,POS,,, 4076,"Moreover, I found the figures very helpful -- the authors clearly took a lot of time into clearly depicting their work![figures-POS, work-NEU], [PNF-POS]",figures,work,,,,,PNF,,,,,POS,NEU,,,,,POS,,,, 4077,"What most impressed me, however, was the literature review.[literature review-POS], [CMP-POS]",literature review,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4079,"Nevertheless, I am not used to seeing comparison to as many recent systems as are presented in Table 2.[comparison-POS, Table-POS], [CMP-NEU]",comparison,Table,,,,,CMP,,,,,POS,POS,,,,,NEU,,,, 4080,"All in all, it is difficult not to highly recommend an architecture that achieves state-of-the-art results on such a popular dataset.[architecture-POS, results-POS, dataset-NEU], [REC-POS]",architecture,results,dataset,,,,REC,,,,,POS,POS,NEU,,,,POS,,,, 4087,"Their experiments show that this improve technique can produce complete training sets for three programs.[experiments-NEU, technique-POS], [EMP-POS]",experiments,technique,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 4088,"It is nice to see the application of ideas from different areas for learning-related questions.[ideas-POS], [EMP-POS]",ideas,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4089,"However, there is one thing that bothers me again and again. Why do we need a data-generation technique in the paper at all?[technique-NEU], [EMP-NEU]",technique,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4090,"Typically, we are given a set of data, not an oracle that can generate such data, and our task is to learn something from the data.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4091,"If we have an executable oracle, it is not clear to me why we want to replicate this oracle by an instance of the neural programmer-interpreter.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4092,"One thing that I can see is that the technique in the paper can be used when we do research on the neural programmer-interpreter.[technique-NEU], [EMP-NEU]",technique,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4094,"The authors' technique may let us do this data-generation easily.[technique-POS], [EMP-POS]",technique,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4095,"But this benefit to the researchers does not seem to be strong enough for the acceptance at ICLR'18. [acceptance-NEG], [APR-NEG, REC-NEG]",acceptance,,,,,,APR,REC,,,,NEG,,,,,,NEG,NEG,,, 4098,"Authors experiment with the proposed architecture on a set of synthetic toy tasks and a few Starcraft combat levels, where they find their approach to perform better than baselines.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4099,"Overall, I had a very confusing feeling when reading the paper.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4100,"First, authors do not formulate what exactly is the problem statement for MARL.[problem statement-NEG], [EMP-NEG]",problem statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4101,"Is it an MDP or poMDP? 
[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4102,"How do different agents perceive their time, is it synchronized or not?[agents-NEU], [EMP-NEG]",agents,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 4103,"Do they (partially) share the incentive or may have completely arbitrary rewards?[rewards-NEU], [EMP-NEU]",rewards,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4104,"What is exactly the communication protocol?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4105,"I find this question especially important for MARL, because the assumption on synchronous and noise-free communication, including gradients is too strong to be useful in many practical tasks.[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4106,"Second, even though the proposed architecture proved to perform empirically better that the considered baselines, the extent to which it advances RL research is unclear to me.[proposed architecture-NEG], [EMP-NEG]",proposed architecture,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4107,"Currently, it looks Based on that, I can't recommend acceptance of the paper.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 4108,"To make the paper stronger and justify importance of the proposed architecture, I suggest authors to consider relaxing assumptions on the communication protocol to allow delayed and/or noisy communication (including gradients).[paper-NEU, proposed architecture-NEU, assumptions-NEU], [EMP-NEU]",paper,proposed architecture,assumptions,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 4109,"It would be also interesting to see if the network somehow learns an implicit global state representation used for planning and how is the developed plan changed when new information from one of the slave agents arrives.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4113,"Although, the observations are interesting, especially the one on MNIST where the network performs well even with correct labels slightly above chance, the overall contributions are incremental.[observations-POS, contributions-NEG], [EMP-POS]",observations,contributions,,,,,EMP,,,,,POS,NEG,,,,,POS,,,, 4115,"Agreed that the authors do a more detailed study on simple MNIST classification, but these insights are not transferable to more challenging domains.[study-POS, insights-NEG], [EMP-NEG]",study,insights,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 4116,"The main limitation of the paper is proposing a principled way to mitigate noise as done in Sukhbataar et.al. 
(2014), or an actionable trade-off between data acquisition and training schedules.[limitation-NEG], [CMP-NEG, EMP-NEG]",limitation,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 4117,"The authors contend that the way they deal with noise (keeping number of training samples constant) is different from previous setting which use label flips.[setting-NEU], [EMP-NEU]",setting,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4118,"However, the previous settings can be reinterpreted in the authors setting.[settings-NEU], [EMP-NEU]",settings,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4119,"I found the formulation of the alpha to be non-intuitive and confusing at times.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4122,"This can be improved to help readers understand better.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4123,"There are several unanswered questions as to how this observation transfers to a semi-supervised or unsupervised setting, and also devise architectures depending on the level of expected noise in the labels.[questions-NEG, architectures-NEU], [SUB-NEG]",questions,architectures,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 4124,"Overall, I feel the paper is not up to mark and suggest the authors devote using these insights in a more actionable setting.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 4125,"Missing citation: Training Deep Neural Networks on Noisy Labels with Bootstrapping, Reed et al. [citation-NEG], [SUB-NEG]",citation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4127,"This is a well-written paper with good comparisons to a number of earlier approaches.[paper-POS, earlier approaches-POS], [CLA-POS, CMP-POS]",paper,earlier approaches,,,,,CLA,CMP,,,,POS,POS,,,,,POS,POS,,, 4128,"It focuses on an approach to get similar accuracy at lower precision, in addition to cutting down the compute costs.[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4129,"Results with 2-bit activations and 4-bit weights seem to match baseline accuracy across the models listed in the paper.[Results-NEU, baseline results-NEU], [EMP-NEU]",Results,baseline results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4130,"Originality This seems to be first paper that consistently matches baseline results below int-8 accuracy, and shows a promising future direction.[baseline results-POS], [NOV-POS, IMP-POS]",baseline results,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 4131,"Significance Going down to below 8-bits and potentially all the way down to binary (1-bit weights and activations) is a promising direction for future hardware design.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 4132,"It has the potential to give good results at lower compute and more significantly in providing a lower power option, which is the biggest constraint for higher compute today.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4133,"Pros: - Positive results with low precision (4-bit, 2-bit and even 1-bit)[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4134,"- Moving the state of the art in low precision forward[null], [CMP-POS]",null,,,,,,CMP,,,,,,,,,,,POS,,,, 4135,"- Strong potential impact, especially on constrained power environments (but not limited to them)[impact-POS], [IMP-POS]",impact,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4136,"- Uses same hyperparameters as original training, making the process of using this much simpler.[process-POS], [NOV-POS, EMP-POS]",process,,,,,,NOV,EMP,,,,POS,,,,,,POS,POS,,, 4137,"Cons/Questions - They mention not quantizing the first and last layer of every network.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 
4138,"How much does that impact the overall compute?[outcome-NEU], [EMP-NEU]",outcome,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4139,"- Is there a certain width where 1-bit activation and weights would match the accuracy of the baseline model?[accuracy-NEU, baseline model-NEU], [CMP-NEU]",accuracy,baseline model,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 4140,"This could be interesting for low power case, even if the effective compute is larger than the baseline.[baseline-POS], [EMP-POS]]",baseline,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4142,"2. Lacks in sufficient machine learning related novelty required to be relevant in the main conference[novelty-NEU], [APR-NEU, NOV-NEU]",novelty,,,,,,APR,NOV,,,,NEU,,,,,,NEU,NEU,,, 4143,"3. Design, solving inverse problem using Deep Learning are not quite novel, see Stoecklein et al. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data. Scientific Reports 7, Article number: 46368 (2017).[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 4144,"4. However, this paper introduces two different types of networks for parametrization and physical behavior mapping, which is interesting, can be very useful as surrogate models for CFD simulations.[paper-POS], [IMP-POS, EMP-POS]",paper,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 4145,"5. It will be interesting to see the impacts of physics based knowledge on choice of network architecture, hyper-parameters and other training considerations.[impacts-NEU], [IMP-NEU]",impacts,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 4146,"6. Just claiming the generalization capability of deep networks is not enough, need to show how much the model can interpolate or extrapolate?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4147,"what are the effects of regulariazations in this regard? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4154,"Positives: - The three properties of visual concepts described in the paper are interesting.[concepts-POS], [EMP-POS]",concepts,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4155,"Negatives: - The novelty of the paper is limited.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4156,"The idea of visual concept has been proposed in Wang et al. 
2015.[idea-NEG], [NOV-NEG]",idea,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4157,"Using a embedding representation based on visual concepts is straightforward.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4158,"The two baseline methods for few-shot learning provide limited insights in solving the few-shot learning problem.[baseline methods-NEG, problem-NEG], [SUB-NEG]",baseline methods,problem,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 4159,"- The paper uses a hard thresholding in the visual concept embedding.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4160,"It would be interesting to see the performance of other strategies in computing the embedding, such as directly using the distances without thresholding.[strategies-NEG], [SUB-NEG]]",strategies,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4164,"The novelty of the attack is a bit dim, since it seems it's just the straightforward attack against the region cls defense.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4165,"The authors fail to include the most standard baseline attack, namely FSGM.[baseline-NEG], [CMP-NEG]",baseline,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4166,"The authors also miss the most standard defense, training with adversarial examples.[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 4167,"As well, the considered attacks are in L2 norm, and the distortion is measured in L2, while the defenses measure distortion in L_infty (see detailed comments for the significance of this if considering white-box defenses).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4168,"The provided analysis is insightful,[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4169,"though the authors mostly fail to explain how this analysis could provide further work with means to create new defenses or attacks.[analysis-NEG], [IMP-NEG]",analysis,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 4170,"If the authors add FSGM to the batch of experiments (especially section 4.1) and address some of the objections I will consider updating my score.[experiments-NEU], [EMP-NEU, REC-NEU]",experiments,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEU,,, 4172,"Detailed comments: - I think the novelty of the attack is not very strong.[novelty-NEU], [NOV-NEU]",novelty,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4174,"Designing an attack for a specific defense is very well established in the literature, and the fact that the attack fools this specific defense is not surprising.[literature-NEU], [EMP-NEU]",literature,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4175,"- I think the authors should make a claim on whether their proposed attack works only for defenses that are agnostic to the attack (such as PGD or region based), or for defenses that know this is a likely attack (see the following comment as well).[claim-NEU], [EMP-NEU]",claim,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4176,"If the authors want to make the second claim, training the network with adversarial examples coming from OptMargin is missing.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 4177,"- The attacks are all based in L2, in the sense that the look for they measure perturbation in an L2 sense (as the paper evaluation does), while the defenses are all L_infty based (since the region classifier method samples from a hypercube, and PGD uses an L_infty perturbation limit).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4178,"This is very problematic if the authors want to make claims about their attack being effective under defenses that know OptMargin is a possible attack.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4179,"- The simplest 
most standard baseline of all (FSGM) is missing.[baseline-NEG], [CMP-NEG]",baseline,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4180,"This is important to compare properly with previous work.[previous work-NEG], [CMP-NEG]",previous work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4181,"- The fact that the attack OptMargin is based in L2 perturbations makes it very susceptible to a defense that backprops through the attack.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4182,"This and / or the defense of training to adversarial examples is an important experiment to assessing the limitations of the attack.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4183,"- I think the authors rush to conclude that a small ball around a given input distance can be misleading.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4184,"Wether balls are in L2 or L_infty, or another norm makes a big difference in defense and attacks, given that they are only equivalent to a multiplicative factor of sqrt(d) where d is the dimension of the space, and we are dealing with very high dimensional problems.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4185,"I find the analysis made by the authors to be very simplistic.[analysis-NEG], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4186,"- The analysis of section 4.1 is interesting, it was insightful and to the best of my knowledge novel.[analysis-POS, section-NEU], [EMP-POS]",analysis,section,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 4187,"Again I would ask the authors to make these plots for FSGM.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4188,"Since FSGM is known to be robust to small random perturbations, I would be surprised that for a majority of random directions, the adversarial examples are brought back to the original class.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4189,"- I think a bit more analysis is needed in section 4.2.[analysis-NEU, section-NEU], [SUB-NEU]",analysis,section,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 4190,"Do the authors think that this distinguishability can lead to a defense that uses these statistics?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4192,"- I think the analysis of section 5 is fairly trivial. [analysis-NEG, section-NEU], [EMP-NEG]",analysis,section,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 4195,"Minor comments: - The justification of why OptStrong is missing from Table2 (last three sentences of 3.3) should be summarized in the caption of table 2 (even just pointing to the text), otherwise a first reader will mistake this for the omission of a baseline.[justification-NEU, table-NEU], [PNF-NEG]",justification,table,,,,,PNF,,,,,NEU,NEU,,,,,NEG,,,, 4196,"- I think it's important to state in table 1 what is the amount of distortion noticeable by a human.[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4197,"After the rebuttal I've updated my score, due to the addition of FSGM added as a baseline and a few clarifications.[score-POS], [REC-POS]",score,,,,,,REC,,,,,POS,,,,,,POS,,,, 4199,"I still think the novelty, significance of the claims and protocol are still perhaps borderline for publication (though I'm leaning towards acceptance),[novelty-NEU, significance-NEU], [REC-NEU, IMP-NEU, NOV-NEU]",novelty,significance,,,,,REC,IMP,NOV,,,NEU,NEU,,,,,NEU,NEU,NEU,, 4205,"It seems like the method could be more informative than the other methods.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4206,"However, there are quite a number of problems, as explained below. 
* The explanation of eqs 1 and 2 is quite poor.[explanation-NEG, eqs-NEG], [CLA-NEG]",explanation,eqs,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 4208,"Could we not also apply this to negative examples?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4209,"or in the case of heart failure, predicted BNP level -- this doesn't make sense to me -- surely it would be necessary to target an adjusted BNP level?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4210,"Also specific details should be reserved until a general explanation of the problem has been made.[details-NEU, problem-NEU], [EMP-NEU]",details,problem,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4211,"* The trade-off parameter gamma is a fiddle factor -- how was this set for the lung image and MNIST examples?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4212,"Were these values different?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4215,"* The example of 4/9 misclassification seems very specific. Does this method also work on say 2s and 3s?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4216,"Why have you not reported results for these kinds of tasks?[results-NEU, tasks-NEU], [SUB-NEG]",results,tasks,,,,,SUB,,,,,NEU,NEU,,,,,NEG,,,, 4217,"* Fig 2: better to show each original and reconstructed image close by (e.g. above below or side-by-side).[Fig-NEU], [PNF-NEU]",Fig,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4218,"The reconstructions show poor detail relative to the originals. [detail-NEG], [SUB-NEG]",detail,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4219,"This loss of detail could be a limitation.[limitation-NEG], [EMP-NEG]",limitation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4220,"* A serious problem with the method is that we are asked to evaluate it in terms of images like Fig 4 or Fig 8.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4221,"A serious study would involve domain experts and ascertain if Fig 4 conforms with what they are looking for.[study-NEU], [EMP-NEU]",study,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4222,"* The references section is highly inadequate -- no venues of publication are given.[references-NEG], [SUB-NEG, CMP-NEG]",references,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 4225,"* Overall: the paper contains an interesting idea, but given the deficiencies raised above I judge that it falls below the ICLR threshold.[paper-NEG], [APR-NEG]",paper,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 4226,"* Text: sec 2 para 4. reconstruction loss on the validation set was similar to the reconstruction loss on the validation set. ??[sec-NEU], [EMP-NEU]",sec,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4227,"* p 3 bottom -- give size of dataset * p 5 AUC curve -> ROC curve * p 6 Fig 4 use text over each image to better specify the details given in the caption. 
[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4231,"The proposed method is novel, but perhaps the most interesting aspect of this paper is that they demonstrate that ""DQNs are susceptible to periodically repeating mistakes"".[proposed method-POS], [NOV-POS]",proposed method,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4232,"I believe this observation, though not entirely novel, will inspire many researchers to study catastrophic forgetting and propose improved strategies for handling these issues.[observation-POS, strategies-NEU], [IMP-POS]",observation,strategies,,,,,IMP,,,,,POS,NEU,,,,,POS,,,, 4233,"The paper is accurate, very well written (apart from a small number of grammatical mistakes) and contains appealing motivations to its key contributions.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4234,"In particular, I find the basic of idea of introducing a component that represents fear natural, promising and novel[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4235,"Still, many of the design choices appear quite arbitrary and can most likely be improved upon.[design choices-NEG], [PNF-NEG]",design choices,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4236,"In fact, it is not difficult to design examples for which the proposed algorithm would be far from optimal.[examples-NEU], [EMP-NEU]",examples,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4237,"Instead I view the proposed techniques mostly as useful inspiration for future papers to build on. [proposed techniques-POS], [IMP-POS]",proposed techniques,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4238,"As a source of inspiration, I believe that this paper will be of considerable importance and I think many people in our community will read it with great interest.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4239,"The theoretical results regarding the properties of the proposed algorithm are also relevant, and points out some of its benefits,[theoretical results-POS, proposed algorithm-POS], [EMP-POS]",theoretical results,proposed algorithm,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 4240,"though I do not view the results as particularly strong. [results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4241,"To conclude, the submitted manuscript contains novel observations and results and is likely to draw additional attention to an important aspect of deep reinforcement learning.[manuscript-POS, observations-POS, results-POS], [NOV-POS, IMP-POS]",manuscript,observations,results,,,,NOV,IMP,,,,POS,POS,POS,,,,POS,POS,,, 4242,"A potential weakness with the paper is that the proposed strategies appear to be simple to improve upon and that they have not convinced me that they would yield good performance on a wider set of problems. [proposed strategies-NEG, performance-NEU], [EMP-NEG]",proposed strategies,performance,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 4246,"To be honest, I didn't really get this paper. 
*[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4247,"As far I understand, all of the original work policy gradients involved stochastic policies.[original work-NEU], [CMP-NEU]",original work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4250,"As far as I can tell, this is equivalent to a slightly different formulation, where the agent emits a deterministic action (mu,Sigma) and the environment samples an action from that distribution.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4251,"In other words, it seems that if we just draw the box a bit differently, the environment soaks up the nondeterminism, instead of needing to define a new type of Q-value.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4252,"Ultimately, I couldn't discern /why/ this was a significant advance for RL, or even a meaningful new perspective on classic ideas.[significant advance-NEU], [EMP-NEU]",significant advance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4253,"I thought the little 2-mode MOG was a nice example of the premise of the model.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4254,"While I may or may not have understood the core technical contribution, I think the experiments can be critiqued: they didn't really seem to work out.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4255,"Figures 2&3 are unconvincing - the differences do not appear to be statistically significant.[Figures-NEG], [PNF-NEG]",Figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4256,"Also, I was disappointed to see that the authors only compared to DDPG; they could have at least compared to TRPO, which they mention.[null], [CMP-NEG, SUB-NEG]",null,,,,,,CMP,SUB,,,,,,,,,,NEG,NEG,,, 4257,"They dismiss it by saying that it takes 10 times as long, but gets a better answer - to which I respond, Very well, run your algorithm 10x longer and see where you end up! [algorithm-NEG], [SUB-NEG, EMP-NEG, CMP-NEG]",algorithm,,,,,,SUB,EMP,CMP,,,NEG,,,,,,NEG,NEG,NEG,, 4258,"I think we need to see a more compelling demonstration of why this is a useful idea before it's ready to be published.[demonstration-NEG, idea-NEG], [SUB-NEG, REC-NEG]",demonstration,idea,,,,,SUB,REC,,,,NEG,NEG,,,,,NEG,NEG,,, 4259,"The idea of penalizing a policy based on KL-divergence from a reference policy was explored at length by Bert Kappen's work on KL-MDPs. Perhaps you should cite that? 
[idea-NEG], [CMP-NEG, SUB-NEG]]",idea,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 4262,"My strongest criticism for this paper is against the claim that Tumblr posts represent self-reported emotions and that this method sheds new insight on emotion representation and my secondary criticism is a lack of novelty in the method, which seems to be simply a combination of previously published sentiment analysis module and previously published image analysis module, fused in an output layer.[paper-NEG, claim-NEG, method-NEG, novelty-NEG], [NOV-NEG, EMP-NEG]",paper,claim,method,novelty,,,NOV,EMP,,,,NEG,NEG,NEG,NEG,,,NEG,NEG,,, 4263,"The authors claim that the hashtags represent self-reported emotions, but this is not true in the way that psychologists query participants regarding emotion words in psychology studies.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4264,"Instead these are emotion words that a person chooses to broadcast along with an associated announcement.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4265,"As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4269,"It should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the ""X"" is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect, they are not collections of core emotion words, but rather words that are colloquially attached to either positive or negative sentiment.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4270,"For example PANAS-X includes words like ""strong"", ""active"", ""healthy"", ""sleepy"" which are not considered emotion words by psychology.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4271,"If the authors' stated goal is different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment they should be aware that this is exactly what PANAS is designed to do - not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4274,"These are short duration states lasting only seconds.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4275,"They are also fairly specific, for example ""surprise"" is a sudden reaction to something unexpected, which isn't exactly the same as seeing a flower on your car and expressing ""what a nice surprise.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4276,""" The surprise would be the initial reaction of ""what's that on my car?
Is it dangerous?"" but after identifying the object as non-threatening, the emotion of ""surprise"" would likely pass and be replaced with appreciation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4278,"From the cited paper by Posner et al : The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4279,"This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4280,"From my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology's view of emotion representation and this work would not likely contribute to a new understanding of the latent structure of peoples' emotions.[work-NEG], [IMP-NEG]",work,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 4281,"In the PCA result, it is not clear that the first axis represents valence, as sad has a slight positive on this scale and sad is one of the emotions most clearly associated with negative valence.[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4282,"With respect to the rest of the paper, the level of novelty and impact is ok, but not good enough.[novelty-NEU, impact-NEU], [NOV-NEU, IMP-NEU]",novelty,impact,,,,,NOV,IMP,,,,NEU,NEU,,,,,NEU,NEU,,, 4283,"This analysis does not seem very different from Twitter analysis, because although Tumblr posts are allowed to be longer than Twitter posts, the authors truncate the posts to 50 characters.[analysis-NEU], [NOV-NEG]",analysis,,,,,,NOV,,,,,NEU,,,,,,NEG,,,, 4284,"Additionally, the images do not seem to add very much to the classification.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4285,"The authors algorithm also seems to be essentially a combination of two other, previously published algorithms.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4286,"For me the novelty of this paper was in its application to the realm of emotion theory, but I do not feel there is a contribution here.[contribution-NEU], [NOV-NEG, IMP-NEU]",contribution,,,,,,NOV,IMP,,,,NEU,,,,,,NEG,NEU,,, 4289,"Update: On further consideration (and reading the other reviews), I'm bumping my rating up to a 7.[rating-POS], [REC-POS]",rating,,,,,,REC,,,,,POS,,,,,,POS,,,, 4290,"I think there are still some issues, but this work is both valuable and interesting, and it deserves to be published (alongside the Naesseth et al. and Maddison et al. work).[issues-NEG, work-POS], [REC-POS]",issues,work,,,,,REC,,,,,NEG,POS,,,,,POS,,,, 4293,"They therefore propose using a more-biased but lower-variance bound to train the inference parameters, and the more-accurate bound to train the generative model.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4296,"Some comments: * Section 4: I found this argument extremely interesting.[Section-NEU, argument-POS], [EMP-POS]",Section,argument,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 4297,"However, it's worth noting that your argument implies that you could get an O(1) SNR by averaging K noisy estimates of I_K.[argument-POS], [EMP-POS]",argument,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4298,"Rainforth et al. 
suggest this approach, as well as the approach of averaging K^2 noisy estimates, which the theory suggests may be more appropriate if the functions involved are sufficiently smooth, which even for ReLU networks that are non-differentiable at a finite number of points I think they should be.[approach-NEU], [CMP-NEU, EMP-NEU]",approach,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 4299,"This paper would be stronger if it compared with Rainforth et al.'s proposed approaches.[paper-NEU], [CMP-NEU, EMP-NEU]",paper,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 4300,"This would demonstrate the real tradeoffs between bias, variance, and computation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4301,"Of course, that involves O(K^2) or O(K^3) computation, which is a weakness.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4302,"But one could use a small value of K (say, K = 5).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4303,"That said, I could also imagine a scenario where there is no benefit to generating multiple noisy samples for a single example versus a single noisy sample for multiple examples.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4304,"Basically, these all seem like interesting and important empirical questions that would be nice to explore in a bit more detail.[empirical questions-POS], [SUB-NEU]",empirical questions,,,,,,SUB,,,,,POS,,,,,,NEU,,,, 4305,"* Section 3.3: Claim 1 is an interesting observation.[Section-NEU, Claim-POS], [EMP-POS]",Section,Claim,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 4306,"But Propositions 1 and 2 seem to just say that the only way to get a perfectly tight SMC ELBO is to perfectly sample from the joint posterior.[Propositions-NEU], [EMP-NEU]",Propositions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4309,"The only way to get an SMC estimator's variance to 0 is to drive the variance of the weights to 0.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4311,"All of which is true as far as it goes, but I think it's a bit of a distraction.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4312,"The question is not ""what's it take to get to 0 variance"" but ""how quickly can we approach 0 variance"".[question-NEU], [EMP-NEU]",question,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4313,"In principle IS and SMC can achieve arbitrarily high accuracy by making K astronomically large.[accuracy-NEU], [EMP-NEU]",accuracy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4315,"MCMC is probably a better choice if one wants extremely low bias.)[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4316,"* Section 3.2: The choice of how to get low-variance gradients through the ancestor-sampling choice seems like an important technical challenge in getting this approach to work, but there's only a very cursory discussion in the main text.[challenge-NEU, approach-NEU, discussion-NEG], [EMP-NEG, SUB-NEG]",challenge,approach,discussion,,,,EMP,SUB,,,,NEU,NEU,NEG,,,,NEG,NEG,,, 4317,"I would recommend at least summarizing the main findings of Appendix A in the main text.[main findings-NEU, Appendix-NEU, main text-NEU], [PNF-NEU, SUB-NEU]",main findings,Appendix,main text,,,,PNF,SUB,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 4318,"* A relevant missing citation: Turner and Sahani's ""Two problems with variational expectation maximisation for time-series models"" (http://www.gatsby.ucl.ac.uk/~maneesh/papers/turner-sahani-2010-ildn.pdf).[citation-NEG], [CMP-NEG]",citation,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4320,"* Figure 1: What is the x-axis here?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4321,"Presumably phi is not actually 1-dimensional?[null], 
[EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4322,"Typos etc.: * ""learn a particular series intermediate"" missing ""of"".[Typos-NEG], [CLA-NEG]",Typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4323,"* ""To do so, we generate on sequence y1:T"" s/on/a/, I think?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4324,"* Equation 3: Should there be a (1/K) in Z?[Equation-NEG], [CLA-NEG]",Equation,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4327,"The authors propose two new updates value propagation (VProp) and max propagation (MVProp), which are roughly speaking additive and multiplicative versions of the update used in the Bellman-Ford algorithm for shortest path.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4329,"I had some difficulty to understand the paper because of its presentation and writing (see below).[presentation-NEG, writing-NEG], [CLA-NEG, PNF-NEG]",presentation,writing,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 4331,"It seems this is not the case for VProp and MVProp, given the gradient updates provided in p.5.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4332,"As a consequence, those two methods need to take as input a new reward function for every new map.[methods-NEG], [EMP-NEG]",methods,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4334,"I think this could explain the better experimental results In the experimental part, the results for VIN are worse than those reported in Tamar et al.'s paper.[results-NEU], [CMP-NEU]",results,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4335,"Why did you use your own implementation of VIN and not Tamar et al.'s, which is publicly shared as far as I know?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4336,"I think the writing needs to be improved on the following points: - The abstract doesn't fit well the content of the paper.[writing-NEG, abstract-NEG], [CLA-NEG, PNF-NEG]",writing,abstract,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 4337,"For instance, its variants is confusing because there is only other variant to VProp.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4338,"Adversarial agents is also misleading because those agents act like automata.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4339,"- The authors should recall more thoroughly and precisely the work of Tamar et al., on which their work is based to make the paper more self-contained, e.g., (1) is hardly understandable.[work-NEG], [CMP-NEG]",work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4340,"- The writing should be careful, e.g., value iteration is presented as a learning algorithm (which in my opinion is not)[writing-NEU], [CLA-NEU]",writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4341,"pi^* is defined as a distribution over state-action space and then pi is defined as a function; [null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4342,"... - The mathematical writing should be more rigorous;[mathematical writing-NEU], [PNF-NEU]",mathematical writing,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4343,"e.g., p.2: T: s to a to s', pi : s to a A denotes a set and its cardinal In (1), shouldn't it be Phi(o)?[mathematical writing-NEU], [CLA-NEU]",mathematical writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4344,"all the new terms should be explained p. 3: definition of T and R shouldn't V_{ij}^k depend on Q_{aij}^k? T_{::aij} should be defined In the definition of h_{aij}, should Phi and b be indexed by a?[terms-NEU], [CLA-NEU]",terms,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4345,"- The typos and other issues should be fixed: p. 
3: K iteration with capable p.4: close 0 p.5: our our s^{t+1} should be defined like the other terms The state is represented by the coordinates of the agent and 2D environment observation should appear much earlier in the paper.[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4346,"pi_theta described in the previous sections, notation pi_theta appears the first time here... 3x3 -> 3 times 3 ofB V_{theta^t w^t}[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4347,"p.6: the the Fig.2's caption: What does both cases refer to? They are three models.[Fig-NEG], [CLA-NEG]",Fig,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4351,"It appears sometimes the entropy loss (which is not the main contribution of the paper) is essential to improve performance; this obscures the main contribution.[main contribution-NEU], [SUB-NEU]",main contribution,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4353,"I still can not see how previous work by Balestriero and Baraniuk 2017 motivates and backups the proposed method.[previous work-NEG, proposed method-NEG], [SUB-NEG, CMP-NEG]",previous work,proposed method,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 4354,"My rating of this paper would remain the same.[paper-NEU], [REC-NEU]",paper,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 4356,"Pros: The intuition is that the ReLU network output is locally linear for each input, and one can use the conjugate mapping (which is also linear) for reconstructing the inputs, as in PCA.[intuition-POS], [EMP-POS]",intuition,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4359,"This observation is neat in my opinion, and does suggest a different use of the Jacobian in deep learning.[observation-POS], [EMP-POS]",observation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4361,"Cons: The motivation (Section 2) needs to be improved.[motivation-NEG], [EMP-NEG]",motivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4362,"In particular, the introduction/review of the work of Balestriero and Baraniuk 2017 not very useful to the readers.[work-NEG], [CMP-NEG]",work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4363,"Notations in eqns (2) and (3) are not fully explained (e.g., boldface c).[Notations-NEG, eqns-NEG], [SUB-NEG]",Notations,eqns,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 4364,"Intuition and implications of Theorem 1 is not sufficiently discussed: what do you mean by optimal DNN, what is the criteria for optimality?[Theorem-NEG, criteria-NEU], [EMP-NEG]",Theorem,criteria,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 4365,"is there a generative assumption of the data underlying the theorem? and the assumption of all samples being norm 1 seems too strong and perhaps limits its application?[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4366,"As far as I see, section 2 is somewhat detached from the rest of the paper.[section 2-NEG], [SUB-NEG]",section 2,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4367,"The main contribution of this paper is supposed to be the reconstruction mapping (6) and its effect in semi-supervised learning.[main contribution-NEU], [EMP-NEU]",main contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4368,"The introduction of entropy regularization in sec 2.3 seems somewhat odd and obscures the contribution.[sec-NEG, contribution-NEG], [EMP-NEG]",sec,contribution,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 4369,"It also bears the questions that how important is the entropy regularization vs. 
the reconstruction loss.[questions-NEG], [EMP-NEG]",questions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4370,"In experiments, results with beta 1.0 need to be presented to assess the importance of network inversion and the reconstruction loss.[experiments-NEU, results-NEU], [EMP-NEU]",experiments,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4371,"Also, a comparison against typical auto-encoders (which uses another decoder networks, with weights possibly tied with the encoder networks) is missing.[comparison-NEG], [SUB-NEG]]",comparison,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4373,"While intriguing, a lot more work would be required to publish this at ICLR.[work-NEG], [REC-NEG, APR-NEG]",work,,,,,,REC,APR,,,,NEG,,,,,,NEG,NEG,,, 4374,"Namely, the following questions need to be answered: 1. Does using linked-word-pairs truly raise the state of the art?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4375,"Unlike what is stated in the abstract, the experimental results only compare RBMs with and without this feature.[experimental results-NEU], [CMP-NEU]",experimental results,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4376,"RBMs are not state-of-the-art in topic modeling, therefore it's difficult to assess whether this is helpful.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 4377,"2. If linked words does improve topic modeling, why does it do so?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4378,"There needs to be some sort of error analysis to show why this idea improves, rather than simply stating metrics.[error analysis-NEU], [EMP-NEU]",error analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4380,"Experiments need to be done to show that a full dependency parse is actually required, rather than simply looking for co-occuring words.[Experiments-NEU], [EMP-NEU]",Experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4381,"4. How is this work related to the extensive work in NLP in applying parsing to various tasks?[work-NEU], [CMP-NEU]",work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4384,"(creating a semantic vector space from a dependency parse) I suspect there are others[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 4386,"could be a good place to start.[Citations-NEU], [CMP-NEU]",Citations,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4387,"5. Can the selection of word pairs be done automatically, from data, rather than pre-computed with a known dependency parser?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4388,"After all, this is submitted to the International Conference on Learning Representations --- feature engineering papers can easily be published at EMNLP, ICML, etc. An excellent ICLR paper would show some way to either (a) use dependency parsing only at training time (to provide a hint), or (b) not require dependency parsing at all.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4389,"A few suggestions for experiments: A. I would recommend first doing comparisons between bag-of-words representation and the dependency-bigram representation, just using log(tf)-idf as a distance metric.[comparisons-NEU], [CMP-NEU]",comparisons,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4390,"By stripping away more advanced modeling, that could reveal whether the dependency bi-gram has utility[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4391,". B. The authors may wish to consider applying LSA to both bag of words and dependency-bigrams, using log(tf)-idf weighting for both[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4392,". From what I've seen, log(tf)-idf LSA seems to perform about as well as LDA[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4393,". 
Plain LSA takes into account correlations between words --- it would be interesting to see whether dependency-bigrams can improve on LSA at all[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4394,". C. Reiterating point (3) above, to really show whether the power of the dependency parse is being used, I would strongly suggest doing a null experiment with co-occuring nearby words.[experiment-NEU], [SUB-NEU]",experiment,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4399,"SIGNIFICANCE AND ORIGINALITY: The authors propose to accelerate the learning of complex tasks by exploiting traces of experts.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 4400,"Unlike the most common form of imitation learning or behavioral cloning, the authors formulate their solution in the case where the expert's state trajectory is observable, but the expert's actions are not.[solution-NEU], [NOV-NEU]",solution,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4402,"Within this specific setting the authors differentiate their approach from others by developing a solution that does NOT estimate an explicit dynamics model ( e.g., P( S' | S, A ) ).[approach-NEU], [EMP-NEU, CMP-NEU]",approach,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 4403,"The benefits of not estimating an explicit action model are not really demonstrated in a clear way.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4404,"The author's articulate a specific solution that provides heuristic guidance rewards that cause the learner to favor actions that achieve subgoals calculated from expert behavior and refactors the representation of the Q function so that it has a component that is a function of the subgoal extracted from the expert.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 4405,"These subgoals are linear functions of the expert's change in state (or change in state features).[solution-NEU], [EMP-NEU]",solution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4408,"As far as I am aware, this is a novel approach to the problem.[approach-POS, problem-NEU], [NOV-POS]",approach,problem,,,,,NOV,,,,,POS,NEU,,,,,POS,,,, 4409,"The authors claim that this factorization is important and useful but the paper doesn't really illustrate this well.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4413,"The proposed approach does as well or better than competing approaches.[proposed approach-POS], [CMP-POS]",proposed approach,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4414,"QUALITY Ablation studies show that the guidance rewards are important to achieving the improved performance of the proposed method which is important confirmation that the architecture is working in the intended way.[Ablation studies-NEU, performance-POS, architecture-NEU], [EMP-POS]",Ablation studies,performance,architecture,,,,EMP,,,,,NEU,POS,NEU,,,,POS,,,, 4415,"However, it would also be useful to do an ablation study of the ""factorization"" of action values. [ablation study-NEU], [SUB-NEU]",ablation study,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4416,"Is this important to achieving better results as well or is the guidance reward enough? 
[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4417,"This seems like a key claim to establish.[claim-NEU], [EMP-NEU]",claim,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4418,"CLARITY The details of the memory based kernel density estimation and neural gradient training seemed complicated by the way that the process was implemented.[details-NEG], [EMP-NEG]",details,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4419,"Is it possible to communicate the intuitions behind what is going on?[intuitions-NEU], [EMP-NEU]",intuitions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4420,"I was able to work out the intuitions behind the heuristic rewards, but I still don't clearly get what the Q-value factorization is providing:[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4436,"There is only one layer here so we don't have complex non-linear things going on?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4439,"Q( S,a ) g(S) Wa S + Ba So this allows the Q-function more flexibility to capture each subgoal in a different linear space?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4441,"It allows the subgoal to adjust the value of the underlying model?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4442,"Essentially the expert defines a new Q-value problem at every state for the learner?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4443,"In some sense are we are defining a model for the action taken by the expert?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4444,"ADDITIONAL THOUGHTS While the authors compare to an unassisted baseline, they don't compare to methods that use an action model which is not a fatal flaw but would have been nice.[baseline-NEU, methods-NEU], [CMP-NEU, EMP-NEG]",baseline,methods,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEG,,, 4445,"One can imagine there might be scenarios where the local guidance rewards of this form could be problematic, particularly in scenarios where the expert and learner are not identical and it is possible to return to previous states, such as the grid worlds the authors discuss:[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4446,"If the expert's first few transitions were easily approximable, the learner would get local rewards that cause it to mimic expert behavior.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4447,"However, if the next step in the expert's path was difficult to approximate, then the reward for imitating the expert would be lower.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4448,"Would the learner then just prefer to go back towards those states that it can approximate and endlessly loop?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4449,"In this case, perhaps expressing heuristic rewards as potentials as described in Ng's shaping paper might solve the problem.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4450,"PROS AND CONS Important problem generally.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4451,"Avoiding the estimation of a dynamics model was stated as a given, but perhaps more could be put into motivating this goal.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4452,"Hopefully it is possible to streamline the methodology section to communicate the intuitions more easily. 
[methodology section-NEU], [PNF-NEU]",methodology section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4454,"I find this paper not suitable for ICLR.[paper-NEG], [APR-NEG]",paper,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 4455,"All the results are more or less direct applications of existing optimization techniques, and not provide fundamental new understandings of the learning REPRESENTATION.[results-NEG], [EMP-NEG, NOV-NEG]",results,,,,,,EMP,NOV,,,,NEG,,,,,,NEG,NEG,,, 4461,"Comments: - Why use a model-free technique like Q-learning especially when one knows the model of the car in autonomous driving setting and can simply run model-predictive control (MPC) (convolve forward the model to get candidate trajectories of certain reasonable horizon, evaluate and pick the best trajectory, execute selected trajectory for a few time-steps and then rinse-and-repeat.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4462,"This is a very well-accepted method actually used in real-world autonomous cars.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4463,"See the Urmson et al. 2008 paper in the bibliography.) At the very least this technique should be a baseline.[technique-NEU, baseline-NEU], [CMP-NEU]",technique,baseline,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 4464,"This method is not learning-based, doesn't need training data in a simulator, generalizes to **any** exit and lane configuration and variants of this basic technique continue to be used on real-world autonomous cars.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4465,"- What kind of safety constraints cannot be expressed by masking actions?[safety constraints-NEU], [EMP-NEU]",safety constraints,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4466,"It seems that most safety constraints can be expressed via masking.[safety constraints-NEU], [EMP-NEU]",safety constraints,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4467,"But certain kinds of safety constraints like 'do not drive in the blindspot of other vehicles' sometimes require the ego car to speed up for a bit beyond the speed limit to pass the blindspot area and then slow down.[safety constraints-NEU], [EMP-NEU]",safety constraints,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4470,"Figure 1c is pretty unrealistic to obtain for a real vehicle, especially for the four cars near the top where the topmost vehicles would be occluded at least partially from the vantage point of the ego-car. 
[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4473,"I found the formal presentation of the model reasonably clear and the empirical evaluation reasonably compelling.[presentation-POS, empirical evaluation-POS], [PNF-POS, EMP-POS]",presentation,empirical evaluation,,,,,PNF,EMP,,,,POS,POS,,,,,POS,POS,,, 4474,"In my opinion the main weakness of the paper is the focus on the RACE dataset.[dataset-NEG, paper-NEG], [EMP-NEG]",dataset,paper,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 4475,"This dataset has not attracted much attention and most work in reading comprehension has now moved to the SQUAD dataset for which there is an active leader board.[dataset-NEG], [IMP-NEG]",dataset,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 4476,"I realize that SQUAD is not explicitly multiple choice and that this is a challenge for an answer elimination architecture.[challenge-NEG], [EMP-NEG]",challenge,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4477,"However, it seems that answer elimination might be applied to each choice of the initial position of a possible answer span.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4478,"In any case, competing with an active leader board would be much more compelling.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 4482,". Clarity: The paper is well-written and clear[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4483,". The authors could be more concise when reporting results[results-NEU], [CLA-NEU]",results,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4484,". I would suggest keeping the main results in the main body and moving extended results to an appendix.[results-NEU, appendix-NEU], [PNF-NEU]",results,appendix,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 4486,". More specifically, they provide empirical evidence that they can fit problems where Jensen-Shannon divergence fails.[null], [EMP-POS, CMP-POS]",null,,,,,,EMP,CMP,,,,,,,,,,POS,POS,,, 4488,"Significance: The problems the authors consider are worth exploring further[problems-POS], [IMP-POS]",problems,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4489,". The authors describe their findings in the appropriate level of detail and demonstrate their findings experimentally[finding-POS], [EMP-POS]",finding,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4490,". However, publishing this work is in my opinion premature for the following reasons: - The authors do not provide further evidence of why non-saturating GANs perform better or under which mathematical conditions (non-saturating) GANs will be able to handle cases where distribution manifolds do not overlap[work-NEG], [REC-NEG, EMP-NEG]",work,,,,,,REC,EMP,,,,NEG,,,,,,NEG,NEG,,, 4491,"; - The authors show empirically the positive effect of penalized gradients, but do not provide an explanation grounded in theory[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4492,"; - The authors do not provide practical recommendations on how to set up GANs and note that these findings did not lead to a bullet-proof recipe to train them. 
[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4497,"2- I think the authors should provide a more detailed and formal description of the OPTMARGIN method.[description-NEU], [SUB-NEU]",description,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4498,"In section 3.2 they explain that Our attack uses existing optimization attack techniques to..., but one should be able to understand the method without reading further references.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4499,"Specially a formal representation of the method should be included.[method-NEU], [SUB-NEU]",method,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4501,"What is the meaning of success rate in here?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4502,"Is it the % of times that the classifier is confused?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4503,"4- OPTSTRONG produces images that are notably more distorted than OPTBRITTLE (by RMS and also visually in the case of MNIST).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 4504,"So I actually cannot tell which method is better, at least in the MNIST experiment.[method-NEU, experiment-NEU], [CMP-NEU, EMP-NEG]",method,experiment,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEG,,, 4507,"Generated CIFAR images seem similar than the originals, although CIFAR images are very low resolution, so judging this is hard.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4508,"4- As a side note, it would be interesting to have an explanation about why region classification is providing a worse accuracy than point classification for CIFAR-10 benign samples.[explanation-NEU], [SUB-NEG]",explanation,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 4510,"I would like to see more formal definitions of the methods presented.[definitions-NEU], [SUB-NEU]",definitions,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4511,"Also, just by looking at RMS it is expected that this method works better than OPTBRITTLE, since the images are more distorted.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4512,"It would be needed to have a way of visually evaluate the similarity between original images and generated images.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4517,"This paper helps others to better understand the vulnerabilities of DNNs.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4522,"Pros: (1) the paper is very well organized and easy to read[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 4523,". (2) the proposed method is nicely designed to solve the specific real problem. For example, the edit distance is modified to be more consistent with the task.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4524,"(3) detailed information are provided about the experiments, such as data, model and inference.[experiments-POS], [SUB-POS]",experiments,,,,,,SUB,,,,,POS,,,,,,POS,,,, 4525,"Cons: (1) No direct comparisons with other methods are provided.[comparisons-NEG], [CMP-NEG]",comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4527,"If the performance (hit rate or coverage) of this paper is near stoa methods, then such experimental results will make this paper much more solid[performance-NEU, experimental results-NEU], [CMP-NEU, EMP-NEU]",performance,experimental results,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 4531,"The main novelty of this work are 1-balancing mechanism for the replay memory.[novelty-POS], [NOV-POS]",novelty,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4532,"2-Using multiple models for short and long term memory. 
[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 4533,"The most interesting aspect of the paper is using a generate model as replay buffer which has been introduced before.[aspect-POS], [EMP-POS]",aspect,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4534,"As explained in more detail below, it is not clear if the novelties introduced in this paper are important for the task or if they are they are tackling the core problem of catastrophic forgetting.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4535,"The paper claims using the task ID (either from Oracle or from a HMM) is an advantage of the model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4536,"It is not clear to me as why is the case, if anything it should be the opposite.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4537,"Humans and animal are not given task ID and it's always clear distinction between task in real world.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4538,"Deep Generative Replay section and description of DGDMN are written poorly and is very incomprehensible.[description-NEG], [CLA-NEG]",description,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4539,"It would have been more comprehensive if it was explained in more shorter sentences accompanied with proper definition of terms and an algorithm or diagram for the replay mechanism.[definition-NEU], [CLA-NEU]",definition,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4540,"Using the STTM during testing means essentially (number of STTM) + 1 models are used which is not same as preventing one network from catastrophic forgetting.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4541,"Baselines: why is Shin et al. (2017) not included as one of the baselines?[baselines-NEU], [CMP-NEG]",baselines,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 4542,"As it is the closet method to this paper it is essential to be compared against.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 4543,"I disagree with the argument in section 4.2. A good robust model against catastrophic forgetting would be a model that still can achieve close to SOTA.[argument-NEG], [EMP-NEU]",argument,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 4544,"Overfitting to the latest task is the central problem in catastrophic forgetting which this paper avoids it by limiting the model capacity.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4545,"12 pages is very long, 8 pages was the suggested page limit. 
It's understandable if the page limit is extended by one page, but 4 pages is over stretching.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 4549,"A considerable amount of prior work has investigated reformulating unsupervised word embedding objectives to incorporate external resources for improving representation learning.[prior work-POS], [SUB-POS]",prior work,,,,,,SUB,,,,,POS,,,,,,POS,,,, 4550,"The methodologies of Kiela et al (2015) and Bollegala et al (2016) are very similar to those proposed in this work.[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4551,"The main originality seems to be captured in Algorithm 1, which computes the strength between two words.[Algorithm-NEU], [NOV-NEU]",Algorithm,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4552,"Unlike prior work, this is a real-valued instead of a binary quantity.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4553,"Because this modification is not particularly novel, I believe this paper should primarily be judged based upon the effectiveness of the method rather than the specifics of the underlying techniques.[paper-NEU], [NOV-NEU, EMP-NEU]",paper,,,,,,NOV,EMP,,,,NEU,,,,,,NEU,NEU,,, 4554,"In this light, the performance relative to the baselines is particularly important.[performance-NEU, baselines-NEU], [CMP-NEU]",performance,baselines,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 4555,"From the results reported in Tables 1, 2, and 3, I do not see compelling evidence that +V, +A, +D, or +VAD consistently lead to significant performance increases relative to the baseline methods. [results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4556,"I therefore cannot recommend this paper for publication.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 4559,"Overall the authors seem to have captured the essence of a large number of popular CF models and I found that the proposed model classification is reasonable.[proposed model-POS], [EMP-POS]",proposed model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4560,"The notation also makes it easy to understand the differences between different models.[notation-POS], [PNF-POS]",notation,,,,,,PNF,,,,,POS,,,,,,POS,,,, 4561,"In that sense this paper could be useful to researchers wanting to better understand this field.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4562,"It may also be useful to develop further insights into current models (although the authors do not go that route).[current models-POS], [IMP-POS]",current models,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4563,"The impact of this paper may be limited in this community since it is a survey about a fairly niche topic (a subset of recommender systems) that may not be of central interest at ICLR.[paper-NEG], [APR-NEG, IMP-NEG]",paper,,,,,,APR,IMP,,,,NEG,,,,,,NEG,NEG,,, 4564,"Overall, I think this paper would be a better fit in a recsys, applied ML or information retrieval journal.[paper-NEU], [APR-NEU]",paper,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 4565,"A few comments: I find that there are several ways the paper could make a stronger contribution:[paper-NEU, contribution-NEU], [IMP-NEU]",paper,contribution,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 4566,"1) Use the unifying notation to discuss strengths and weaknesses of current approaches (ideally with insights about possible future approaches).[notation-NEU], [PNF-NEU]",notation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4567,"2) Report the results of a large study of many of the surveyed models on a large number of datasets.[surveyed models-NEU, datasets-NEU], [EMP-NEU]",surveyed 
models,datasets,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4568,"Ideally further insights could be derived from these results.[insights-NEU, results-NEU], [IMP-NEU]",insights,results,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 4569,"3) Provide a common code framework with all methods[framework-NEU, methods-NEU], [EMP-NEU]",framework,methods,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4570,"4) Add a discussion on more structured sources of covariates (e.g., social networks).[discussion-NEG], [SUB-NEG]",discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4571,"This could probably more or less easily be added as a subsection using the current classification.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4572,"- A similar classification of collaborative filtering models with covariates is proposed in this thesis (p.41): https://tspace.library.utoronto.ca/bitstream/1807/68831/1/Charlin_Laurent_201406_PhD_thesis.pdf [thesis-NEU], [CMP-NEU]",thesis,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4573,"- The paper is well written overall[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4574,"but the current version of the paper contains several typos.[typos-NEG], [PNF-NEG]]",typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4578,"1) The arguments for using clusters instead of single sentences are questionable. [arguments-NEU], [EMP-NEU]",arguments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4580,"It is not clear why that is not used or at least compared to the method presented.[method-NEG], [EMP-NEG, CMP-NEG]",method,,,,,,EMP,CMP,,,,NEG,,,,,,NEG,NEG,,, 4581,"2) The writing of the paper is often unclear (and sometimes grammatically wrong, typos etc. but that aside), there are some made up words/concepts (What is 'Golden Centroid Augmentation or Model Centroid Augmentation?[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4582,"The reason for attention is not to better memorize input information, it is to be able to attend to certain regions in the input.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4583,"The reason to use RL is to focus on optimizing directly for BLEU score or other metrics instead of likelihood but not for improving on the train/test loss discrepancy.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4584,"There are lots more examples of unclear statements in this paper -- it should be heavily improved.[statements-NEG], [CLA-NEG]",statements,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4585,"3) Sections 3 and 4 are very hard/impossible to understand, it is not clear how the formulas help the reader to better understand the concept in any way.[Section-NEG], [EMP-NEG]",Section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4586,"5) The results presented in this paper given the complexity of the method are just not great -- for example, WMT en-de is 21.3 BLEU reported by you while much older papers report for example 24.67 BLEU (Google's Neural Machine Translation System) -- why not first try to get to state-of-the-art with already published methods and then try to improve on top of that? . [results-NEG], [EMP-NEG, CMP-NEG]",results,,,,,,EMP,CMP,,,,NEG,,,,,,NEG,NEG,,, 4587,"6) Finally, what is missing most is simply why a much simpler method (just generate some data using a trained system and use that as additional training data, with details on how much etc.) 
-- is not directly compared to this very complicated looking method.[method-NEG], [CMP-NEG]",method,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4589,"This paper proposes a new learning method, called federated learning, to train a centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections.[paper-POS, method-POS], [NOV-POS, EMP-POS]",paper,method,,,,,NOV,EMP,,,,POS,POS,,,,,POS,POS,,, 4591,"The studied problem in this paper seems to be interesting, and with potential application in real settings like mobile phone-based learning.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4592,"Furthermore, the paper is easy to read with good organization.[paper-POS, organization-POS], [PNF-POS]",paper,organization,,,,,PNF,,,,,POS,POS,,,,,POS,,,, 4593,"However, there exist several major issues which are listed as follows:[issues-NEG], [EMP-NEG]",issues,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4595,"This learning procedure is heuristic, and there is no theoretical guarantee about the correctness (convergence) of this learning procedure.[correctness-NEU, procedure-NEU], [EMP-NEU]",correctness,procedure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4596,"The authors do not provide any analysis about what can be learned from this learning procedure.[analysis-NEG], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 4597,"Secondly, both structured update and sketched update methods adopted by this paper are some standard techniques which have been widely used in existing works.[paper-NEU, existing works-NEG], [CMP-NEG]",paper,existing works,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 4598,"Hence, the novelty of this paper is limited.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4599,"Thirdly, experiments on larger datasets, such as ImageNet, will improve the convincingness.[experiments-NEU, datasets-NEU], [SUB-NEU]]",experiments,datasets,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 4600,"Although the problem addressed in the paper seems interesting,[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4601,"but there lacks of evidence to support some of the arguments that the authors make.[evidence-NEG, arguments-NEG], [SUB-NEG]",evidence,arguments,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 4602,"And the paper does not contribute novelty to representation learning, therefore, it is not a good fit for the conference.[novelty-NEG], [APR-NEG, NOV-NEG]",novelty,,,,,,APR,NOV,,,,NEG,,,,,,NEG,NEG,,, 4603,"Detailed critiques are as following:1. The idea proposed by the authors seems too quite simple.[idea-NEG], [EMP-NEG]",idea,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4604,"It is just performing random projections for 1000 times and choose the set of projection parameters that results in the highest compactness as the dimensionality reduction model parameter before one-class SVM.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4606,"It would be nicer if they include results for all S_{attack} values that they have used in their experiments, which would also give the reader insights on how the anomaly detection performance degrades when the S_attack value change.[results-NEU, experiments-NEU, performance-NEU], [SUB-NEU]",results,experiments,performance,,,,SUB,,,,,NEU,NEU,NEU,,,,NEU,,,, 4607,"3. 
The paper claims that the nonlinear random projection is a defence against an adversary due to the randomness, but there are no results in the paper proving that other non-random projections are susceptible to an adversary that is designed to target that projection mechanism while nonlinear random projection is able to get away with that.[paper-NEG, results-NEG], [EMP-NEG, SUB-NEG]",paper,results,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 4608,"And PCA as a non-random projection would be a nice baseline to compare against.[baseline-NEU], [CMP-NEU]",baseline,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4609,"4. The paper seems to misuse the term ""False positive rate"" as the y label of figure 3(d/e/f).[paper-NEG], [PNF-NEG]",paper,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4610,"The definition of false positive rate is FP/(FP+TN), so if the FPR = 1 it means that all negative samples are labeled as positive.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4611,"So it is surprising to see FPR = 1 in Figure 3(d) when feature dimension = 784 while the f1 score is still high in Figure 3(a).[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4612,"From what I understand, the paper means to present the percentage of adversarial examples that are misclassified instead of all the anomaly examples that get misclassified.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4613,"The paper should come up with a better term for that evaluation.[paper-NEU, evaluation-NEU], [PNF-NEG]",paper,evaluation,,,,,PNF,,,,,NEU,NEU,,,,,NEG,,,, 4614,"5. The conclusion, that robustness of the learned model wrt the integrity attacks increases when the projection dimension becomes lower, cannot be drawn from Figure 3(d).[model-NEU, Figure-NEU], [EMP-NEG]",model,Figure,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 4615,"More experiments on more dimensionalities are needed to prove that. 6. In the appendix B results part, sometimes the word 'S_attack' is typed wrong. 
And the values in ""distorted/distorted"" columns in Table 5 do not match up with the ones in Figure 3(c).[Table-NEG], [EMP-NEG]",Table,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4617,"This paper proposes a method for multitask and few-shot learning by completing a performance matrix (which measures how well the classifier for task i performs on task j).[paper-NEU, method-NEU], [EMP-NEU]",paper,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4622,"However, in MTL, we usually assume that there are not enough samples to learn each task, and so this performance matrix may not be reliable.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4638,"is not.[proposed method-NEG, approach-NEG], [EMP-NEG]",proposed method,approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 4639,"For few-shot learning, the authors mentioned that the alpha's are adaptable parameters but did not mention how they are adapted.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 4640,"Experimental results are not convincing.[Experimental results-NEG], [EMP-NEG]",Experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4641,"- Comparison with existing clustered MTL methods mentioned above are missing.[methods-NEG], [EMP-NEG]",methods,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4642,"- As mentioned above, the proposed method can be computationally expensive (when used for MTL), but no timing results are reported.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4643,"- As the authors mentioned in section 4.2, most of the tasks have a significant amount of training data (and single-task baselines achieve good results), and so this is not a good benchmark dataset for MTL.[section-NEG, baselines-NEG, results-NEG], [EMP-NEG]]",section,baselines,results,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 4644,"** post-rebuttal revision ** I thank the authors for running the baseline experiments, especially for running the TwinNet to learn an agreement between two RNNs going forward in time.[baseline experiments-POS], [EMP-POS]",baseline experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4645,"This raises my confidence that what is reported is better than mere distillation of an ensemble of rnns.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4646,"I am raising the score.[score-POS], [REC-POS]",score,,,,,,REC,,,,,POS,,,,,,POS,,,, 4651,"The method can be interpreted to generalize other recurrent network regularizers, such as putting an L2 loss on the hidden states.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4652,"Experiments indicate that the approach is most successful when the regularized RNNs are conditional generators, which emit sequences of low entropy, such as decoders of a seq2seq speech recognition network.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4655,"I have one question about baselines: is the proposed approach better than training to forward generators and force an agreement between them (in the spirit of the concurrent ICLR submission https://openreview.net/forum?id rkr1UDeC-)?[baselines-NEU], [EMP-NEU]",baselines,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4656,"Also, would using the backward RNN, e.g. 
for rescoring, bring another advantage?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4657,"In other words, what is (and is there) a gap between an ensemble of a forward and backward rnn and the forward-rnn only, but trained with the state-matching penalty?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4658,"Quality: The proposed approach is well motivated and the experiments show the limits of the applicability range of the technique.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4659,"Clarity: The paper is clearly written[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4660,". Originality: The presented idea seems novel.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4661,"Significance: The method may prove to be useful to regularize recurrent networks, however I would like to see a comparison with ensemble methods.[method-POS, comparison-NEU], [IMP-POS, CMP-NEU, SUB-NEU]",method,comparison,,,,,IMP,CMP,SUB,,,POS,NEU,,,,,POS,NEU,NEU,, 4662,"Also, as the authors note, the method seems to be limited to conditional sequence generators.[method-NEU], [IMP-NEU]",method,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 4663,"Pros and cons: Pros: the method is simple to implement, the paper lists for what kind of datasets it can be used.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4664,"Cons: the method needs to be compared with typical ensembles of models going only forward in time, it may turn out that using the backward RNN is not necessary [method-NEU], [CMP-NEU, SUB-NEU]",method,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 4670,"Furthermore, in Section 4 a new method is proposed, that is to combine the best parts of the already existing models in the literature.[Section-NEU, models-NEU, method-NEU], [CMP-NEU, EMP-NEU]",Section,models,method,,,,CMP,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 4671,"Unfortunately, the experiments in Section 5 reveal that the proposed method yields results that are at most comparable with the existing methods.[experiments-NEG, Section-NEG, results-NEG], [CMP-NEG]",experiments,Section,results,,,,CMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 4672,"The paper is written well and provides good insights (mostly taxonomy) on the existing methods for neural network-based clustering.[paper-POS, insights-POS], [CLA-POS, CMP-POS]",paper,insights,,,,,CLA,CMP,,,,POS,POS,,,,,POS,POS,,, 4673,"However, the paper lacks novel content.[novel content-NEG], [NOV-NEG]",novel content,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4674,"The novel content of the paper sums up to the proposed method, that is composed of building blocks of existing models, and fails to impress in experimental results.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4675,"It could be that this paper belongs to another venue that is more appropriate for survey papers.[paper-NEG, venue-NEG], [APR-NEG, REC-NEG]",paper,venue,,,,,APR,REC,,,,NEG,NEG,,,,,NEG,NEG,,, 4676,"Also, it overall appears rather short. 
[null], [APR-NEG, SUB-NEG]]",null,,,,,,APR,SUB,,,,,,,,,,NEG,NEG,,, 4683,"This is used for learning initial feature representation of the student model.[student model-NEU], [EMP-NEU]",student model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4684,"Crucially, the teacher model will also rely on these learned features.[teacher model-NEU, learned features-NEU], [EMP-NEU]",teacher model,learned features,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4685,"Labelled data and unlabelled data are therefore lie in the same dimensional space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4686,"Specific questions to be addressed: 1)tClustering of strongly-labelled data points.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4687,"Thinking about the statement ""each an expert on this specific region of data space"", if this is the case, I am expecting a clustering for both strongly-labelled data points and weakly-labelled data points.[statement-NEU], [EMP-NEU]",statement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4688,"Each teacher model is trained on a portion of strongly-labelled data, and will only predict similar weakly-labelled data.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4689,"On a related remark, the nice side effect is not right as it was emphasized that data points with a high-quality label will be limited.[side effect-NEG], [EMP-NEG]",side effect,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4691,"It will be informative to provide results with a single GP model.[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4692,"2)tFrom modifying learning rates to weighting samples.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4693,"Rather than using uncertainty in label annotation as a multiplicative factor in the learning rate, it is more ""intuitive"" to use it to modify the sampling procedure of mini-batches (akin to baseline #4); sample with higher probability data points with higher certainty.[baseline-NEU, sample-NEU], [EMP-NEU]",baseline,sample,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4694,"Here, experimental comparison with, for example, an SVM model that takes into account instance weighting will be informative, and a student model trained with logits (as in knowledge distillation/model compression).[experimental comparison-NEU], [SUB-NEU]]",experimental comparison,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4703,"The method beats existing methods for text classification including d-LSTMs , BoWs, and ngram TFIDFs on held out classification accuracy.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4704,"the choice of baselines is convincing.[baselines-POS], [CMP-POS]",baselines,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4705,"What is the performance of the proposed method if the embeddings are initialized to pretrained word embeddings and a) trained for the classification task together with randomly initialized context units b) frozen to pretrained embeddings and only the context units are trained for the classification task?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4706,"The introduction was fine.[introduction-NEU], [PNF-NEU]",introduction,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4707,"Until page 3 the authors refer to the context units a couple of times without giving some simple explanation of what it could be.[page-NEU, explanation-NEG], [CLA-NEU]",page,explanation,,,,,CLA,,,,,NEU,NEG,,,,,NEU,,,, 4708,"A simple explanation in the introduction would improve the writing.[explanation-NEU, introduction-NEU], [PNF-NEU]",explanation,introduction,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 4709,"The related work section only 
makes sense *after* there is at least a minimal explanation of what the local context units do.[related work-NEU], [PNF-NEU]",related work,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4710,"A simple explanation of the method, for example in the introduction, would then make the connections to CNNs more clear.[explanation-NEU], [CLA-NEU, SUB-NEU]",explanation,,,,,,CLA,SUB,,,,NEU,,,,,,NEU,NEU,,, 4711,"Also, in the related work, the authors could include more citations (e.g. the d-LSTM and the CNN based methods from Table 2) and explain the qualitative differences between their method and existing ones.[related work-NEU, citations-NEU], [SUB-NEU, CMP-NEU]",related work,citations,,,,,SUB,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 4712,"The authors should consider adding equation numbers.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 4713,"The equation on the bottom of page 3 is fine, but the expressions in 3.2 and 3.3 are weird.[expressions-NEG], [PNF-NEG]",expressions,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4714,"A more concise explanation of the context-word region embeddings and the word-context region embeddings would be to instead give the equation for r_{i,c}.[explanation-NEU, equation-NEU], [EMP-NEU]",explanation,equation,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4715,"The included baselines are extensive and the proposed method outperforms existing methods on most datasets.[baselines-POS, proposed method-POS], [EMP-POS, CMP-POS]",baselines,proposed method,,,,,EMP,CMP,,,,POS,POS,,,,,POS,POS,,, 4716,"In section 4.5 the authors analyze region and embedding size, which are good analyses to include in the paper.[section-POS], [EMP-POS]",section,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4717,"Figure 2 and 3 could be next to each other to save space.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4718,"I found the idea of multi region sizes interesting, but no description is given on how exactly they are combined.[idea-POS, description-NEG], [EMP-POS, SUB-NEG]",idea,description,,,,,EMP,SUB,,,,POS,NEG,,,,,POS,NEG,,, 4719,"Since it works so well, maybe it could be promoted into the method section?[method section-NEU], [PNF-NEU]",method section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4720,"Also, for each data set, which region size worked best?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4721,"Qualitative analysis: It would have been nice to see some analysis of whether the learned embeddings capture semantic similarities, both at the embedding level and at the region level.[analysis-NEU], [SUB-NEU]",analysis,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4722,"It would also be interesting to investigate the columns of the context units, with different columns somehow capturing the importance of relative position.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4723,"Are there some words for which all columns are similar meaning that their position is less relevant in how they affect nearby words?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 4724,"And then for other words with variation along the columns of the context units, do their context units modulate the embedding more when they are closer or further away?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4725,"Pros: + simple model + strong quantitative results[model-POS, quantitative results-POS], [EMP-POS]",model,quantitative results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 4726,"Cons: - notation (i.e. 
precise definition of r_{i,c})[notation-NEG], [PNF-NEU]",notation,,,,,,PNF,,,,,NEG,,,,,,NEU,,,, 4727,"- qualitative analysis could be extended - writing could be improved [qualitative analysis-NEU, writing-NEU], [EMP-NEU, CLA-NEU]",qualitative analysis,writing,,,,,EMP,CLA,,,,NEU,NEU,,,,,NEU,NEU,,, 4732,"There was a concern or assumption in the original DTP paper about the target for the penultimate layer (before the output layer) which seems to have been excessive, i.e., the DTP propagation rule actually works on the last layer and there is no need to use the exact gradient propagation for it, at least according to these experiments.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4733,"In all cases, the variant using the DTP target update everywhere works about as well as using the true gradient for the output layer.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4734,"Another quirk that the proposed variant (SDTP) removes from the original DTP paper is the way noise is handled, and I agree that denoising makes a lot of sense (more than noise preservation) while being more biologically plausible. [null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4735,"Finally, the authors did a good job of establishing a benchmark which could be used by others attempting to evaluate new biologically plausible alternatives to backprop.[benchmark-POS], [EMP-POS]",benchmark,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4736,"The paper is very clear and I have just outlined the original contributions and significance (DTP may have been a bit forgotten and is worth another look, apparently).[paper-POS, contributions-POS, significance-POS], [CLA-POS, IMP-POS]",paper,contributions,significance,,,,CLA,IMP,,,,POS,POS,POS,,,,POS,POS,,, 4737,"In the negatives, the paper should mention in the discussion and intro that all the TP variants ignore the issue of dynamics.[discussion-NEU, intro-NEU], [PNF-NEU]",discussion,intro,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 4738,"We know that there are of course lateral connections and that feedback connections do not operate independently of the feedforward one (or there would be a need for a precise 'clockwork' mechanism to sweep layers forward and backward, which seems not very plausible).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4739,"In the experimental results section, it would be good to report the CNN results as well (with shared weights, same architecture)[experimental results section-NEU], [SUB-NEU]",experimental results section,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4740,". Also, training errors should be shown, since I suspect that underfitting may be happening especially in the case of ImageNet.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4741,"If that was the case, future work should first explore higher capacity (which may require larger-memory GPUs...).[future work-NEU], [IMP-NEU]",future work,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 4742,"Finally, in the description of architectures, please define the structure notation, e.g. 
(3 x 3, 32, 2, SAME).[description-NEU], [PNF-NEU]",description,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4744,"The paper compares some recently proposed method for validation of properties of piece-wise linear neural networks and claims to propose a novel method for the same.[method-POS], [NOV-POS]",method,,,,,,NOV,,,,,POS,,,,,,POS,,,, 4745,"Unfortunately, the proposed branch and bound method does not explain how to implement the bound part (compute lower bound) -- and has been used several times in the same application,;[method-NEG], [EMP-NEG, NOV-NEG]",method,,,,,,EMP,NOV,,,,NEG,,,,,,NEG,NEG,,, 4747,"Specifically, the authors say: In our experiments, we use the result of minimising the variable corresponding to the output of the network, subject to the constraints of the linear approximation introduced by Ehlers (2017a) which sounds a bit like using linear programming relaxations, which is what the approaches using branch and bound cited above use.[experiments-NEU, result-NEU], [EMP-NEU]",experiments,result,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4748,"If that is the case, the paper does not have any original contribution.[contribution-NEG], [NOV-NEG]",contribution,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4749,"If that is not the case, the authors may have some contribution to make, but have not made it in this paper, as it does not explain the lower bound computation other than the one based on LPs.[contribution-NEG], [IMP-NEG]",contribution,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 4750,"Generally, I find a jarring mis-fit between the motivation (deep learning for driving, presumably involving millions or billions of parameters) and the actual reach of the methods proposed (hundreds of parameters).[motivation-NEG, methods-NEG], [EMP-NEG]",motivation,methods,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 4751,"This reach is NOT inherent in integer programming, per se.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4753,"The authors may hence consider improving the LP relaxation, noting that the big-M constraint are notorious for producing weak relaxations.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4757,"It seems useful and practical to compute value iteration explicitly as this will propagate values for us without having to learn the propagated form through extensive gradient update steps. 
[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4758,"Extending to the scenario of non-stationary dynamics is important to make the idea applicable to common problems.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4759,"The work is therefore original and significant.[work-POS], [NOV-POS, IMP-POS]",work,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 4760,"The algorithm is evaluated on the original obstacle grids from Tamar 2016 and larger grids generated to test scalability.[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4761,"The authors' Prop and MVProp are able to solve the grids with much higher reliability at the end of training and converge much faster.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4762,"The M in MVProp in particular seems to be very useful in scaling up to the large grids.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4763,"The authors also show that the algorithm handles non-stationary dynamics in an avalanche task where obstacles can fall over time.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4764,"QUALITY The symbol d_{rew} is never defined -- what does ""new"" stand for?[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 4765,"It appears to be the number of latent convolutional filters or channels generated by the state embedding network. [null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 4766,"Section 2.2 Sentence 2: The final layer representing the encoding is given as R^{d_rew x d_x x d_y}.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4767,"Based on the description in the first paragraph of section 2, it sounds like d_rew might be the number of channels or filters in the last convolutional layer.[description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4768,"In equation 1, it wasn't obvious to me that the expression max_a q_{ij}^{k-1} q^{k} corresponds to an actual operation?[equation-NEU], [EMP-NEU]",equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4769,"The h( Phi( x ), v^{k-1} ) sort of makes sense ... value is calculated with respect to only the observation of the maze obstacles but the policy pi is calculated with respect to the joint observation and agent state.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4771,"> + b makes sense and reminds me of the Value Iteration network work where we take the previous value function, combine it with the reward function and use convolution to compute the expectation (the weights Wa encode the effect of transitions).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4772,"I gather the tensor Wa in R^{|A| x d_{rew} x d_x x d_y} both converts the feature embedding phi{o} to rewards and represents the transition / propagation of reward across states due to transitions and discounts at the same time? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4773,"I didn't understand the r^in, r^out representation in section 4.1. 
These are given by the domain?[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4774,"I did get the overall idea of efficiently creating a local value function in the neighborhood of the current state and passing this to the policy so that it can make a local decision.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4775,"A bit more detail defining terms, explaining their intuitive role and how the output of one module feeds into the next would be helpful.[detail-NEU], [SUB-NEU]",detail,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4777,"- It looks like the typos in the equations got fixed - The new phrase enables to learn to plan seems pretty awkward.[typos-POS], [CLA-POS]",typos,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4781,"It is shown empirically that the constrained update does not diverge on Baird's counter example and improves performance in a grid world domain and cart pole over DQN.[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4782,"This paper is reasonably readable.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4783,"The derivation for the constraint is easy to understand and seems to be an interesting line of inquiry that might show potential.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 4784,"The key issue is that the justification for the constrained gradients is lacking.[justification-NEG], [EMP-NEU]",justification,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 4785,"What is the effect, in terms of convergence, in modifying the gradient in this way?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4786,"It seems highly problematic to simply remove a whole part of the gradient, to reduce effect on the next state.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4787,"For example, if we are minimizing the changes our update will make to the value of the next state, what would happen if the next state is equivalent to the current state (or equivalent in our feature space)? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4788,"In general, when we project our update to be orthogonal to the maximal change of the next states value, how do we know it is a valid direction in which to update?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4789,"I would have liked some analysis of the convergence results for TD learning with this constraint, or some better intuition in how this effects learning.[analysis-NEU, intuition-NEU], [EMP-NEU]",analysis,intuition,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4790,"At the very least a mention of how the convergence proof would follow other common proofs in RL.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4791,"This is particularly important, since GTD provides convergent TD updates under nonlinear function approximation; the role for a heuristic constrained TD algorithm given convergent alternatives is not clear.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4792,"For the experiments, other baselines should be included, particularly just regular Q-learning.[experiments-NEU, baselines-NEU], [CMP-NEU]",experiments,baselines,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 4793,"The primary motivation comes from the use of a separate target network in DQN, which seems to be needed in Atari (though I am not aware of any clear result that demonstrates why, rather just from informal discussions).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4794,"Since you are not running experiments on Atari here, it is invalid to simply assume that such a second network is needed.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4795,"A baseline of regular Q-learning should be included for these simpler domains.[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4796,"The results in Baird's counter example are discouraging for the new constraints.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4797,"Because we already have algorithms which better solve this domain, why is your method advantageous?[method-NEU], [CMP-NEU, EMP-NEU]",method,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 4798,"The point of showing your algorithm not solve Baird's counter example is unclear.[algorithm-NEU], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 4799,"There are also quite a few correctness errors in the paper, and the polish of the plots and language needs work, as outlined below. [errors-NEU], [PNF-NEU]",errors,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4800,"There are several mistakes in the notation and background section.[background section-NEG], [PNF-NEG]",background section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4801,"1. ""If we consider TD-learning using function approximation, the loss that is minimized is the squared TD error.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4802,""" This is not true; rather, TD minimizes the mean-squared project Bellman error.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4803,"Further, L_TD is strangely defined: why a squared norm, for a scalar value? [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4804,"2. The definition of v and delta_TD w.r.t. to v seems unnecessary, since you only use Q.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4805,"As an additional (somewhat unimportant) point, the TD-error is usually defined as the negative of what you have.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4806,"3. 
In the function approximation case, the value function and q functions parameterized by theta are only approximations of the expected return.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4807,"4. Defining the loss w.r.t. the state, and taking the derivative of the state w.r.t. theta is a bit odd.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4808,"Likely what you meant is the q function, at state s_t?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4809,"Also, are you ignoring the gradient of the value at the next step?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4810,"If so, this further means that this is not a true gradient.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4811,"There is a lot of white space around the plots, which could be used for larger, clearer figures.[figures-NEU], [PNF-NEU]",figures,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4812,"The lack of labels on the plots makes them hard to understand at a glance, and the overlapping lines make finding a certain algorithm's performance much more difficult.[labels-NEG, performance-NEG], [PNF-NEG]",labels,performance,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 4813,"I would recommend combining the plots into one figure with a drawing program so you have more control over the size and position of the plots.[figure-NEU], [PNF-NEU]",figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 4814,"Examples of odd language choices: - ""The idea also does not immediately scale to nonlinear function approximation.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4815,"Bhatnagar et al. (2009) propose a solution by projecting the error on the tangent plane to the function at the point at which it is evaluated.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 4817,"What do you mean does not scale to nonlinear function approximation?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4820,"- ""the gradient at s_{t+1} that will change the value the most"" - This is too colloquial.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4821,"I think you simply mean the gradient of the value function, for the given s_t, but it's not clear. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4824,"I found that the paper suffers from many shortcomings that must be addressed: 1) The writing and organization are quite cumbersome and should be improved.[writing-NEG, organization-NEG], [CLA-NEG, PNF-NEG]",writing,organization,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 4825,"2) The authors state in the abstract (and elsewhere): ... showing that (model free) policy gradient methods globally converge to the optimal solution .... 
This is misleading and NOT true.[abstract-NEU], [EMP-NEG]",abstract,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 4826,"The authors show the convergence of the objective but not of the iterates sequence.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4827,"This should be rephrased elsewhere.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 4828,"3) An important literature on convergence of descent-type methods for semialgebraic objectives is available but not discussed.[literature-NEG], [SUB-NEG, CMP-NEG]",literature,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 4833,"My primary concern about this paper is the lack of interpretation on permuting the layers.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4835,"It is confusing why permuting these filters makes sense.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4836,"They accept different inputs (raw pixels vs edges).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4837,"Moreover, if the network contains pooling layers, different locations of the pooling layer result in different shapes of the feature map, and the soft ordering strategy Eq. (7) does not work.[Eq-NEG], [EMP-NEG]",Eq,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4838,"It makes sense that the more flexible model proposed by this paper performs better than previous models.[model-POS], [EMP-POS, CMP-POS]",model,,,,,,EMP,CMP,,,,POS,,,,,,POS,POS,,, 4840,"But I still wonder about the effect of permuting the layers.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4841,"The paper also needs more clarifications in the writing.[clarifications-NEU], [CLA-NEU]",clarifications,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 4842,"For example, in Section 3.3, how is each s_(i, j, k) sampled from S?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4843,"The parallel ordering terminology also seems to be arbitrary...[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 4847,"This paper can be seen as an extension of the paper attention is all you need that will be published at nips in a few weeks (at the time I write this review).[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4851,"The idea is interesting and trendy.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4852,"However, the paper is not really stand-alone.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4853,"A lot of tricks are stacked to reduce the performance degradation.[tricks-NEU, performance-NEU], [EMP-NEG]",tricks,performance,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 4854,"However, they're sometimes too briefly described to be understood by most readers.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 4855,"The training process looks highly elaborate with a lot of hyper parameters.[process-NEU], [SUB-NEU, EMP-NEU]",process,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 4856,"Maybe you could comment on this.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 4857,"For instance, the use of fertility supervision during training could be better motivated and explained.[training-NEG], [SUB-NEG, EMP-NEG]",training,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 4858,"Your choice of IBM 2 is weird since it doesn't include fertility.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4859,"Why not IBM 4, for instance?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4860,"How do you use the IBM model for supervision?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4861,"This is a simple example, but a lot of things in this paper are too briefly described and their impact not really evaluated.[example-NEU, paper-NEG], [SUB-NEG, 
IMP-NEG]]",example,paper,,,,,SUB,IMP,,,,NEU,NEG,,,,,NEG,NEG,,, 4862,"## Review Summary Overall, the paper's core claim, that increasing batch sizes at a linear rate during training is as effective as decaying learning rates, is interesting but doesn't seem to be too surprising given other recent work in this space.[recent work-NEG], [CMP-NEG]",recent work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 4863,"The most useful part of the paper is the empirical evidence to back up this claim, which I can't easily find in previous literature.[paper-POS], [CMP-POS]",paper,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4864,"I wish the paper had explored a wider variety of dataset tasks and models to better show how well this claim generalizes, better situated the practical benefits of the approach (how much wallclock time is actually saved?[paper-NEG, approach-NEG], [SUB-NEG, EMP-NEU]",paper,approach,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEU,,, 4865,"how well can it be integrated into a distributed workflow?), and included some comparisons with other recently recommended ways to increase batch size over time.[comparisons-NEU], [CMP-NEU]",comparisons,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4866,"## Pros / Strengths + effort to assess momentum / Adam / other modern methods[methods-POS], [EMP-POS]",methods,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4867,"+ effort to compare to previous experimental setups[experimental setups-POS], [CMP-POS]",experimental setups,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4868,"## Cons / Limitations - lack of wallclock measurements in experiments[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4869,"- only ~2 models / datasets examined, so difficult to assess generalization[models-NEG, datasets-NEG], [SUB-NEG]",models,datasets,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 4870,"- lack of discussion about distributed/asynchronous SGD[discussion-NEG], [SUB-NEG]",discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4871,"## Significance Many recent previous efforts have looked at the importance of batch sizes during training, so the topic is relevant to the community.[topic-POS], [IMP-POS]",topic,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4878,"## Quality Overall, only single training runs from a random initialization are used.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4879,"It would be better to take the best of many runs or to somehow show error bars, to avoid the reader wondering whether gains are due to changes in algorithm or to poor exploration due to bad initialization.[algorithm-NEG], [SUB-NEG]",algorithm,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4880,"This happens a lot in Sec. 5.2.[Sec-NEU], [EMP-NEU]",Sec,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4881,"Some of the experimental settings seem a bit haphazard and not very systematic.[experimental setting-NEG], [PNF-NEG]",experimental setting,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4882,"In Sec. 
5.2, only two learning rate scales are tested (0.1 and 0.5).[Sec-NEG], [SUB-NEG]",Sec,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4883,"Why not examine a more thorough range of values?[range-NEG], [SUB-NEG]",range,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4884,"Why not report actual wallclock times?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4885,"Of course having reduced number of parameter updates is useful, but it's difficult to tell how big of a win this could be.[parameter updates-NEG], [EMP-NEG]",parameter updates,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4886,"What about distributed SGD or asyncronous SGD (hogwild)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4888,"If we scale up to batch sizes of ~ N/10, we can only get 10x speedups in parallelization (in terms of number of parameter updates).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4889,"I think there is some subtle but important discussion needed on how this framework fits into modern distributed systems for SGD.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4890,"## Clarity Overall the paper reads reasonably well.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 4891,"Offering a related work feature matrix that helps readers keep track of how previous efforts scale learning rates or minibatch sizes for specific experiments could be valueable.[related work feature matrix-NEU], [SUB-NEU]",related work feature matrix,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4892,"Right now, lots of this information is just provided in text, so it's not easy to make head-to-head comparisons.[information-NEG], [PNF-NEG]",information,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4893,"Several figure captions should be updated to clarify which model and dataset are studied.[figure captions-NEG, model-NEG], [PNF-NEG]",figure captions,model,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 4894,"For example, when skimming Fig. 3's caption there is no such information.[Fig-NEG], [PNF-NEG]",Fig,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 4899,"Section 2 motivates the suggested linear scaling using previous SGD analysis from Smith and Le (2017).[Section-NEU], [CMP-NEU]",Section,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4907,"The paper shows this is effective by transferring the examples to 3D objects that are color 3D-printed and show some nice results.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4908,"The experimental results and video showing that the perturbation is effective for different camera angles, lighting conditions and background is quite impressive.[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4909,"This work convincingly shows that adversarial examples are a real-world problem for production deep-learning systems rather than something that is only academically interesting.[work-POS], [EMP-POS]",work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4910,"However, the authors claim that standard techniques require complete control and careful setups (e.g. in the camera case) is quite misleading, especially with regards to the work by Kurakin et. 
al.[standard techniques-NEG], [EMP-NEG]",standard techniques,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4911,"This paper also seems to have some problems of its own (for example the turtle is at relatively the same distance from the camera in all the examples, I expect the perturbation wouldn't work well if it was far enough away that the camera could not resolve the HD texture of the turtle).[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4912,"One interesting point this work raises is whether the algorithm is essentially learning universal perturbations (Moosavi-Dezfooli et. al).[work-POS], [CMP-POS]",work,,,,,,CMP,,,,,POS,,,,,,POS,,,, 4913,"If that's the case then complicated transformation sampling and 3D mapping setup would be unnecessary.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4914,"This may already be the case since the training set already consists of multiple lighting, rotation and camera type transformations so I would expect universal perturbations to already produce similar results in the real-world.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4915,"Minor comments: Section 1.1: a affine -> an affine Typo in section 3.4: of a of a[Section-NEG, Typo-NEG], [PNF-NEG]",Section,Typo,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 4916,"It's interesting in figure 9 that the crossword puzzle appears in the image of the lighthouse.[figure-POS], [PNF-POS]",figure,,,,,,PNF,,,,,POS,,,,,,POS,,,, 4920,"The numerical experiments show a significant improvement in accuracy of the approach.[numerical experiments-POS, accuracy-POS, approach-POS], [EMP-POS]]",numerical experiments,accuracy,approach,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 4922,"The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not).[idea-POS, results-POS], [CMP-POS]",idea,results,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 4924,"While the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts),;[approach-POS, experiments-POS, theoretical insights-POS], [EMP-POS, NOV-POS]",approach,experiments,theoretical insights,,,,EMP,NOV,,,,POS,POS,POS,,,,POS,POS,,, 4925,"the paper lacks clarity and sufficient details.[clarity-NEG, details-NEG], [CLA-NEG, SUB-NEG]",clarity,details,,,,,CLA,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 4927,"I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments.[architecture-NEU, experiments-NEU], [PNF-NEU]",architecture,experiments,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 4928,"Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level.[null], [PNF-NEU, EMP-NEU]",null,,,,,,PNF,EMP,,,,,,,,,,NEU,NEU,,, 4929,"It is hard to understand sufficiently well what the formalism means without more insigh[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 4930,"t. Also, the experiments need more details. 
For example, it is not clear what the numbers in Table 2 mean.[experiments-NEG, Table-NEG], [PNF-NEG, SUB-NEG]",experiments,Table,,,,,PNF,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 4933,"The problem with the RWA is that the averaging mechanism can be numerically unstable due to the accumulation operations when computing d_t.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4934,"Pros: - Addresses an issue of RWAs.[issue-POS], [EMP-POS]",issue,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4935,"Cons: -The paper addresses a problem with an issue with RWAs. [problem-NEG], [EMP-NEG]",problem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4936,"But it is not clear to me why would that be an important contribution.[contribution-NEU], [NOV-NEG, IMP-NEU]",contribution,,,,,,NOV,IMP,,,,NEU,,,,,,NEG,NEU,,, 4937,"-The writing needs more work.[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4938,"-The experiments are lacking and the results are not good enough.[experiments-NEG, results-NEG], [SUB-NEG, EMP-NEG]",experiments,results,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 4939,"General Comments: This paper addresses an issue regarding to RWA which is not really widely adopted and well-known architecture, because it seems to have some have some issues that this paper is trying to address.[issue-NEG], [NOV-NEG]",issue,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 4940,"I would still like to have a better justification on why should we care about RWA and fixing that model.[justification-NEU], [EMP-NEU]",justification,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4941,"The writing of this paper seriously needs more work.[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 4942,"The Lemma 1 doesn't make sense to me, I think it has a typo in it, it should have been (-1)^t c instead of -1^t c.[Lemma-NEG, typo-NEG], [CLA-NEG, PNF-NEG]",Lemma,typo,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 4943,"The experiments are only on toyish and small scale tasks.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 4944,"According to the results the model doesn't really do better than a simple LSTM or GRU.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4946,"(Score before author revision: 4) (Score after author revision: 7) I think the authors have taken both the feedback of reviewers as well as anonymous commenters thoroughly into account, running several ablations as well as reporting nice results on an entirely new dataset (MultiNLI) where they show how their multi level fusion mechanism improves a baseline significantly.[ablations-POS, results-POS], [SUB-POS, REC-POS]",ablations,results,,,,,SUB,REC,,,,POS,POS,,,,,POS,POS,,, 4947,"I think this is nice since it shows how their mechanism helps on two different tasks (question answering and natural language inference).[mechanism-POS], [EMP-POS]",mechanism,,,,,,EMP,,,,,POS,,,,,,POS,,,, 4948,"Therefore I would now support accepting this paper.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 4954,"Results on SQuAD show a small gain in accuracy (75.7->76.0 Exact Match).[Results-NEU, accuracy-NEU], [EMP-NEU]",Results,accuracy,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4955,"The gains on the adversarial set are larger but that is because some of the higher performing, more recent baselines don't seem to have adversarial numbers.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4956,"The authors also compare various attention functions (Table 5) showing a particularone (Symmetric + ReLU) works the best.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 4957,"Comments: -I feel overall the 
contribution is not very novel.[contribution-NEU], [NOV-NEG]",contribution,,,,,,NOV,,,,,NEU,,,,,,NEG,,,, 4958,"The general neural architecture that the authors propose in Section 3 is generally quite similar to the large number of neural architectures developed for this dataset (e.g. some combination of attention between question/context and LSTMs over question/context).[architecture-NEU, Section-NEU], [NOV-NEU, EMP-NEU]",architecture,Section,,,,,NOV,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 4959,"The only novelty is these HoW inputs to the extra attention mechanism that takes a richer word representation into account.[novelty-NEU], [NOV-NEU]",novelty,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4960,"-I feel the model seems overly complicated for the small gain (i.e. 75.7->76.0 Exact Match), especially on a relatively exhausted dataset (SQuAD) that is known to have lots of peculiarities (see anonymous comment below).[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4961,"It is possible the gains just come from having more parameters.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4962,"-The authors (on page 6) claim that by running attention multiple times with different parameters but different inputs (i.e. alpha_{ij}^l, alpha_{ij}^h, alpha_{ij}^u) it will learn to attend to different regions for different levels.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4963,"However, there is nothing enforcing this and the gains probably just come from having more parameters/complexity.[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 4966,"The extension of Gaussian Processes to Gaussian Process Neurons is reasonably straightforward, with the crux of the paper being the path taken to extend GPNs from intractable to tractable.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 4968,"These are temporary and are later made redundant.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4969,"To avoid the intractable marginalization over latent variables, the paper applies variational inference to approximate the posterior within the context of given training data.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 4970,"Overall the process by which GPNs are made tractable to train leverages many recent and not so recent techniques.[process-NEU], [EMP-NEU]",process,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4971,"The resulting model is theoretically scalable to arbitrary datasets as the total model parameters are independent of the number of training samples.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4972,"It is unfortunate but understandable that the GPN model experiments are confined to another paper.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 4975,"Although the BN paper suggests using BN before the non-linearity, many articles have been using BN after the non-linearity, which then gives normalized activations (https://github.com/ducha-aiki/caffenet-benchmark/blob/master/batchnorm.md) and also better overall performance.[performance-NEU], [CMP-NEU]",performance,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 4977,"I encourage the authors to validate their claims against the simple approach of using BN after the non-linearity. [claims-NEU], [EMP-NEU]",claims,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 4982,"It also showed good online A/B test performance, which indicates that this approach has been tested in the real world.[performance-POS, approach-POS], [EMP-POS]",performance,approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 4983,"Two small concerns: 1. In Section 3.3. 
I am not fully sure why the proposed predictor model is able to win over LSTM.[Section-NEU, model-NEU], [EMP-NEU]",Section,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 4985,"Some insights might be useful there.[insights-NEU], [SUB-NEU]",insights,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 4986,"2. The title of this paper is weird.[title-NEU], [PNF-NEG]",title,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 4987,"Suggest to rephrase unreasonable to something more positive. [null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 4993,"Extensive experiments shows that VCL performs very well, compared with some state-of-the-art methods.[experiments-POS], [SUB-POS, CMP-POS]",experiments,,,,,,SUB,CMP,,,,POS,,,,,,POS,POS,,, 4995,"Both ideas have been investigated in Bayesian literature, while (2) has been recently investigated in continual learning.[ideas-NEU], [NOV-NEU]",ideas,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 4996,"Therefore, the authors seems to be the first to investigate the effectiveness of (1) for continual learning.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 4997,"From extensive experiments, the authors find that the first idea results in VCL which can outperform other state-of-the-art approaches, while the second idea plays little role. [experiments-POS], [CMP-POS, EMP-POS]",experiments,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 4998,"The finding of the effectiveness of idea (1) seems to be significant. [finding-POS], [IMP-POS]",finding,,,,,,IMP,,,,,POS,,,,,,POS,,,, 4999,"The authors did a good job when providing a clear presentation, a detailed analysis about related work, an employment to deep discriminative models and deep generative models, and a thorough investigation of empirical performance.[presentation-POS, related work-POS, empirical performance-POS], [PNF-POS, EMP-POS]",presentation,related work,empirical performance,,,,PNF,EMP,,,,POS,POS,POS,,,,POS,POS,,, 5000,"There are some concerns the authors should consider: - Since the coreset plays little role in the superior performance of VCL, it might be better if the authors rephrase the title of the paper.[title-NEU], [PNF-NEU]",title,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5002,"Their finding of the effectiveness of online variational inference for continual learning should be reflected in the writing of the paper as well.[finding-NEU], [CLA-NEU]",finding,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 5003,"- It is unclear about the sensitivity of VCL with respect to the size of the coreset. The authors should investigate this aspect.[null], [SUB-NEU, EMP-NEU]",null,,,,,,SUB,EMP,,,,,,,,,,NEU,NEU,,, 5004,"- What is the trade-off when the size of the coreset increases? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5010,"The evaluation is extensive and mostly very good.[evaluation-POS], [EMP-POS]",evaluation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5011,"Substantial data set of 29m lines of code.[data set-POS], [EMP-POS]",data set,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5013,"Nice ablation studies.[ablation studies-POS], [EMP-POS]",ablation studies,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5014,"I would have liked to see separate precision and recall rather than accuracy.[precision-NEG, recall-NEG], [SUB-NEG]",precision,recall,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5015,"The current 82.1% accuracy is nice to see,[accuracy-POS], [EMP-POS]",accuracy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5016,"but if 18% of my program variables were erroneously flagged as errors, the tool would be useless.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5017,"I'd like to know if you can tune the threshold to get a precision/recall tradeoff that has very few false warnings, but still catches some errors.[precision-NEU], [EMP-NEU]",precision,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5018,"Nice work creating an implementation of fast GGNNs with large diverse graphs.[work-POS], [EMP-POS]",work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5020,"Great to see that the method is fast---it seems fast enough to use in practice in a real IDE.[method-POS], [IMP-POS]",method,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5021,"The model (GGNN) is not particularly novel, but I'm not much bothered by that.[model-NEG], [NOV-NEG]",model,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5022,"I'm very happy to see good application papers at ICLR.[papers-POS], [APR-POS]",papers,,,,,,APR,,,,,POS,,,,,,POS,,,, 5023,"I agree with your pair of sentences in the conclusion: Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning.[sentences-POS, conclusion-POS], [NOV-POS]",sentences,conclusion,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 5024,"It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5025,"I'd like to see work in this area encouraged. So I recommend acceptance.[work-POS], [REC-POS]",work,,,,,,REC,,,,,POS,,,,,,POS,,,, 5026,"If it had better (e.g. 
ROC curve) evaluation and some modeling novelty, I would rate it higher still.[evaluation-NEU], [EMP-NEU, REC-NEU]",evaluation,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEU,,, 5027,"Small notes: The paper uses the term data flow structure without defining it.[paper-NEG, term-NEG], [SUB-NEG]",paper,term,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5029,"Perhaps future work will see if the results are much different in other languages.[future work-NEU, results-NEU], [IMP-NEU]]",future work,results,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 5033,"Results show that a Seq2Tree model outperforms a Seq2Seq model, that adding search to Seq2Tree improves results,[Results-NEU, model-NEU], [EMP-NEU]",Results,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5034,"and that search without any training performs worse, although the experiments assume that only a fixed number of programs are explored at test time regardless of the wall time that it takes a technique.[experiments-NEG, technique-NEU], [EMP-NEG]",experiments,technique,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5035,"Strengths: - Reasonable approach, quality is good[approach-POS], [CLA-POS, EMP-POS]",approach,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 5036,"- The DSL is richer than that of previous related work like Balog et al. (2016).[previous related work-POS], [EMP-POS]",previous related work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5037,"- Results show a reasonable improvement in using a Seq2Tree model over a Seq2Seq model, which is interesting.[Results-POS, model-POS], [CMP-POS, EMP-POS]",Results,model,,,,,CMP,EMP,,,,POS,POS,,,,,POS,POS,,, 5038,"Weaknesses: - There are now several papers on using a trained neural network to guide search, and this approach doesn't add too much on top of previous work.[approach-NEG, previous work-NEG], [CMP-NEG]",approach,previous work,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 5039,"Using beam search on tree outputs is a bit of a minor contribution.[contribution-NEU], [NOV-NEU]",contribution,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 5040,"- The baselines are just minor variants of the proposed method.[baselines-NEG, proposed method-NEG], [CMP-NEG]",baselines,proposed method,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 5041,"It would be stronger to compare against a range of different approaches to the problem, particularly given that the paper is working with a new dataset.[approaches-NEG, dataset-NEG], [SUB-NEG]",approaches,dataset,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5042,"- Data is synthetic, and it's hard to get a sense for how difficult the presented problem is, as there are just four example problems given.[Data-NEG, example problems-NEG], [CMP-NEG, SUB-NEG]",Data,example problems,,,,,CMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 5043,"Questions: - Why not compare against Seq2Seq + Search?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5044,"- How about comparing wall time against a traditional program synthesis technique (i.e., no machine learning), ignoring the descriptions.[null], [CMP-NEG, SUB-NEG]",null,,,,,,CMP,SUB,,,,,,,,,,NEG,NEG,,, 5045,"I would guess that an efficiently-implemented enumerative search technique could quickly explore all programs of depth 3, which makes me skeptical that Figure 4 is a fair representation of how well a non neural network-based search could do.[Figure-NEG], [EMP-NEU, PNF-POS]",Figure,,,,,,EMP,PNF,,,,NEG,,,,,,NEU,POS,,, 5046,"- Are there plans to release the dataset?[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5047,"Could you provide a large sample of the data at an anonymized link?[large sample-NEU, data-NEU], [SUB-NEU]",large 
sample,data,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 5048,"I'd re-evaluate my rating after looking at the data in more detail.[rating-NEU, data-NEU], [REC-NEU]]",rating,data,,,,,REC,,,,,NEU,NEU,,,,,NEU,,,, 5052,"To me, there is a major flaw in the approach.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5058,"This assumption is not met in the HealthGathering environment as several different states may generate very similar vision features.[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5059,"This causes the method not to work.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5060,"This brings us back to the fact that features encoding the actual dynamics, potentially on many consecutive states (e.g. feature expectations used in IRL or occupancy probability used in Ho and Ermon 2016), are mandatory.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5061,"The method is also very close to the simplest IRL method possible which consists in placing positive rewards on every state the expert visited.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5062,"So I would have liked a comparison to that simple method (using similar regression technique to generalize over states with similar features).[method-NEU], [CMP-NEU]",method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5063,"Finally, I also think that using expert data generated by a pre-trained network makes the experimental section very weak.[experimental section-NEG], [EMP-NEG]",experimental section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5064,"Indeed, it is unlikely that this kind of data can be obtained and training on this type of data is just a kind of distillation of the optimal network making the weights of the network close to the right optimum.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5066,"Concerning the related work, the authors didn't mention the Universal Value Function Approximation (Schaul et al, @ICML 2015) which precisely extends V and Q functions to generalize over goals.[related work-NEG], [CMP-NEG]",related work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5067,"This very much relates to the method used to generalize over subgoals in the paper.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5068,"Also, the state if the art in IRL and learning from demonstration is lacking a lot of references.[references-NEG], [SUB-NEG, CMP-NEG]",references,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 5072,"BTW, I would suggest to refer to published papers if they exist instead of their Arxiv version (e.g. Hester et al, DQfD). [published papers-NEU], [PNF-NEU]",published papers,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5077,"The algorithms are shown to be consistent, and demonstrated to be more efficient than an existing semi-dual algorithm.[algorithms-POS], [EMP-POS]",algorithms,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5079,"These algorithms seem to be an improvement over the current state of the art for this problem setting,[algorithms-POS], [CMP-POS]",algorithms,,,,,,CMP,,,,,POS,,,,,,POS,,,, 5080,"although more of a discussion of the relationship to the technique of Genevay et al. 
would be useful: how does your approach compare to the full-dual, continuous case of that paper if you simply replace their ball of RKHS functions with your class of deep networks?[discussion-NEG, approach-NEU], [SUB-NEU, CMP-NEU]",discussion,approach,,,,,SUB,CMP,,,,NEG,NEU,,,,,NEU,NEU,,, 5081,"The consistency properties are nice,[properties-POS], [EMP-POS]",properties,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5083,"The proofs are clear, and seem correct on a superficial readthrough; I have not carefully verified them.[proofs-POS], [EMP-POS]",proofs,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5084,"The proofs are mainly limited in that they don't refer in any way to the class of approximating networks or the optimization algorithm, but rather only to the optimal solution.[proofs-NEG, optimal solution-NEG], [SUB-NEG]",proofs,optimal solution,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5085,"Although of course proving things about the actual outcomes of optimizing a deep network is extremely difficult, it would be helpful to have some kind of understanding of how the class of networks in use affects the solutions.[outcomes-NEG, solutions-NEG], [EMP-NEG]",outcomes,solutions,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5086,"In this way, your guarantees don't say much more than those of Arjovsky et al., who must assume that their critic function reaches the global optimum: essentially you add a regularization term, and show that as the regularization decreases it still works, but under seemingly the same kind of assumptions as Arjovsky et al.'s approach which does not add an explicit regularization term at all.[assumptions-NEG], [CMP-NEG]",assumptions,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5087,"Though it makes sense that your regularization might lead to a better estimator, you don't seem to have shown so either in theory or empirically.[theory-NEG], [SUB-NEG]",theory,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5088,"The performance comparison to the algorithm of Genevay et al. is somewhat limited: it is only on one particular problem, with three different hyperparameter settings.[algorithm-NEG], [CMP-NEG]",algorithm,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5089,"Also, since Genevay et al. propose using SAG for their algorithm, it seems strange to use plain SGD; how would the results compare if you used SAG (or SAGA/etc) for both algorithms?[algorithm-NEU, results-NEU], [CMP-NEU]",algorithm,results,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 5090,"In discussing the domain adaptation results, you mention that the L2 regularization works very well in practice, but don't highlight that although it slightly outperforms entropy regularization in two of the problems, it does substantially worse in the other.[results-NEG, problems-NEG], [EMP-NEG]",results,problems,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5091,"Do you have any guesses as to why this might be?[guesses-NEU], [EMP-NEU]",guesses,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5092,"For generative modeling: you do have guarantees that, *if* your optimization and function parameterization can reach the global optimum, you will obtain the best map relative to the cost function.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5093,"But it seems that the extent of these guarantees are comparable to those of several other generative models, including WGANs, the Sinkhorn-based models of Genevay et al. (2017, https://arxiv.org/abs/1706.00292/), or e.g. 
with a different loss function the MMD-based models of Li, Swersky, and Zemel (ICML 2015) / Dziugaite, Roy, and Ghahramani (UAI 2015).[models-POS], [CMP-POS]",models,,,,,,CMP,,,,,POS,,,,,,POS,,,, 5094,"The different setting than the fundamental GAN-like setup of those models is intriguing, but specifying a cost function between the source and the target domains feels exceedingly unnatural compared to specifying a cost function just within one domain as in these other models.[models-NEG], [EMP-NEG]",models,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5095,"Minor: In (5), what is the purpose of the -1 term in R_e?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5096,"It seems to just subtract a constant 1 from the regularization term.[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5100,"I think the paper does a fairly good job at doing what it does,[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5103,"But then they just concept net to augment text.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5104,"This is quite a static strategy, I was assuming the authors are going to use some IR method over the web to back up their motivation.[strategy-NEG], [EMP-NEG]",strategy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5105,"As is, I don't really see how this motivation has anything to do with getting things out of a KB.[motivation-NEG], [EMP-NEG]",motivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5106,"A KB is usually a pretty static entity, and things are added to it at a slow pace.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5107,"* The author's main claim is that retrieving background knowledge and adding it when reading text can improve performance a little when doing QA and NLI.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5108,"Specifically they take text and add common sense knowledge from concept net.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5109,"The authors do a good job of showing that indeed the knowledge is important to gain this improvement through analysis.[job-POS], [EMP-POS]",job,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5110,"However, is this statement enough to cross the acceptance threshold of ICLR?[statement-NEU, acceptance threshold-NEU], [APR-NEU]",statement,acceptance threshold,,,,,APR,,,,,NEU,NEU,,,,,NEU,,,, 5111,"Seems a bit marginal to me.[null], [APR-NEG]",null,,,,,,APR,,,,,,,,,,,NEG,,,, 5112,"* The author's propose a specific way of incorporating knowledge into a machine reading algorithm through re-embeddings that have some unique properties of sharing embeddings across lemmas and also having some residual connections that connect embeddings and some processed versions of them.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5113,"To me it is unclear why we should use this method for incorporating background knowledge and not some simpler way.[method-NEG], [EMP-NEG, CLA-NEG]",method,,,,,,EMP,CLA,,,,NEG,,,,,,NEG,NEG,,, 5114,"For example, have another RNN read the assertions and somehow integrate that.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5115,"The process of re-creating embeddings seems like one choice in a space of many, not the simplest, and not very well motivated.[process-NEG], [EMP-NEG]",process,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5116,"There are no comparisons to other possibilities.[possibilities-NEG], [CMP-NEG]",possibilities,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5117,"As a result, it is very hard for me to say anything about whether this particular architecture is interesting or is it just in general that background knowledge from concept net is 
useful.[result-NEG], [IMP-NEG]",result,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 5118,"As is, I would guess the second is more likely and so I am not convinced the architecture itself is a significant contribution.[contribution-NEG], [IMP-NEG]",contribution,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 5119,"So to conclude, the paper is well-written, clear, and has nice results and analysis.[paper-POS, results-POS, analysis-POS], [CLA-POS, EMP-NEG]",paper,results,analysis,,,,CLA,EMP,,,,POS,POS,POS,,,,POS,NEG,,, 5120,"The conclusion is that reading background knowledge from concept net boost performance using some architecture.[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5121,"This is nice to know but I think does not cross the acceptance threshold.[acceptance threshold-NEG], [REC-NEG]]",acceptance threshold,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 5123,"On the positive side: This is the first paper to my knowledge that has shown that grid cells arise as a product of a navigation task demand.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5124,"I enjoyed reading the paper which is in general clearly written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5126,"On the negative side: The manuscript is not written in a way that is suitable for the target ICLR audience which will include, for the most part, readers that are not expert on the entorhinal cortex and/or spatial navigation.[manuscript-NEG], [CLA-NEG, APR-NEG]",manuscript,,,,,,CLA,APR,,,,NEG,,,,,,NEG,NEG,,, 5127,"First, the contributions need to be more clearly spelled out.[contributions-NEU], [CLA-NEU]",contributions,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 5128,"In particular, the authors tend to take shortcuts for some of their statements.[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 5130,"require hand-crafted and fined tuned connectivity patterns, and the evidence of such specific 2D connectivity patterns has been largely absent.[introduction-NEU], [SUB-NEG]",introduction,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 5131,""" This statement is problematic for two reasons: (i) It is rather standard in the field of computational neuroscience to start from reasonable assumptions regarding patterns of neural connectivity then proceed to show that the resulting network behaves in a sensible way and reproduces neuroscience data.[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5132,"This is not to say that demonstrating that these patterns can arise as a byproduct is not important, on the contrary.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5133,"These are just two complementary lines of work.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5134,"In the same vein, it would be silly to dismiss the present work simply because it lacks spikes.[present work-NEU], [EMP-NEU]",present work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5136,"of such specific 2D connectivity patterns. 
[previous work-NEU], [CMP-NEG]",previous work,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 5137,"My understanding is that one of the main assumptions made in previous work is that of a center-surround pattern of lateral connectivity.[assumptions-NEU, previous work-NEU], [EMP-NEU]",assumptions,previous work,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5138,"I would argue that there is a lot of evidence for local inhibitory connection in the cortex.[evidence-NEU], [EMP-NEU]",evidence,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5139,"Somewhat related to this point, it would be insightful to show the pattern of local connections learned in the RNN to see how it differs from the aforementioned pattern of connectivity[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 5140,". Second, the navigation task used needs to be better justified.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5141,"Why training a network to predict 2D spatial location from velocity inputs?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5142,"Why is this a reasonable starting point to study the emergence of grid cells?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5143,"It might be obvious to the authors but it will not be to the ICLR audience.[null], [APR-NEG]",null,,,,,,APR,,,,,,,,,,,NEG,,,, 5144,"Dead-reckoning (i.e., spatial localization from velocity inputs) is of critical ecological relevance for many animals.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5145,"This needs to be spelled out and a reference needs to be added.[reference-NEU], [SUB-NEG]",reference,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 5146,"As a side note, I would have expected the authors to use actual behavioral data but instead, the network is trained using artificial trajectories based on modified Brownian motion"".[data-NEU], [EMP-NEU]",data,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5147,"This seems like an important assumption of the manuscript but the issue is brushed off and not discussed.[assumption-NEG], [SUB-NEG]",assumption,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5148,"Why is this a reasonable assumption to make?[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5149,"Is there any reference demonstrating that rodent locomotory behavior in a 2D arena is random?[reference-NEU], [EMP-NEU]",reference,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5150,"Figure 4 seems kind of strange.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5151,"I do not understand how the ""representative units"" are selected and where the ""late"" selectivity on the far right side in panel a arises if not from ""early"" units that would have to travel ""far"" from the left side... [null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 5153,"I found the study of the effect of regularization to be potentially the most informative for neuroscience but it is only superficially treated.[study-NEU], [EMP-NEU]",study,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5154,"It would have been nice to see a more systematic treatment of the specifics of the regularization needed to get grid cells. [null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 5156,"This paper presents an interesting idea to word embeddings that it combines a few base vectors to generate new word embeddings. 
[paper-NEU, idea-NEU], [NOV-NEU]",paper,idea,,,,,NOV,,,,,NEU,NEU,,,,,NEU,,,, 5157,"It also adopts an interesting multicodebook approach for encoding than binary embeddings.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5158,"The paper presents the proposed approach to a few NLP problems and have shown that this is able to significant reduce the size, increase compression ratio, and still achieved good accuracy.[proposed approach-POS, accuracy-POS], [IMP-POS, EMP-POS]",proposed approach,accuracy,,,,,IMP,EMP,,,,POS,POS,,,,,POS,POS,,, 5159,"The experiments are convincing and solid.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5160,"Overall I am weakly inclined to accept this paper.[paper-POS], [REC-POS]]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 5167,"The ideas are interesting,[ideas-POS], [EMP-POS]",ideas,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5169,"Major comments: 1. When dealing with a 2 layer network where there are 2 matrices W_1, W_2 to optimize over, It is not clear to me why optimizing over W_1 is a quasi-convex optimization problem?[comments-NEU], [EMP-NEG]",comments,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5170,"The authors seem to use the idea that solving a GLM problem is a quasi-convex optimization problem.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5171,"However, optimizing w.r.t. W_1 is definitely not a GLM problem, since W_1 undergoes two non-linear transformations one via phi_1 and another via phi_2.[problem-NEU], [EMP-NEG]",problem,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5172,"Could the authors justify why minimizing w.r.t. W_1 is still a quasi-convex optimization problem?[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5174,"This is an interesting result, and useful in its own right.[result-POS], [EMP-POS]",result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5175,"However, it is not clear to me why this result is even relevant here.[result-NEG], [SUB-NEG]",result,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5177,"However, GLMs are functions from R^d ---> R. So, it is not at all clear to me how Theorem 3.4, 3.5 and eventually 3.6 are useful for the autoencoder problem that the authors care about.[Theorem-NEU, problem-NEU], [EMP-NEG]",Theorem,problem,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 5178,"Yes they are useful if one was doing 2-layer neural networks for binary classification, but it is not clear to me how they are useful for autoencoder problems.[problems-NEU], [EMP-NEG]",problems,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5179,"3. Experimental results for classification are not convincing enough.[Experimental results-NEG], [EMP-NEG]",Experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5180,"If, one looks at Table 1. SGD outperforms DANTE on ionosphere dataset and is competent with DANTE on MNIST and USPS.[Table-NEU], [EMP-NEG, EMP-NEG]",Table,,,,,,EMP,EMP,,,,NEU,,,,,,NEG,NEG,,, 5181,"4. 
The results on reconstruction do not show any benefits for DANTE over SGD (Figure 3).[results-NEU, Figure-NEU], [EMP-NEG]",results,Figure,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 5182,"I would recommend the authors to rerun these experiments but truncate the iterations early enough.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5183,"If DANTE has better reconstruction performance than SGD with fewer iterations then that would be a positive result.[performance-NEU, result-NEU], [EMP-NEU]",performance,result,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5189,"This paper is concerned with both security and machine learning, but there is no clear contributions to either field.[contributions-NEG], [EMP-NEG]",contributions,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5190,"From the machine learning perspective, the proposed attacking method is standard without any technical novelty.[method-NEG], [NOV-NEG]",method,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5191,"From the security perspective, the scenarios are too simplistic.[scenarios-NEG], [EMP-NEG]",scenarios,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5192,"The encoding-decoding mechanism being attacked is too simple without any security enhancement.[mechanism-NEG], [EMP-NEG]",mechanism,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5193,"This is an unrealistic scenario.[scenario-NEG], [EMP-NEG]",scenario,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5194,"For applications with security concerns, there should have been methods to guard against man-in-the-middle attack, and the paper should have at least considered some of them.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5195,"Without considering the state-of-the-art security defending mechanism, it is difficult to judge the contribution of the paper to the security community.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5196,"I am not a security expert, but I doubt that the proposed method are formulated based on well founded security concepts and ideas.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5197,"For example, what are the necessary and sufficient conditions for an attacking method to be undetectable?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5198,"Are the criteria about the magnitude of epsilon given on Section 3.3. necessary and sufficient?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5199,"Is there any reference for them?[reference-NEU], [SUB-NEU]",reference,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5200,"Why do we require the correspondence between the classification confidence of tranformed and original data?[data-NEU], [EMP-NEU]",data,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5201,"Would it be enough to match the DISTRIBUTION of the confidence?[null], [EMP-NEU]]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5203,"The contribution is the addition of an explicit exemplar constraint into the formulation which allows best matches from the other domain to be retrieved.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5204,"The results show that the proposed method is superior for the task of exact correspondence identification and that AN-GAN rivals the performance of pix2pix with strong supervision.[results-POS, proposed method-POS], [EMP-POS]",results,proposed method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5205,"Negatives: 1.) 
The task of exact correspondence identification seems contrived.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5206,"It is not clear which real-world problems have this property of having both all inputs and all outputs in the dataset, with just the correspondence information between inputs and outputs missing.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5207,"2.) The supervised vs unsupervised experiment on Facades->Labels (Table 3) is only one scenario where applying a supervised method on top of AN-GAN's matches is better than an unsupervised method. [Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5208,"More transfer experiments of this kind would greatly benefit the paper and support the conclusion that ""our self-supervised method performs similarly to the fully supervised method.[experiments-NEU], [SUB-NEU, EMP-NEU]",experiments,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 5209,""" Positives: 1.) The paper does a good job motivating the need for an explicit image matching term inside a GAN framework.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5210,"2.) The paper shows promising results on applying a supervised method on top of AN-GAN's matches.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5212,"2. DiscoGAN should have the Kim et al citation, right after the first time it is used. I had to look up DiscoGAN to realize it is just Kim et al.[citation-NEG], [PNF-NEG]",citation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5215,". Authors formulate the active learning problem as core-set selection and present a novel strategy.[problem-POS], [NOV-POS]",problem,,,,,,NOV,,,,,POS,,,,,,POS,,,, 5216,"Experiments are performed on three datasets to validate the effectiveness of the proposed method comparing with some baselines.[Experiments-NEU, datasets-NEU, proposed method-NEU], [SUB-NEU, EMP-NEU]",Experiments,datasets,proposed method,,,,SUB,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 5217,"Theoretical analysis is presented to show the performance of any selected subset using the geometry of the data points.[Theoretical analysis-NEU], [SUB-NEU]",Theoretical analysis,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5218,"Authors are suggested to perform experiments on more datasets to make the results more convincing.[experiments-NEU, results-NEU], [SUB-NEU]",experiments,results,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 5219,"The initialization of the CNN model is not clearly introduced, which however, may affect the performance significantly.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5221,"I think I should understand the gist of the paper, which is very interesting, where the action of tilde Q(s,a) is drawn from a distribution.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5223,"All these seems very sound and interesting.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5224,"Weakness: 1. The major weakness is that throughout the paper, I do not see an algorithm formulation of the Smoothie algorithm, which is the major algorithmic contribution of the paper (I think the major contribution of the paper is on the algorithmic side instead of theoretical).[algorithm-NEG, contribution-NEG, paper-NEG], [EMP-NEG]",algorithm,contribution,paper,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 5225,"Such representation style is highly discouraging and brings about un-necessary readability difficulties.[style-NEG, readability-NEG], [PNF-NEG]",style,readability,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 5226,"2. Sec. 
3.3 and 3.4 is a little bit abbreviated from the major focus of the paper, and I guess they are not very important and novel (just educational guess, because I can only guess what the whole algorithm Smoothie is).[Sec-NEG, paper-NEG], [NOV-NEG]",Sec,paper,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 5227,"So I suggest moving them to the Appendix and make the major focus more narrowed down.[Appendix-NEG], [PNF-NEG]]",Appendix,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5229,"Pros: Good empirical results.[empirical results-POS], [EMP-POS]",empirical results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5230,"Cons: There is not much technical contribution.[technical contribution-NEG], [IMP-NEG]",technical contribution,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 5231,"The proposed approach is neither well motivated, nor well presented/justified. The presentation of the paper needs to be improved.[proposed approach-NEG, presentation-NEG], [PNF-NEG, EMP-NEG]",proposed approach,presentation,,,,,PNF,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 5232,"1. Part of the motivation on page 1 does not make sense. In particular, for paragraph 3, if the classification task is just to separate A from B, then (1,0) separation should be better than (0.8, 0.2). [motivation-NEG, page-NEU], [EMP-NEG]",motivation,page,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5234,"The authors however ignored all the existing works on this topic, but enforce label embedding vectors as similarities between labels in Section 2.1 without clear motivation and justification.[existing works-NEG, Section-NEU], [CMP-NEG]",existing works,Section,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 5235,"This assumption is not very natural u2014 though label embeddings can capture semantic information and label correlations, it is unnecessary that label embedding matrix should be m xm and each entry should represent the similarity between a pair of labels.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5236,"The paper needs to provide a clear rationale/justification for the assumptions made, while clarifying the difference (and reason) from the literature works.[paper-NEU, literature works-NEU], [CMP-NEU, EMP-NEU]",paper,literature works,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 5237,"3. The proposed model is not well explained.[proposed model-NEU], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5238,"(1) By using the objective in eq.(14), how to learn the embeddings E?[eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5239,"(2) The authors state ""In back propagation, the gradient from z2 is kept from propagating to h"". This makes the learning process quite arbitrary under the objective in eq.(14). [eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5240,"(3) The label embeddings are not directly used for the classification (H(y, z'_1)), but rather as auxiliary part of the objective. How to decide the test labels?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5244,"The experiment results seem solid and the proposed structure is with simple design and highly generalizable.[experiment results-POS, proposed structure-POS], [EMP-POS]",experiment results,proposed structure,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5245,"The concern is that the contribution is quite incremental from the theoretical side[contribution-NEU], [SUB-NEU]",contribution,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5246,"though it involves large amount of experimental efforts, which could be impactful.[null], [IMP-POS, SUB-POS]",null,,,,,,IMP,SUB,,,,,,,,,,POS,POS,,, 5247,"Please see the major comment below. 
One major comment: - Despite that the work is more application oriented, the paper would have been stronger and more impactful if it includes more work on the theoretical side.[paper-NEU], [SUB-NEG, IMP-NEU]",paper,,,,,,SUB,IMP,,,,NEU,,,,,,NEG,NEU,,, 5248,"Specifically, for two folds: (1) in general, some more work in investigating the task space would be nice.[work-NEG], [SUB-NEG]",work,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5249,"The paper assumes the tasks are ""related"" or ""similar"" and thus transferrable; also particularly in Section 2, the authors define that the tasks follow the same distribution.[Section-NEU, tasks-NEU], [EMP-NEU]",Section,tasks,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5250,"But what exactly should the distribution be like to be learnable and how to quantify such ""related"" or ""similar"" relationship across tasks? [tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5251,"(2) in particular, for each of the experiments that the authors conduct, it would be nice to investigate some more on when the proposed TC + Attention network would work better and thus should be used by the community; some questions to answer include: when should we prefer the proposed combination of TC + attention blocks over the other methods?[experiments-NEU, methods-NEU], [SUB-NEU, EMP-NEU]",experiments,methods,,,,,SUB,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 5252,"The result from the paper seems to answer with ""in all cases"" but then that always brings the issue of ""overfitting"" or parameter tuning issue.[result-NEG, paper-NEG, issue-NEU], [EMP-NEG]",result,paper,issue,,,,EMP,,,,,NEG,NEG,NEU,,,,NEG,,,, 5253,"I believe the paper would have been much stronger if either of the two above are further investigated.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5254,"More detailed comments: - On Page 1, ""the optimal strategy for an arbitrary range of tasks"" lacks definition of ""range""; also, in the setting in this paper, these tasks should share ""similarity"" or follow the same ""distribution"" and thus such ""arbitrariness"" is actually constrained.[Page-NEG, paper-NEG, tasks-NEG], [SUB-NEG]",Page,paper,tasks,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 5255,"- On Page 2, the notation and formulation for the meta-learning could be more mathematically rigid; the distribution over tasks is not defined.[Page-NEG, tasks-NEG], [EMP-NEG, SUB-NEG]",Page,tasks,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 5256,"It is understandable that the authors try to make the paradigm very generalizable; but the ambiguity or the abstraction over the ""task distribution"" is too large to be meaningful.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5257,"One suggestion would be to split into two sections, one for supervised learning and one for reinforcement learning; but both share the same design paradigm, which is generalizable.[sections-NEU, paradigm-NEU], [EMP-NEG]",sections,paradigm,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 5258,"- For results in Table 1 and Table 2, how are the confidence intervals computed?[results-NEU, Table-NEU], [EMP-NEU]",results,Table,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5259,"Is it over multiple runs or within the same run?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5260,"It would be nice to make clear; in addition, I personally prefer either reporting raw standard deviations or conduct hypothesis testing with specified tests.[tests-NEU], [SUB-NEU]",tests,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5261,"The confidence intervals may not be clear without elaboration; such is also concerning in the caption for Table 3 
about claiming ""not statistically-significantly different"" because no significance test is reported.[Table-NEG], [SUB-NEG]",Table,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5262,"- At last, some more details in implementation would be nice (package availability, run time analysis); I suppose the package or the source code would be publicly available afterwards?[implementation-NEU, code-NEU], [SUB-NEU]",implementation,code,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 5266,"The technique is tested on deep stacks of recurrent layers, and on convolutional networks with depth of 28, showing that improved results over the baseline networks are obtained.[technique-POS, results-POS], [CMP-POS]",technique,results,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 5267,"Clarity: The paper is easy to read.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5268,"The plots in Fig. 2 and the appendix are quite helpful in improving presentation.[Fig-POS, appendix-POS, presentation-POS], [PNF-POS]",Fig,appendix,presentation,,,,PNF,,,,,POS,POS,POS,,,,POS,,,, 5269,"The experimental setups are explained in detail.[experimental setups-POS], [SUB-POS, EMP-POS]",experimental setups,,,,,,SUB,EMP,,,,POS,,,,,,POS,POS,,, 5271,"However, the experiments to support the idea do not seem to match the motivation of the paper.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5272,"As stated in the beginning of the paper, the motivation behind having close to zero mean activations is that this is expected to speed up training using gradient descent.[motivation-POS], [EMP-POS]",motivation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5273,"However, the presented results focus on the performance on held-out data instead of improvements in training speed.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5275,"For the CIFAR-10 experiment, the training loss curves do show faster initial progress in learning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5276,"However, it is unclear that overall training time can be reduced with the help of this technique.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5277,"To evaluate this speed up effect, the dependence on the choice of learning rate and other hyperparameters should also be considered.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5278,"Nevertheless, it is interesting to note the result that the proposed approach converts a deep network that does not train into one which does in many cases.[result-NEU, proposed approach-NEU], [EMP-POS]",result,proposed approach,,,,,EMP,,,,,NEU,NEU,,,,,POS,,,, 5279,"The method appears to improve the training for moderately deep convolutional networks without batch normalization (although this is tested on a single dataset),;[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5280,"but is not practically useful yet since the regularization benefits of Batch Normalization are also taken away.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5284,"Similarly, the authors show an analogous result for deep neural networks with multiple hidden layers and an infinite number of hidden units per layer, and show the form of the resulting kernel functions.[result-NEU], [CMP-NEU]",result,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5287,"Overall, the work is an interesting read, and a nice follow-up to Neal's earlier observations about 1 hidden layer neural networks.[work-POS], [CMP-POS]",work,,,,,,CMP,,,,,POS,,,,,,POS,,,, 5288,"It combines several insights into a nice narrative about infinite Bayesian deep networks.[null], 
[EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5289,"However, the practical utility, significance, and novelty of this work -- in its current form -- are questionable, and the related work sections, analysis, and experiments should be significantly extended.[work-NEG, related work sections-NEG, analysis-NEG, experiments-NEG], [NOV-NEG, IMP-NEG]",work,related work sections,analysis,experiments,,,NOV,IMP,,,,NEG,NEG,NEG,NEG,,,NEG,NEG,,, 5290,"In detail: (1) This paper misses some obvious connections and references, such as * Krauth et. al (2017): ""Exploring the capabilities and limitations of Gaussian process models"" for recursive kernels with GPs.[paper-NEG, connections-NEG, references-NEG], [SUB-NEG, CMP-NEG]",paper,connections,references,,,,SUB,CMP,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 5291,"* Hazzan & Jakkola (2015): ""Steps Toward Deep Kernel Methods from Infinite Neural Networks"" for GPs corresponding to NNs with more than one hidden layer.[null], [SUB-NEG, CMP-NEG]",null,,,,,,SUB,CMP,,,,,,,,,,NEG,NEG,,, 5292,"* The growing body of work on deep kernel learning, which ""combines the inductive biases and representation learning abilities of deep neural networks with the non-parametric flexibility of Gaussian processes"". E.g.: (i) ""Deep Kernel Learning"" (AISTATS 2016);[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5293,"(ii) ""Stochastic Variational Deep Kernel Learning"" (NIPS 2016);[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5294,"(iii) ""Learning Scalable Deep Kernels with Recurrent Structure"" (JMLR 2017).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5295,"These works should be discussed in the text.[works-NEG], [SUB-NEG, CMP-NEG]",works,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 5296,"(2) Moreover, as the authors rightly point out, covariance functions of the form used in (4) have already been proposed.[authors-NEU, functions-NEG], [NOV-NEU]",authors,functions,,,,,NOV,,,,,NEU,NEG,,,,,NEU,,,, 5297,"It seems the novelty here is mainly the empirical exploration (will return to this later), and numerical integration for various activation functions.[novelty-POS], [NOV-POS]",novelty,,,,,,NOV,,,,,POS,,,,,,POS,,,, 5298,"That is perfectly fine -- and this work is still valuable.[work-POS], [IMP-POS]",work,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5299,"However, the statement ""recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework"" is incorrect.[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5300,"For example, Hazzan & Jakkola (2015) in ""Steps Toward Deep Kernel Methods from Infinite Neural Networks"" consider GP constructions with more than one hidden layer.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5301,"Thus the novelty of this aspect of the paper is overstated.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5303,"later on the presentation.[presentation-NEU], [PNF-NEU]",presentation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5304,"In any case, the derivation for computing the covariance function (4) of a multi-layer network is a very simple reapplication of the procedure in Neal (1994).[procedure-NEG], [PNF-NEG, EMP-NEG]",procedure,,,,,,PNF,EMP,,,,NEG,,,,,,NEG,NEG,,, 5305,"What is less trivial is estimating (4) for various activations, and that seems to the major methodological contribution.[contribution-POS], [EMP-POS]",contribution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5306,"Also note that multidimensional CLT here is glossed over.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5307,"It's actually 
really unclear whether the final limit will converge to a multidimensional Gaussian with that kernel without stronger conditions.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5308,"This derivation should be treated more thoroughly and carefully.[derivation-NEG], [EMP-NEG]",derivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5309,"(3) Most importantly, in this derivation, we see that the kernels lose the interesting representations that come from depth in deep neural networks.[derivation-NEG, representations-NEG], [EMP-NEG]",derivation,representations,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5310,"Indeed, Neal himself says that in the multi-output settings, all the outputs become uncorrelated.[outputs-NEG], [CMP-NEU]",outputs,,,,,,CMP,,,,,NEG,,,,,,NEU,,,, 5313,"In Neal's case, the method was explored for single output regression, where the fact that we lose this sharing of basis functions may not be so restrictive.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5314,"However, these assumptions are very constraining for multi-output classification and also interesting multi-output regressions.[assumptions-POS], [EMP-POS]",assumptions,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5316,"""Deep neural networks without training deep networks"".[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5317,"This is not an accurate portrayal.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5320,"In this sense, the presentation should be re-worked.[presentation-NEG], [PNF-NEG]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5321,"(4) Moreover, neural networks are mostly interesting because they learn the representation.[neural networks-POS], [EMP-POS]",neural networks,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5323,"But here, essentially no kernel learning is happening.[learning-NEG], [EMP-NEG]",learning,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5325,"(5) Given the above considerations, there is great importance in understanding the practical utility of the proposed approach through a detailed empirical evaluation.[approach-NEU, empirical evaluation-NEU], [EMP-NEU]",approach,empirical evaluation,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5326,"In other words, how structured is this prior and does it really give us some of the interesting properties of deep neural networks, or is it mostly a cute mathematical trick?[properties-NEU], [EMP-NEU]",properties,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5327,"Unfortunately, the empirical evaluation is very preliminary, and provides no reassurance that this approach will have any practical relevance:[empirical evaluation-NEG, approach-NEU], [EMP-NEG]",empirical evaluation,approach,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5328,"(i) Directly performing regression on classification problems is very heuristic and unnecessary.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5329,"(ii) Given the loss of dependence between neurons in this approach, it makes sense to first explore this method on single output regression, where we will likely get the best idea of its useful properties and advantages.[approach-NEU, properties-NEU, advantages-NEU], [EMP-NEU]",approach,properties,advantages,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 5330,"(iii) The results on CIFAR10 are very poor.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5331,"We don't need to see SOTA performance to get some useful insights in comparing for example parametric vs non-parametric, but 40% more error than SOTA makes it very hard to say whether any of the observed patterns hold weight for more competitive architectural choices.[insights-NEU, error-NEG], [CMP-NEG, 
EMP-NEG]",insights,error,,,,,CMP,EMP,,,,NEU,NEG,,,,,NEG,NEG,,, 5332,"A few more minor comments: (i) How are you training a GP exactly on 50k training points?[training points-NEU], [EMP-NEU]",training points,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5333,"Even storing a 50k x 50k matrix requires about 20GB of RAM.[matrix-NEU], [EMP-NEU]",matrix,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5334,"Even with the best hardware, computing the marginal likelihood dozens of times to learn hyperparameters would be near impossible.[hyperparameters-NEU, hardware-NEG], [EMP-NEG]",hyperparameters,hardware,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 5335,"What are the runtimes?[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 5336,"(ii) One benefit in using the GP is due to its Bayesian nature, so that predictions have uncertainty estimates (Equation (9)).[benefit-POS], [EMP-POS]",benefit,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5337,""" The main benefit of the GP is not the uncertainty in the predictions, but the marginal likelihood which is useful for kernel learning.[main benefit-NEU], [EMP-NEU]]",main benefit,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5340,"Most of the originality comes from integrating time decay of purchases into the learning framework.[originality-POS], [NOV-POS]",originality,,,,,,NOV,,,,,POS,,,,,,POS,,,, 5341,"Rest of presented work is more or less standard.[work-NEU], [NOV-NEU]",work,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 5342,"Paper may be useful to practitioners who are looking to implement something like this in production.[Paper-POS], [IMP-POS]",Paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5344,"This paper presents an simple and interesting idea to improve the performance for neural nets.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5345,"The idea is we can reduce the precision for activations and increase the number of filters, and is able to achieve better memory usage (reduced).[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5346,"The paper is aiming to solve a practical problem, and has done some solid research work to validate that.[research-NEU], [EMP-NEU]",research,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5347,"In particular, this paper has also presented a indepth study on AlexNet with very comprehensive results and has validated the usefulness of this approach.[paper-POS, results-POS, approach-POS], [CMP-POS, EMP-POS]",paper,results,approach,,,,CMP,EMP,,,,POS,POS,POS,,,,POS,POS,,, 5348,"In addition, in their experiments, they have demonstrated pretty solid experimental results, on AlexNet and even deeper nets such as the state of the art Resnet.[experiments-POS, experimental results-POS], [EMP-POS]",experiments,experimental results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5349,"The results are convincing to me.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5350,"On the other side, the idea of this paper does not seem extremely interesting to me, especially many decisions are quite natural to me, and it looks more like a very empirical practical study.[idea-NEG, paper-NEG], [EMP-NEG]",idea,paper,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5351,"So the novelty is limited.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5352,"So overall given limited novelty but the paper presents useful results,[paper-NEG, results-POS], [EMP-POS, NOV-NEG]",paper,results,,,,,EMP,NOV,,,,NEG,POS,,,,,POS,NEG,,, 5353,"I would recommend borderline leaning towards reject.[null], [REC-NEG]]",null,,,,,,REC,,,,,,,,,,,NEG,,,, 5358,"Manually designing novel neural architectures is a laborious, time-consuming process.[process-NEG], 
[EMP-NEG]",process,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5360,"Overall, the paper is well-written, clear in its exposition and technically sound.[paper-POS], [EMP-POS, CLA-POS, PNF-POS]",paper,,,,,,EMP,CLA,PNF,,,POS,,,,,,POS,POS,POS,, 5361,"While some hyperparameter and design choices could perhaps have been justified in greater detail, the paper is mostly self-contained and provides enough information to be reproducible.[detail-NEG, paper-POS, information-POS], [CLA-NEU, SUB-NEG]",detail,paper,information,,,,CLA,SUB,,,,NEG,POS,POS,,,,NEU,NEG,,, 5363,"Compared to existing work, this approach should emphasise modularity, making it easier for the evolutionary search algorithm to discover architectures that extensively reuse simpler blocks as part of the model.[existing work-NEG, approach-NEG], [CMP-NEG, SUB-NEG]",existing work,approach,,,,,CMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 5365,"but it is to the best of my knowledge the first explicit application of this idea in neural architecture search.[application-POS], [NOV-POS]",application,,,,,,NOV,,,,,POS,,,,,,POS,,,, 5366,"Nevertheless, while the idea behind the proposed approach is definitely interesting,[idea-POS, proposed approach-POS], [EMP-POS]",idea,proposed approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5367,"I believe that the experimental results do not provide sufficiently compelling evidence that the resulting method substantially outperforms the non-hierarchical, flat representation of architectures used in other publications.[results-NEG, evidence-NEG, resulting method-NEG, other publications-NEG], [EMP-NEG, SUB-NEG, CMP-NEG]",results,evidence,resulting method,other publications,,,EMP,SUB,CMP,,,NEG,NEG,NEG,NEG,,,NEG,NEG,NEG,, 5368,"In particular, the results highlighted in Figure 3 and Table 1 seem to indicate that the difference in performance between both paradigms is rather small.[results-NEG, Figure-NEG, Table-NEG, performance-NEG], [EMP-NEG]",results,Figure,Table,performance,,,EMP,,,,,NEG,NEG,NEG,NEG,,,NEG,,,, 5369,"Moreover, the performance gap between the flat and hierarchical representations of the search space, as reported in Table 1, remains smaller than the performance gap between the best performing of the approaches proposed in this article and NASNet-A (Zoph et al., 2017), as reported in Tables 2 and 3.[performance gap-NEG, Table-NEG, performance gap-NEG, approaches proposed-NEG, Tables-NEG], [CMP-NEG, EMP-NEG]",performance gap,Table,performance gap,approaches proposed,Tables,,CMP,EMP,,,,NEG,NEG,NEG,NEG,NEG,,NEG,NEG,,, 5370,"Another concern I have is regarding the definition of the mutation operators in Section 3.1. 
While not explicitly stated, I assume that all sampling steps are performed uniformly at random (otherwise please clarify it).[Section-NEG], [SUB-NEG]",Section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5371,"If that was indeed the case, there is a systematic asymmetry between the probability to add and remove an edge, making the former considerably more likely.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5372,"This could bias the architectures towards fully-connected DAGs, as indeed seems to occur based on the motifs reported in Appendix A.[Appendix-NEG], [EMP-NEG]",Appendix,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5373,"Finally, while the main motivation behind neural architecture search is to automatise the design of new models, the approach here presented introduces a non-negligible number of hyperparameters that could potentially have a considerable impact and need to be selected somehow.[approach-NEG, hyperparameters-NEG], [EMP-NEG]",approach,hyperparameters,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5375,"I believe the paper would be substantially strengthened if the authors explored how robust the resulting approach is with respect to perturbations of these hyperparameters, and/or provided users with a principled approach to select reasonable values.[resulting approach-NEU], [EMP-NEU]",resulting approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5382,"The derivations look correct to me.[derivations-POS], [EMP-POS]",derivations,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5383,"In the experiments, the proposed algorithm was compared to other methods, e.g., A-NICE-MC and HMC.[experiments-POS, methods-POS], [CMP-POS]",experiments,methods,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 5384,"It showed that the proposed method could mix between the modes in the posterior.[proposed method-POS], [CMP-POS, EMP-POS]",proposed method,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 5385,"Although the method could mix well when applied to those particular experiments,[method-POS, experiments-POS], [EMP-POS]",method,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5386,"it lacks theoretical justifications why the method could mix well.[justifications-NEG, method-NEU], [SUB-NEG]]",justifications,method,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 5389,"The proposed approach is interesting,[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5390,"but I feel that the experimental section does not serve to show its merits for several reasons.[experimental section-NEG], [EMP-NEG]",experimental section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5391,"First, it does not demonstrate increased scalability.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5392,"Only 1024 examples are considered, which is by no means large.[examples-NEG], [SUB-NEG]",examples,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5393,"Even then, the authors approach selects the highest number of examples (figure 4). 
[approach-NEU], [SUB-NEU]",approach,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5394,"CEGIS both selects fewer examples and has a shorter median time for complete synthesis.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 5395,"Intuitively, the authors' method should scale better, but they fail to show this -- a missed opportunity to make the paper much more compelling.[method-NEG, paper-NEG], [EMP-NEG, SUB-NEG]",method,paper,,,,,EMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 5396,"This is especially true as a more challenging benchmark could be created very easily by simply scaling up the image.[benchmark-NEU], [IMP-NEU]",benchmark,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 5397,"Second, there is no analysis of the representativeness of the found sets of constraints.[analysis-NEG], [SUB-NEG]",analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5398,"Given that the results are very close to other approaches, it remains unclear whether they are simply due to random variations, or whether the proposed approach actually achieves a non-random improvement.[results-NEG, approach-NEU], [EMP-NEG]",results,approach,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5399,"In addition to my concerns about the experimental evaluation, I have concerns about the general approach.[approach-NEU], [EMP-NEG]",approach,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5400,"It is unclear to me that machine learning is the best approach for modeling and solving this problem.[problem-NEU], [EMP-NEG]",problem,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5401,"In particular, the selection probability of any particular example could be estimated through a heuristic, for example by simply counting the number of neighbouring examples that have a different color, weighted by whether they are in the set of examples already, to assess its borderness, with high values being more important to achieve a good program.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5403,"The above heuristic is obviously specific to the domain, but similar heuristics could be easily constructed for other domains.[domain-NEU], [EMP-NEU]",domain,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5404,"I feel that this is something the authors should at least compare to in the empirical evaluation.[empirical evaluation-NEU], [CMP-NEU]",empirical evaluation,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5405,"Another concern is that the authors' approach assumes that all parameters have the same effect.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5406,"Even for the example the authors give in section 2, it is unclear that this would be true.[example-NEG, section-NEG], [EMP-NEG]",example,section,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5407,"The text says that rand+cegis selects 70% of examples of the proposed approach, but figure 4 seems to suggest that the numbers are very close -- is this initial examples only?[approach-NEU, examples-NEU, figure-NEU], [EMP-NEU]",approach,examples,figure,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 5408,"Overall the paper appears rushed -- the acknowledgements section is left over from the template and there is a reference to figure blah.[acknowledgements section-NEG, reference-NEG], [PNF-NEG]",acknowledgements section,reference,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 5409,"There are typos and grammatical mistakes throughout the paper.[typos-NEG, grammatical mistakes-NEG], [CLA-NEG]",typos,grammatical mistakes,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 5410,"The reference to Model counting is incomplete.[reference-NEG], [SUB-NEG]",reference,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5411,"In summary, I feel that the paper cannot be accepted in its current form.[paper-NEG], 
[REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 5414,"While the intuition is nice and interesting,[intuition-POS], [EMP-POS]",intuition,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5415,"the paper is not very clear in describing the attack and the experiments do not appropriately test whether this method actually provides robustness.[paper-NEG, experiments-NEG, method-NEU], [CLA-NEG, EMP-NEG]",paper,experiments,method,,,,CLA,EMP,,,,NEG,NEG,NEU,,,,NEG,NEG,,, 5416,"Details: have been successfully in anomaly detection --> have been successfully used in anomaly detectionP[null], [PNF-POS]",null,,,,,,PNF,,,,,,,,,,,POS,,,, 5417,"The adversary would select a random subset of anomalies, push them towards the normal data cloud and inject these perturbed points into the training set -- This seems backwards.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5418,"As in the example that follows, if the adversary wants to make anomalies seem normal at test time, it should move normal points outward from the normal point cloud (eg making a 9 look like a weird 7).[example-NEU], [EMP-NEG]",example,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5419,"As s_attack increases, the anomaly data points are moved farther away from the normal data cloud, altering the position of the separating hyperplane. -- This seems backwards from Fig 2.[Fig-NEU], [EMP-NEU]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5420,"From (a) to (b) the red points move closer to the center while in (c) they move further away (why?).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5421,"The blue points seem to consistently become more dense from (a) to (c).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5422,"The attack model is too rough.[attack model-NEG], [EMP-NEG]",attack model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5423,"It seems that without bounding D, we can make the model arbitrarily bad, no? [model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5424,"Assumption 1 alludes to this but doesn't specify what is small?[Assumption-NEU], [EMP-NEU]",Assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5425,"Also the attack model is described without considering if the adversary knows the learner's algorithm.[attack model-NEU], [EMP-NEU]",attack model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5426,"Even if there is randomness, can the adversary take actions that account for that randomness?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5427,"Does selecting a projection based on compactness remove the randomness?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5428,"Experiments -- why/how would you have distorted test data?[Experiments-NEU], [EMP-NEU]",Experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5429,"Making an anomaly seem normal by distorting it is easy.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5430,"I don't see experiments comparing having random projections and not.[experiments-NEG], [CMP-NEG]",experiments,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5431,"This seems to be the fundamental question -- do random projects help in the train_D | test_C case?[question-NEU], [EMP-NEU]",question,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5432,"Experiments don't vary the attack much to understand how robust the method is.[Experiments-NEG], [EMP-NEG]",Experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5434,"() Summary In this paper, the authors introduced a new simple model for text classification, which obtains state of the art results on several benchmark. 
[model-POS, results-POS], [NOV-POS, CMP-POS]",model,results,,,,,NOV,CMP,,,,POS,POS,,,,,POS,POS,,, 5435,"The main contribution of the paper is to propose a new technique to learn vector representation of fixed-size text regions of up to a few words.[technique-NEU], [IMP-NEU, NOV-NEU]",technique,,,,,,IMP,NOV,,,,NEU,,,,,,NEU,NEU,,, 5441,"The authors then compare their approach to previous work on the 8 datasets introduced by Zhang et al. (2015).[previous work-NEU], [CMP-NEU]",previous work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5445,"() Discussion Overall, I think that the proposed method is sound and well justified.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5446,"The empirical evaluations, analysis and comparisons to existing methods are well executed.[empirical evaluations-POS, analysis-POS, comparisons-POS], [EMP-POS, CMP-POS]",empirical evaluations,analysis,comparisons,,,,EMP,CMP,,,,POS,POS,POS,,,,POS,POS,,, 5447,"I liked the fact that the proposed model is very simple, yet very competitive compared to the state-of-the-art.[proposed model-POS], [EMP-POS]",proposed model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5448,"I suspect that the model is also computationally efficient: can the authors report training time for different datasets?[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5450,"One of the main limitations of the model, as stated by the authors, is its number of parameters.[limitations-NEG], [EMP-NEG]",limitations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5451,"Could the authors also report these?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5452,"While the paper is fairly easy to read (because the method is simple and Figure 1 helps understanding the model), I think that copy editing is needed.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5453,"Indeed, the papers contains many typos (I have listed a few), as well as ungrammatical sentences.[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5454,"I also think that a discussion of the attention is all you need paper by Vaswani et al. 
is needed, as both articles seem strongly related.[discussion-NEU], [CMP-NEU, SUB-NEU]",discussion,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 5455,"As a minor comment, I advise the authors to use a different letter for word embeddings and the projected word embeddings (equation at the bottom of page 3).[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 5456,"It would also make the paper more clear.[paper-NEU], [CLA-NEU]",paper,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 5457,"() Pros / Cons: + simple yet powerful method for text classification[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5458,"+ strong experimental results[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5459,"+ ablation study / analysis of influence of parameters[ablation study-POS, analysis-POS], [EMP-POS]",ablation study,analysis,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5460,"- writing of the paper[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5461,"- missing discussion to the attention is all you need paper, which seems highly relevant[discussion-NEG], [SUB-NEG, CMP-NEG]",discussion,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 5462,"() Typos: Page 1 a support vectors machineS -> a support vector machine[Typos-NEG], [PNF-NEG]",Typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5463,"performs good -> performs well[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5464,"the n-grams was widely -> -grams were widely[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5465,"to apply large region size -> to apply to large region size are trained separately -> do not share parameters[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5466,"Page 2 convolutional neural networks(CNN) -> convolutional neural networks (CNN)[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5467,"related works -> related work [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5468,"effective in Wang and Manning -> effective by Wang and Manning[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5469,"applied on text classification -> applied to text classification[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5470,"shard(word independent) -> shard (word independent)[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5471,"Page 3 can be treat -> can be treated fixed length continues subsequence -> fixed length contiguous subsequence[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5472,"w_i stands for the -> w_i standing for the which both the unit -> where both the unit in vocabulary -> in the vocabulary etc...[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5475,"Via performing variational inference in a kind of online manner, one can address continual learning for deep discriminative or generative networks with considerations of model uncertainty.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5476,"The paper is written well,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5477,"and literature review is sufficient.[literature review-NEG], [CMP-NEG, SUB-NEG]",literature review,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 5478,"My comment is mainly about its importance for large-scale computer vision applications.[importance-NEU], [IMP-NEU]",importance,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 5479,"The neural networks in the experiments are shallow. [experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5482,". 
The derivation and analysis seems correct.[derivation-POS, analysis-POS], [EMP-POS]",derivation,analysis,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5483,"However, it is well-known that spectral algorithm is not robust to model mis-specification.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5484,"It is not clear whether the proposed algorithm will be useful in practice.[proposed algorithm-NEG], [EMP-NEG]",proposed algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5485,"How will the method compare to EM algorithms and neural network based approaches? [method-NEU], [CMP-NEU]",method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5491,"Clarity The paper is clear and well-written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5492,"Originality This idea is not new: if we search for Lipschitz constant estimation in google scholar, we get for example Wood, G. R., and B. P. Zhang. Estimation of the Lipschitz constant of a function. (1996) which presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull).[idea-NEG], [NOV-NEG]",idea,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5493,"Technical quality The main theoretical result in the paper is the analysis of the lower-bound on delta, the smallest perturbation to apply on a data point to fool the network.[theoretical result-NEU], [EMP-NEU]",theoretical result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5494,"This result is obtained almost directly by writing the bound on Lipschitz-continuous function | f(y)-f(x) | < L || y-x || where x = x_0 and y = x_0 + delta.[null], [NOV-NEG, EMP-NEU]",null,,,,,,NOV,EMP,,,,,,,,,,NEG,NEU,,, 5496,"Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x = 0).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5497,"Indeed, this constant can be easier obtained if the gradient exists, but this is not a requirement.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5498,"- (Flaw?) Theorem 3.2 : This theorem works for fixed target-class since g = f_c - f_j for fixed g. However, once g = min_j f_c - f_j, this theorem is not clear with the constant Lq. Indeed, the function g should be g(x) = min_{k \neq c} f_c(x) - f_k(x).[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5499,"Thus its Lipschitz constant is different, potentially equal to L_q = max_{k} | L_q^k |, where L_q^k is the Lipschitz constant of f_c-f_k.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5500,"If the theorem remains unchanged after this modification, you should clarify the proof.[theorem-NEU], [EMP-NEU]",theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5501,"Otherwise, the theorem will work with the maximum over all Lipschitz constants but the theoretical result will be weakened.[theoretical result-NEU], [EMP-NEU]",theoretical result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5502,"- Theorem 4.1: I do not see the purpose of this result in this paper. 
This should be better motivated.[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5503,"Numerical experiments Globally, the numerical experiments are in favor of the presented method.[experiments-POS, method-POS], [EMP-POS]",experiments,method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5504,"The authors should also add information about the time it takes to compute the bound, the evolution of the bound in function of the number of samples and the distribution of the relative gap between the lower-bound and the best adversarial example.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5505,"Moreover, the numerical experiments look to be realized in the context of targeted attack.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5506,"To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower-bound in the context of non-targeted attack.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5507,"####################################################### Post-rebuttal review --------------------------- Given the details the authors provided to my review, I decided to adjust my score.[score-NEU], [REC-NEU]",score,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 5508,"The method is simple and shows to be extremely effective/accurate in practice.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5509,"Detailed answers: 1) Indeed, I was not aware that the paper only focuses on one dimensional functions.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5510,"However, they still work with less assumption, i.e., with no differential functions.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5511,"I was pointing out the similarities between their approach and your: the two algorithms (CLEVER and Slope) are basically the same, and using a limit you can go from slope to gradient norm.[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5512,"In any case, I have read the revision and the additional numerical experiment to compare Clever with their method is a good point.[experiment-POS], [CMP-POS]",experiment,,,,,,CMP,,,,,POS,,,,,,POS,,,, 5513,"2) Overall, our analysis is simple and more intuitive, and we further facilitate numerical calculation of the bound by applying the extreme value theory in this work.[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5514,"This is right. I am just surprised is has not been done before, since it requires only few lines of derivation.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 5516,"Moreover, this leads to good performances, so there is no needs to have something more complex.[performances-POS], [EMP-POS]",performances,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5517,"3) The usual Lipschitz continuity is defined in terms of L2 norm and the extension to an arbitrary Lp norm is not straightforward Indeed, people usually use the Lipschitz continuity using the L2norm, but the original definition is wider.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5518,"Quickly, if you have a differential, scalar function from a space E -> R, then the gradient is a function from space E to E*, the dual of the space E.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5519,"Let || . || the norm of space E. Then, || . ||* is the dual norm of ||.||, and also the norm of E*. In that case, Lipschitz continuity writes f(x)-f(y) < L || x-y ||, with L > max_{x in E*} || f'(x) ||* In the case where || . || is an ell-p norm, then || . 
||* is an ell-q norm; with 1/p+1/q 1.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5521,"I have no additional remarks for 4) -> 9), since everything is fixed in the new version of the paper.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 5524,"On one hand, fleet management is an interesting and important problem.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5525,"On the other hand, although the experiments are well designed and illustrative, the approach is only tested in a small 7x7 grid and 2 agents and in a 10x10 grid with 4 agents.[experiments-POS, approach-NEG], [EMP-NEG]",experiments,approach,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 5526,"In spirit, these simulations are similar to those in the original paper by M. Egorov.[simulations-NEU, original paper-NEU], [CMP-NEU]",simulations,original paper,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 5527,"Since the main contribution is to use an existing algorithm to tackle a practical application, it would be more interesting to tweak the approach until it is able to tackle a more realistic scenario (mainly larger scale, but also more realistic dynamics with traffic models, real data, etc.).[main contribution-NEU, approach-NEU], [EMP-NEU]",main contribution,approach,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5528,"Simulation results compare MADQN with Dijkstra's algorithm as a baseline, which offers a myopic solution where each agent picks up the closest customer.[results-NEU, solution-NEU], [CMP-NEU]",results,solution,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 5529,"Again, since the main contribution is to solve a specific problem, it would be worthy to compare with a more extensive benchmark, including state of the art algorithms used for this problem (e.g., heuristics and metaheuristics).[main contribution-NEU, benchmark-NEG, problem-NEU], [CMP-NEU]",main contribution,benchmark,problem,,,,CMP,,,,,NEU,NEG,NEU,,,,NEU,,,, 5530,"The paper is clear and well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5533,"are bad formatted).[typos-NEU, formatting errors-NEG], [PNF-NEU]",typos,formatting errors,,,,,PNF,,,,,NEU,NEG,,,,,NEU,,,, 5534,"-- Comments and questions to the authors: 1. In the introduction, please, could you add references to what is called traditional solutions?[introduction-NEU, references-NEU], [SUB-NEU]",introduction,references,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 5535,"n 2. Regarding the partial observability, each agent knows the location of all agents, including itself, and the location of all obstacles and charging locations; but it only knows the location of customers that are in its vision range.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5536,"This assumption seems reasonable if a central station broadcasts all agents' positions and customers are only allowed to stop vehicles in the street, without ever contacting the central station; otherwise if agents order vehicles in advance (e.g., by calling or using an app) the central station should be able to communicate customers locations too.[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5537,"On the other hand, if no communication with the central station is allowed, then positions of other agents may be also partial observable.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5538,"In other words, the proposed partial observability assumption requires some further motivation.[assumption-NEG], [SUB-NEG]",assumption,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5539,"Moreover, in Sec. 
4.3, it is said that agents can see around them +10 spaces away; however, experiments are run in 7x7 and 10x10 grid worlds, meaning that the agents are able to observe the grid completely.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5540,"3. The fact that partial observability helped to alleviate the credit-assignment noise caused by the missing customer penalty might be an artefact of the setting.[noise-NEU, setting-NEU], [EMP-NEU]",noise,setting,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5541,"For instance, since the reward has been designed arbitrarily, it could have been defined as giving a penalty for those missing customers that are at some distance of an agent.[reward-NEU], [EMP-NEU]",reward,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5542,"4. Please, could you explain the last sentence of Sec. 4.3 that says The drawback here is that the agents will not be able to generalize to other unseen maps that may have very different geographies. In particular, how is this sentence related to partial observability?[explain-NEU, drawback-NEU], [EMP-NEU]",explain,drawback,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5546,"A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation.[aspect-POS, model-POS], [EMP-POS]",aspect,model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5547,"This is also not dissimilar to ideas used in 'Bayesian Representation Learning With Oracle Constraints' Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 5549,"However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5550,"The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach.[experiments-NEG, approach-NEG], [SUB-NEG]",experiments,approach,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5551,"In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization.[work-NEU], [SUB-NEU]",work,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5552,"This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived,[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5553,"the empirical evaluation is on the weak side and the rich properties of the model are not really shown off.[empirical evaluation-NEG, model-NEG], [SUB-NEG, EMP-NEG]",empirical evaluation,model,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 5554,"It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings.[experiment-NEG, empirical results-NEG], [SUB-NEG, EMP-NEG]",experiment,empirical results,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 5555,"Can the model recover richer structure that was imposed during data generation?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5556,"How limiting is the learning of a?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5557,"How 
does the likelihood of the model behave under the circumstances?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5558,"The experiments do not really convey how well this all will work in practice.[experiments-NEG], [EMP-NEG]]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5560,"The model achieves good results on bAbI compared to memory networks and the relation network model.[results-POS], [CMP-POS, EMP-POS]",results,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 5563,"I found it difficult to understand how the model is related to relation networks, since it no longer scores every combination of objects (or, in the case of bAbI, sentences), which is the fundamental idea behind relation networks.[model-NEU], [EMP-NEG]",model,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5564,"Why is the approach not evaluated on CLEVR, in which the interaction between two objects is perhaps more critical (and was the main result of the original relation networks paper)?[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5565,"The fact that the model works well on bAbI despite its simplicity is interesting, but it feels like the paper is framed to suggest that object-object interactions are not necessary to explicitly model, which I can't agree with based solely on bAbI experiments.[model-NEG], [SUB-NEG, EMP-NEU]",model,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEU,,, 5566,"I'd encourage the authors to do a more detailed experimental study with more tasks,;[experimental study-NEU], [SUB-NEU]",experimental study,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5567,"but I can't recommend this paper's acceptance in its current form.[acceptance-NEG], [REC-NEG]",acceptance,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 5568,"other questions / comments: - we use MLP to produce the attention weight without any extrinsic computation between the input sentence and the question. isn't this statement false because the attention computation takes as input the concatenation of the question and sentence representation?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5569,"- writing could be cleaned up for spelling / grammar (e.g., last 70 stories instead of last 70 sentences), currently the paper is very hard to read and it took me a while to understand the model[writing-NEG, paper-NEG], [CLA-NEG]",writing,paper,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 5572,"This has the advantage that the training process can be better parallelized, allowing for faster training if hundreds of GPUs are available for a short time.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5576,"COMMENTS: The paper presents a simple observation that seems very relevant especially as computing resources are becoming increasingly available for rent on short time scales.[observation-NEU], [IMP-POS, EMP-POS]",observation,,,,,,IMP,EMP,,,,NEU,,,,,,POS,POS,,, 5577,"The observation is explained well and substantiated by clear experimental evidence.[observation-POS, experimental evidence-POS], [CLA-POS, EMP-POS]",observation,experimental evidence,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 5582,"This effect is well known, but it can easily be remedied.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5586,"Couldn't the same or a very similar trick be used to correctly rescale $A$ every time one increases the batch size?.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5587,"It would be great to see the equivalent of Figure 7 with correctly rescaled $A$.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5588,"Minor issues: * The last paragraph of Section 5 refers to a figure 8, which appears to be missing. 
[Section-NEG, figure-NEG], [PNF-NEG, SUB-NEG]",Section,figure,,,,,PNF,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 5589,"* In Eqs. 4 & 5, the momentum parameter $m$ is not yet defined (it will be defined in Eqs. 6 & 7 below).[Eqs-NEG], [SUB-NEG]",Eqs,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5590,"* It appears that a minus sign is missing in Eq. 7.[Eq-NEG], [PNF-NEG]",Eq,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5593,"This suggests that the number of updates in this segment was chosen unnecessarily large to begin with.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5594,"It is therefore not surprising that reducing the number of updates does not deteriorate the test set accuracy.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5595,"* It would be interesting to see a version of figure 5 where the horizontal axis is the number of epochs.[figure-NEU], [PNF-NEU]",figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5602,"Therefore, the novelty of the proposed method is somewhat weak.[novelty-NEG, proposed method-NEG], [NOV-NEG]",novelty,proposed method,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 5604,"However, the paper only did a very simple investigation on related works.[related works-NEG], [SUB-NEG, CMP-NEU]",related works,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEU,,, 5611,"3. Experiments in the paper were only conducted on several small datasets such as MNIST and CIFAR-10.[Experiments-NEG], [SUB-NEG]",Experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5612,"It is necessary to employ the proposed method on benchmark datasets to verify its effectiveness, e.g., ImageNet.[proposed method-NEU, benchmark datasets-NEU], [SUB-NEU, EMP-NEU, CMP-NEU]",proposed method,benchmark datasets,,,,,SUB,EMP,CMP,,,NEU,NEU,,,,,NEU,NEU,NEU,, 5615,"The proposed approach is very interesting.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5616,"However, the method needs to be further clarified and the experiments need to be improved.[method-NEU, experiments-NEU], [EMP-NEG]",method,experiments,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 5617,"Details: 1. The citation format used in the paper is not appropriate, which makes the paper, especially the related work section, very inconvenient to read.[citation-NEG, related work section-NEG], [EMP-NEG]",citation,related work section,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5620,"However, under one-shot learning, won't this make each class still have only one instance for training?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5621,"(2) Moreover, the augmenting features x_i^A (regardless A F, G, or H), are in the same space as the original features x_i. Hence x_i^A is rather an augmenting instance than additional features. What makes feature augmentation better than instance augmentation?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5622,"(3) It is not clear how will the vocabulary-information be exploited?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5623,"In particular, how to ensure the semantic space u to be same as the vocabulary semantic space?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5624,"How to generate the neighborhood in Neigh(hat{u}_i) on page 5?[page-NEU], [EMP-NEU]",page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5625,"3. 
In the experiments: (1) The authors didn't compare the proposed method with existing state-of-the-art one-shot learning approaches, which makes the results not very convincing.[proposed method-NEU, results-NEG], [CMP-NEG, EMP-NEG]",proposed method,results,,,,,CMP,EMP,,,,NEU,NEG,,,,,NEG,NEG,,, 5626,"(2) The results are reported for different numbers of augmented instances.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5627,"Clarification is needed. [Clarification-NEG], [CLA-NEG]",Clarification,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5630,"Dialogue acts (DAs; or some other semantic relations between utterances) are informative to increase the diversity of response generation.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5631,"It is interesting to see how DAs are used for conversational modeling,;[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5632,"however this paper is difficult for me to follow.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5633,"For example: 1) the caption of section 3.1 is about supervised learning, however the way of describing the model in this section sounds like reinforcement learning.[section-NEG, model-NEG], [EMP-NEG]",section,model,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 5634,"Not sure whether it is necessary to formulate the problem with a RL framework, since the data have everything that the model needs as for a supervised learning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5635,"2) the formulation in equation 4 seems to be problematic[equation-NEG], [EMP-NEG]",equation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5636,"3) simplify pr(ri|si,ai) as pr(ri|ai,ui−1,ui−2) since decoding natural language responses from long conversation history is challenging[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5637,"to my understanding, the only difference between the original and simplified model is the encoder part not the decoder part. Did I miss something?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5638,"4) about section 3.2, again I didn't get whether the model needs RL for training.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5639,"5) We train m(·, ·) with the 30 million crawled data through negative sampling. 
not sure I understand the connection between training $m(cdot, cdot)$ and the entire model.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5640,"6) the experiments are not convincing.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5641,"At least, it should show the generation texts were affected about DAs in a systemic way.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5642,"Only a single example in table 5 is not enough.[table-NEG], [SUB-NEG]",table,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5647,"Positive aspects of the paper: The paper is a very strong empirical paper, with experiments comparable to industrial scale.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5648,"The paper uses the right composition tools like moments accountant to get strong privacy guarantees.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5649,"The main technical ideas in the paper seem to be i) bounding the sensitivity for weighted average queries, and ii) clipping strategies for the gradient parameters, in order to control the norm.[technical ideas-NEU], [EMP-NEU]",technical ideas,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5650,"Both these contributions are important in the effectiveness of the overall algorithm.[contributions-POS, algorithm-POS], [EMP-POS]",contributions,algorithm,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5651,"Concern: The paper seems to be focused on demonstrating the effectiveness of previous approaches to the setting of language models.[paper-NEU, previous approaches-NEU], [CMP-NEU]",paper,previous approaches,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 5652,"I did not find strong algorithmic ideas in the paper.[algorithmic ideas-NEG], [EMP-NEG]",algorithmic ideas,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5653,"I found the paper to be lacking in that respect.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5657,"While the categorization is reasonable[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5658,", there is no proposed new work beyond the existing approaches.[work-NEG], [NOV-NEG]",work,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5659,"No new insight is being discussed.[insight-NEG], [NOV-NEG]",insight,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5660,"Such survey style paper is not appropriate to for ICLR.[paper-NEG], [APR-NEG]",paper,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 5665,"The introduction and related work part are clear with strong motivations to me.[introduction-POS, related work-POS], [CLA-POS, CMP-POS]",introduction,related work,,,,,CLA,CMP,,,,POS,POS,,,,,POS,POS,,, 5666,"But section 4 and 6 need a lot of details. [section-NEU, details-NEG], [SUB-NEG]",section,details,,,,,SUB,,,,,NEU,NEG,,,,,NEG,,,, 5667,"2) My comments are as follows: i) this paper claims that this is a general sentence embedding method, however, from what has been described in section 3, I think this dependency is only defined in HTML format document.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5668,"What if I only have pure text document without these HTML structure information?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5669,"So I suggest the authors do not claim that this method is a general-purpose sentence embedding model.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5670,"ii) The authors do not have any descriptions for Figure 3. Equation 1 is also very confusing.[descriptions-NEG, Figure-NEG, Equation-NEG], [SUB-NEG]",descriptions,Figure,Equation,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 5671,"iii) The experiments are insufficient in terms of details. How is the loss calculated? 
How is the detection accuracy calculated?[experiments-NEG, details-NEG], [SUB-NEG]",experiments,details,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5676,"(1) Related work. It is presented in a somewhat ahistoric fashion.[Related work-NEG], [PNF-NEG]",Related work,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5677,"In fact, ideas for evolutionary methods applied to RL tasks have been widely studied, and there is an entire research field called ""neuroevolution"" that specifically looks into which mutation and crossover operators work well for neural networks.[ideas-NEU], [CMP-NEU, SUB-NEU]",ideas,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 5678,"I'm listing a small selection of relevant papers below, but I'd encourage the authors to read a bit more broadly, and relate their work to the myriad of related older methods.[work-NEU], [SUB-NEU, CMP-NEU]",work,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 5679,"Ideally, a more reasonable form of parameter-crossover (see references) could be compared to -- the naive one is too much of a straw man in my opinion.[references-NEG], [CMP-NEG]",references,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 5680,"To clarify: I think the proposed method is genuinely novel, but a bit of context would help the reader understand which aspects are and which aspects aren't.[proposed method-POS], [NOV-POS, SUB-NEU, EMP-NEU]",proposed method,,,,,,NOV,SUB,EMP,,,POS,,,,,,POS,NEU,NEU,, 5681,"(2) Ablations. The proposed method has multiple ingredients, and some of these could be beneficial in isolation: for example a population of size 1 with an interleaved distillation phase where only the high-reward trajectories are preserved could be a good algorithm on its own.[proposed method-NEU, algorithm-NEU], [EMP-NEU]",proposed method,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5682,"Or conversely, GPO without high-reward filtering during crossover.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5683,"Or a simpler genetic algorithm that just preserves the kills off the worst members of the population, and replaces them by (mutated) clones of better ones, etc.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5684,"(3) Reproducibility. There are a lot of details missing; the setup is quite complex, but only partially described.[details-NEG, setup-NEG], [SUB-NEG, EMP-NEG]",details,setup,,,,,SUB,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 5687,"? The x-axis on plots, does it include the data required for crossover/Dagger[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5688,"? 
What are do the shaded regions on plots indicate?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5689,"The loss on pi_S should be made explicit.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5690,"An open-source release would be ideal.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 5691,"Minor points: - naively, the selection algorithm might not scale well with the population size (exhaustively comparing all pairs), maybe discuss that?[algorithm-NEU], [SUB-NEU, EMP-NEU]",algorithm,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 5693,"do as well, and they have a known failure mode of premature convergence because diversity/variance shrinks too fast.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5695,"n- for Figure 2a it would be clearer to normalize such that 1 is the best and 0 is the random policy, instead of 0 being score 0.[Figure-NEU], [PNF-NEU]",Figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5696,"- the language at the end of section 3 is very vague and noncommittal -- maybe just state what you did, and separately give future work suggestions?[section-NEG], [CLA-NEG]",section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5697,"- there are multiple distinct metrics that could be used on the x-axis of plots, namely: wallclock time, sample complexity, number of updates.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5698,"I suspect that the results will look different when plotted in different ways, and would enjoy some extra plots in the appendix.[results-NEU, appendix-NEU], [EMP-NEU]",results,appendix,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5699,"For example the ordering in Figure 6 would be inverted if plotting as a function of sample complexity?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5700,"- the A2C results are much worse, presumably because batchsizes are different?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5701,"So I'm not sure how to interpret them: should they have been run for longer?[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 5702,"Maybe they could be relegated to the appendix?[appendix-NEU], [PNF-NEU]",appendix,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5715,"Pros: - results - novelty of idea - crossover visualization, analysis - scalability;[idea-POS, analysis-POS], [NOV-POS, EMP-POS]",idea,analysis,,,,,NOV,EMP,,,,POS,POS,,,,,POS,POS,,, 5716,"Cons: - missing background - missing ablations - missing details;[background-NEG, ablations-NEG, details-NEG], [SUB-NEG, EMP-NEG]",background,ablations,details,,,,SUB,EMP,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 5722,"While it seems clear in general that many of the connections are not needed and can be made sparse (Figures 1 and 2), I found many parts of this paper fairly confusing, both in how it achieves its objectives, as well as much of the notation and method descriptions.[paper-NEG, notation-NEG, method descriptions-NEG], [EMP-NEG, PNF-NEG]",paper,notation,method descriptions,,,,EMP,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 5724,"Detailed comments and questions: The distribution of connections in windows are first described to correspond to a sort of semi-random spatial downsampling, to get different views distributed over the full image.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5725,"But in the upper layers, the spatial extent can be very small compared to the image size, sometimes even 1x1 depending on the network downsampling structure.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5726,"So are do the windows correspond to spatial windows, and if so, how?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5727,"Or are they 
different (maybe arbitrary) groupings over the feature maps?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5728,"Also a bit confusing is the notation conv2, conv3, etc.[notation-NEG], [PNF-NEU]",notation,,,,,,PNF,,,,,NEG,,,,,,NEU,,,, 5729,"These names usually indicate the name of a single layer within the network (conv2 for the second convolutional layer or series of layers in the second spatial size after downsampling, for example).[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5730,"But here it seems just to indicate the number of CL layers: 2. And p.1 says that the CL layers are those often referred to as FC layers, not conv (though they may be convolutionally applied with spatial 1x1 kernels).[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 5731,"The heuristic for spacing connections in windows across the spatial extent of an image makes intuitive sense, but I'm not convinced this will work well in all situations, and may even be sub-optimal for the examined datasets.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5732,"For example, to distinguish MNIST 1 vs 7 vs 9, it is most important to see the top-left: whether it is empty, has a horizontal line, or a loop.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5733,"So some regions are more important than others, and the top half may be more important than an equally spaced global view.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5734,"So the description of how to space connections between windows makes some intuitive sense, but I'm unclear on whether other more general connections might be even better, including some that might not be as easily analyzed with the scatter metric described.[description-NEU], [IMP-NEG, EMP-NEG]",description,,,,,,IMP,EMP,,,,NEU,,,,,,NEG,NEG,,, 5735,"Another broader question I have is in the distinction between lower and upper layers (those referred to as feature extracting and classification in this paper).[question-NEU], [EMP-NEU]",question,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5736,"It's not clear to me that there is a crisply defined difference here (though some layers may tend to do more of one or the other function, such as we might interpret). [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5737,"So it seems that expanding the investigation to include all layers, or at least more layers, would be good: It might be that more of the classification function is pushed down to lower layers, as the upper layers are reduced in size.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5738,"How would they respond to similar reductions?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5739,"I'm also unsure why on p.6 MNIST uses 2d windows, while CIFAR uses 3d --- The paper mentions the extra dimension is for features, but MNIST would have a features dimension as well at this stage, I think?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5740,"I'm also unsure whether the windows are over spatial extent only, or over features.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5745,"It is misleading to claim any unsupervised or semi-supervised learning based on the *self-organising part* of, for example, eq. 
14, which is merely a result of applying chain rule through the hidden neurons' activation.[claim-NEG], [EMP-NEG]",claim,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5746,"While this model is proposed as an extension of Kohonen's self-organising map (SOM), the paper fails to mention, or compare with, several historically important extension of SOM, which should perhaps at least include the generative topographic mapping (GTM, Bishop et al. 1998), an important probabilistic generalisation of SOM.[model-NEG], [SUB-NEG, CMP-NEG]",model,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 5747,"Finally, the evaluation of the model in comparison with other models is questionable.[evaluation-NEU, model-NEU], [CMP-NEU]",evaluation,model,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 5748,"For example, while the configuration the paper's baseline models are not given, the baseline accuracy of MNIST classification using MLP is 16.2%.[baseline models-NEU, baseline accuracy-NEU], [CMP-NEG]",baseline models,baseline accuracy,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 5749,"This is much worse than the baseline of 12% in LeCun et al. (1998), using simple linear classifier without any preprocessing.[baseline-NEU], [CMP-NEG]",baseline,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 5750,"The 7% accuracy from the proposed model is not in the range of modern deep learning models (The state-of-art accuracy is <0.3%).[accuracy-NEU, proposed model-NEG], [CMP-NEG]",accuracy,proposed model,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 5751,"Similar problem also exist in results from other datasets.[problem-NEU, results-NEG], [EMP-NEG]",problem,results,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 5752,"They are therefore unable to support the paper's claim on robust performance[claim-NEG, performance-NEU], [EMP-NEG]",claim,performance,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5753,"Pros: The question of internal representation is interesting.[question-POS], [EMP-NEU]",question,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 5755,"Comparing learned representations from different models.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5756,"Cons: Not clearly written.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 5757,"Mixing the concept of unsupervised/semi-supervised learning is confusing.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5758,"Model evaluation is questionable.[Model evaluation-NEG], [EMP-NEG]",Model evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5759,"Does not compare existing extensions of SOM.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 5765,"Firstly, the paper does not clearly specify the algorithm it espouses.[algorithm-NEG], [CLA-NEG]",algorithm,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5766,"It states: once the step direction had been determined, we considered that fixed, took the average of gT Hg and gT ∇f over all of the sample points to produce m (α) and then solved for a single αj value You should present pseudo-code for this computation and not leave the reader to determine the detailed order of computation for himself.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5767,"As it stands, it is not only difficult for the reader to infer these details, but also laborious to determine the computational cost per iteration on some network the reader might wish to apply your algorithm to.[algorithm-NEU], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5768,"Since the paper discusses the computational cost of CR only in vague terms, you should at least provide pseudo-code.[pseudocode-NEU], [SUB-NEU]",pseudocode,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5769,"Specifically, consider equation (80) at the very end of the 
appendix and consider the very last term in that equation.[equation-NEU], [EMP-NEU]",equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5772,"You do not specify how you compute this term or quantities involving this term.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 5773,"In a ReLU network, this term is zero due to local linearity, but since you claim that your algorithm is applicable to general networks, this term needs to be analyzed further.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5774,"While the precise algorithm you suggest is unclear, it's purpose is also unclear.[algorithm-NEG], [IMP-NEG]",algorithm,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 5776,"But it is well-known that Hessian-vector multiplication is relatively cheap in deep networks and this fact has been used for several algorithms, e.g. http://www.iro.umontreal.ca/~lisa/pointeurs/ECML2011_CAE.pdf and https://arxiv.org/pdf/1706.04859.pdf. How is your method for computing g^THg different and why is it superior?[method-NEU], [CMP-NEU]",method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5778,"See e.g. https://www.usenix.org/system/files/conference/atc17/atc17-zhang.pdf ### Experiments ### The experiments are very weak.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5779,"In a network where weights are initialized to sensible values, your algorithm is shown not to improve upon straight SGD. You only demonstrate superior results when the weights are badly initialized.[algorithm-NEG, results-NEU], [EMP-NEG]",algorithm,results,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5780,"However, there are a very large number of techniques already that avoid the SGD on ReLU network with bad initial weights problem.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 5781,"The most well-known are batch normalization, He initialization and Adam but there are many others. 
I don't think it's a stretch to consider that problem solved.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5782,"Your algorithm is not shown to address any other problems, but what's worse is that it doesn't even seem to address that problem well.[algorithm-NEG, problem-NEU], [EMP-NEG]",algorithm,problem,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 5783,"While your learning curves are better than straight SGD, I suspect they are well below the respective curves for He init or batchnorm.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5784,"In any case, you would need to compare your algorithm against these state-of-the-art methods if your goal is to overcome bad initializations.[algorithm-NEU], [EMP-NEU, CMP-NEU]",algorithm,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 5785,"Also, in appendix A, you state that CR can't even address weights that were initialized to values that are too large.[appendix-NEU], [EMP-NEG]",appendix,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5787,"While I have heard the claim that deep network optimization suffers from intermediate plateaus before, I have not seen a paper studying / demonstrating this behavior.[claim-NEG], [SUB-NEG, CMP-NEG]",claim,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 5788,"I suggest you cite several papers that do this and then replicate the plateau situations that arose in those papers and show that CR overcomes them, instead of resorting to a platenau situation that is essentially artificially induced by intentionally bad hyperparameter choices.[null], [CMP-NEU, SUB-NEU]",null,,,,,,CMP,SUB,,,,,,,,,,NEU,NEU,,, 5789,"I do not understand why your initial learning rate for SGD in figures 2 and 3 (0.02 and 0.01 respectively) differ so much from the initial learning rate under CR.[figures-NEU], [EMP-NEU]",figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5790,"Aren't you trying to show that CR can find the correct learning rate?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5791,"Wouldn't that suggest that initial learning rate for SGD should be comparable to the early learning rates chosen by CR?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5792,"Wouldn't that suggest you should start SGD with a learning rate of around 2 and 0.35 respectively?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5793,"Since you are annealing the learning rate for SGD, it's going to decline and get close to 0.02 / 0.01 anyway at some point.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5794,"While this may not be as good as CR or indeed batchnorm or Adam, the blue constant curve you are showing does not seem to be a fair representation of what SGD can do.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5795,"You say the minibatch size is 32. For MNIST, this means that 1 epoch is around 1500 iterations.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5796,"That means your plots only show the first epoch of training.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5797,"But MNIST does not converge in 1 epoch.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5798,"You should show the error curve until convergence is reached.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5799,"Same for CIFAR. 
we are not interested in network performance measures such as accuracy and validation error I strongly suspect your readers may be interested in those things.[accuracy-NEU, validation error-NEU], [EMP-NEU]",accuracy,validation error,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 5800,"You should show validation classification error or at least training classification error in addition to cross-entropy error.[error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5803,"The scope of the experiments is limited because only a single network architecture is considered, and it is not a state-of-the art architecture (no convolution, no normalization mechanism, no skip connections).[experiments-NEG], [IMP-NEG]",experiments,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 5804,"You state that you ran experiments on Adam, Adadelta and Adagrad, but you do not show the Adam results.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5806,"This suggests that you omitted the detailed results because they were unfavorable to you.[results-NEG], [SUB-NEG, EMP-NEG]",results,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 5807,"This is, of course, unacceptable! ### (Un)suitability of ReLU for second-order analysis[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5808,"### You claim to use second-order information over the network to set the step size.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5809,"Unfortuantely, ReLU networks do not have second-order information! They are locally linear.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5811,"While this may lead to the Hessian being cheaper to compute, it means it is not representative of the actual behavior of the network.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5812,"In fact, the only second-order information that is brought to bear in your experiments is the second-order information of the error function. [experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5813,"I am not saying that this particular second-order information could not be useful, but you need to make a distinction in your paper between network second-order info and error function second-order info and make explicit that you only use the former in your experiments.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5814,"As far as I know, most second-order papers use either tanh or a smoothed ReLU (such as the smoothed hinge used recently by Koh & Liang (https://arxiv.org/pdf/1703.04730.pdf)) for experiments to overcome the local linearity.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5817,"You have not provided sufficient evidence for this claim.[evidence-NEG], [SUB-NEG]",evidence,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5820,"Also, what if the range of sigma values that need to be considered is larger than the range of alpha values?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5821,"Then setting sigma would take more effort.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 5822,"You do not give precise protocols how you set sigma and how you set alpha for non-CR algorithms.[algorithms-NEG], [EMP-NEG]",algorithms,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5824,"### Minor points ### - Your introduction could benefit from a few more citations[introduction-NEU, citations-NEU], [SUB-NEU, CMP-NEU]",introduction,citations,,,,,SUB,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 5825,"n- The rank of the weighted sum of low rank components (as occurs with mini-batch sampling) is generally larger than the rank of the summed components, however. 
I don't understand this.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 5826,"Every sum can be viewed as a weighted sum and vice versa.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5827,"- Equation (8) could be motivated a bit better.[Equation-NEU], [EMP-NEU]",Equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5829,"- why the name cubic regularization? shouldn't it be something like quadratic step size tuning?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5830,". . . The reason I am giving a 2 instead of a 1 is because the core idea behind the algorithm given seems to me to have potential, but the execution is sorely lacking.[idea-POS, execution-NEG], [EMP-NEG]",idea,execution,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 5831,"A final suggestion: You advertise as one of your algorithms upsides that it uses exact Hessian information.[algorithms-NEU], [EMP-NEU]",algorithms,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5832,"Howwever, since you only care about the scale of the second-order term and not its direction, I suspect exact calculation is far from necessary and you could get away with very cheap approximations, using for example techniques such as mean field analysis (e.g. http://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos.pdf).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5835,"It is surprising that some of these hyperparameters can even be predicted with more than chance accuracy.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5836,"As a simple example, it's possible that there are values of batch size for which the classifiers may become indistinguishable,[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 5837,"yet Table 2 shows that batch size can be predicted with much higher accuracy than chance[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5838,". It would be good to provide insights into under what conditions and why hyperparameters can be predicted accurately[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5839,". That would make the results much more interesting, and may even turn out to be useful for other problems, such as hyperparameter optimization[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5840,". The selection of the queries for kennen-o is not explained. What is the procedure for selecting the queries? 
How sensitive is the performance of kennen-o to the choice of the queries?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5841,"One would expect that there is significant sensitivity, in which case it may even make sense to consider learning to select a sequence of queries to maximize accuracy.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5842,"In table 3, it would be useful to show the results for kennen-o as well, because Split-E seems to be the more realistic problem setting and kennen-o seems to be a more realistic attack than kennen-i or kennen-io.[table-NEU], [PNF-NEU]",table,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5843,"In the ImageNet classifier family prediction, how different are the various families from each other?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5844,"Without going through all the references, it is difficult to get a sense of the difficulty of the prediction task for a non-computer-vision reader.[references-NEU], [CMP-NEU]",references,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5849,"The proposed method can reduce more than 90% memory consumption while keeping original model accuracy in both the sentiment analysis task and the machine translation tasks.[proposed method-POS, accuracy-POS], [EMP-POS]",proposed method,accuracy,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5850,"Overall, the paper is well-written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5851,"The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation.[motivation-POS, idea-POS, approaches-POS, results-POS], [EMP-POS]",motivation,idea,approaches,results,,,EMP,,,,,POS,POS,POS,POS,,,POS,,,, 5852,"I think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer.[paper-NEG, proposed method-NEU], [SUB-NEU, CLA-NEG]",paper,proposed method,,,,,SUB,CLA,,,,NEG,NEU,,,,,NEU,NEG,,, 5853,"For example, the model does not guarantee to be able to convert resulting indices to actual words (i.e., there are multiple words that have completely same indices, such as rows 4 and 6 in Table 5), and also there is no trivial method to restore the original indices from the composite vector.[model-NEG, Table-NEG, method-NEG], [SUB-NEG, EMP-NEG]",model,Table,method,,,,SUB,EMP,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 5854,"As a result, the model couldn't be used also as the proxy of the word prediction (softmax) layer, which is another but usually more critical bottleneck of the machine translation task.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5855,"For reader's comprehension, it would like to add results about whole memory consumption of each model as well.[results-NEG], [SUB-NEG]",results,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5856,"Also, although this paper is focused on only the input embeddings, authors should refer some recent papers that tackle to reduce the complexity of the softmax layer.[paper-NEU, recent papers-NEG], [CMP-NEG]",paper,recent papers,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 5859,"First, if we trained the proposed model with starting from zero (e.g., randomly settling each index value), what results are obtained?[results-NEG], [SUB-NEG]",results,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5860,"Second, What kind of information is distributed in each trained basis vector? 
Are there any common/different things between bases trained by different tasks?[null], [IMP-NEU]]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 5862,"The results are not surprising: * NMT is terrible with noise.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5863,"* But it improves on each noise type when it is trained on that noise type.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5864,"What I like about this paper is that: 1) The experiments are very carefully designed and thorough.[experiments-POS], [EMP-POS, PNF-POS]",experiments,,,,,,EMP,PNF,,,,POS,,,,,,POS,POS,,, 5865,"2) This problem might actually matter.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5866,"Out of curiosity, I ran the example (Table 4) through Google Translate, and the result was gibberish.[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5867,"But as the paper shows, it's easy to make NMT robust to this kind of noise, and Google (and other NMT providers) could do this tomorrow.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5868,"So this paper could have real-world impact.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5869,"3) Most importantly, it shows that NMT's handling of natural noise does *not* improve when trained with synthetic noise; that is, the character of natural noise is very different.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5870,"So solving the problem of natural noise is not so simple... it's a *real* problem.[problem-NEU], [EMP-NEG]",problem,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 5872,"So these methods could be applied in the real world.[methods-POS], [IMP-POS]",methods,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5873,"(It would be excellent if an outcome of this paper was that commercial MT providers answered it's call to provide more realistic noise by actually providing examples.)[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 5874,"There are no fancy new methods or state-of-the-art numbers in this paper.[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5875,"But it's careful, curiosity-driven empirical research of the type that matters, and it should be in ICLR.[empirical research-POS], [APR-POS, REC-POS]",empirical research,,,,,,APR,REC,,,,POS,,,,,,POS,POS,,, 5879,"Experimental results are given on different multi-task instances.[results-NEU], [SUB-NEU]",results,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5880,"The contributions are interesting and experimental results seem promising.[contributions-POS, results-POS], [EMP-POS]",contributions,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 5881,"But the paper is difficult to read due to many different ideas and because some algorithms and many important explanations must be found in the Appendix (ten sections in the Appendix and 28 pages).[paper-NEG, Appendix-NEG, pages-NEG], [PNF-NEG]",paper,Appendix,pages,,,,PNF,,,,,NEG,NEG,NEG,,,,NEG,,,, 5882,"Also, most of the paper is devoted to the study of algorithms for which the expected target scores are known.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5884,"In my opinion, the authors should have put the focus on the DU4AC algorithm which get rids of this assumption.[assumption-NEG], [EMP-NEG]",assumption,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5885,"Therefore, I am not convinced that the paper is ready for publication at ICLR'18.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 5886,"* Differences between BA3C and other algorithms are said to be a consequence of the probability distribution over tasks.[algorithms-NEU], 
[EMP-NEU]",algorithms,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5887,"The gap is so large that I am not convinced on the fairness of the comparison[gap-NEG, comparison-NEG], [CMP-NEG]. For instance, BA3C (Algorithm 2 in Appendix C) does not have the knowledge of the target scores while others heavily rely on this knowledge.[Algorithm-NEG, Appendix-NEG], [CMP-NEG]",gap,comparison,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 5888,"* I do not see how the single output layer is defined.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5889,"* As said in the general comments, in my opinion Section 6 should be developped and more experiments should be done with the DUA4C algorithm.[Section-NEG, experiments-NEG], [SUB-NEG]",Section,experiments,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 5890,"* Section 7.1. It is not clear why degradation does not happen.[Section-NEG], [EMP-NEG]",Section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5891,"It seems to be only an experimental fact.[null], [EMP-NEG]]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5900,"Methods such as the r-GAN score well on the latter by over-representing parts of an object that are likely to be filled.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5901,"Pros: - It is interesting that the latent space models are most successful, including the relatively simple GMM-based model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5902,"Is there a reason that these models have not been as successful in other domains?[models-NEU], [EMP-NEU]",models,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5903,"n - The comparison of the evaluation metrics could be useful for future work on evaluating point cloud GANs.[comparison-NEU], [IMP-NEU, CMP-NEU, SUB-NEU]",comparison,,,,,,IMP,CMP,SUB,,,NEU,,,,,,NEU,NEU,NEU,, 5904,"Due to the simplicity of the method, this paper could be a useful baseline for future work.[method-POS, paper-POS, future work-NEU], [IMP-POS, EMP-POS]",method,paper,future work,,,,IMP,EMP,,,,POS,POS,NEU,,,,POS,POS,,, 5905,"- The part-editing and shape analogies results are interesting, and it would be nice to see these expanded in the main paper.[results-POS], [EMP-POS, SUB-NEU]",results,,,,,,EMP,SUB,,,,POS,,,,,,POS,NEU,,, 5906,"Cons: - How does a model that simply memorizes (and randomly samples) the training set compare to the auto-encoder-based models on the proposed metrics? 
How does the diversity of these two models differ?[model-NEU], [EMP-NEU, CMP-NEU]",model,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 5907,"- The paper simultaneously proposes methods for generating point clouds, and for evaluating them.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5908,"The paper could therefore be improved by expanding the section comparing to prior, voxel-based 3D methods, particularly in terms of the diversity of the outputs.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5909,"Although the performance on automated metrics is encouraging,[performance-POS], [IMP-POS]",performance,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5910,"it is hard to conclude much about under what circumstances one representation or model is better than another.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 5911,"- The technical approach is not particularly novel.[technical approach-NEG], [NOV-NEG]",technical approach,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 5912,"The auto-encoder performs fairly well,[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5913,"but it is just a series of MLP layers that output a Nx3 matrix representing the point cloud, trained to optimize EMD or Chamfer distance.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5914,"The most successful generative models are based on sampling values in the auto-encoder's latent space using simple models (a two-layer MLP or a GMM).[models-NEU], [EMP-NEU]",models,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5915,"- While it is interesting that the latent space models seem to outperform the r-GAN, this may be due to the relatively poor performance of r-GAN than to good performance of the latent space models, and directly training a GAN on point clouds remains an important problem.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 5916,"n - The paper could possibly be clearer by integrating more of the background section into later sections.[paper-NEU, background section-NEU], [PNF-NEU]",paper,background section,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 5917,"Some of the GAN figures could also benefit from having captions.[figures-NEU], [PNF-NEU]",figures,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 5918,"Overall, I think that this paper could serve as a useful baseline for generating point clouds,[paper-NEU], [IMP-POS]",paper,,,,,,IMP,,,,,NEU,,,,,,POS,,,, 5919,"but I am not sure that the contribution is significant enough for acceptance. 
[contribution-NEG], [IMP-NEG, REC-NEG]",contribution,,,,,,IMP,REC,,,,NEG,,,,,,NEG,NEG,,, 5922,"Overall the paper is a clearly written, well described report of several experiments.[paper-POS, experiments-POS], [CLA-POS, SUB-POS]",paper,experiments,,,,,CLA,SUB,,,,POS,POS,,,,,POS,POS,,, 5923,"It shows convincingly that standard NMT models completely break down on both natural oise and various types of input perturbations.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5925,"The extent of the experiments is quite impressive: three different NMT models are tried, and one is used in extensive experiments with various noise combinations.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5926,"This study clearly addresses an important issue in NMT and will be of interest to many in the NLP community.[study-POS, issue-POS], [IMP-POS]",study,issue,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 5927,"The outcome is not entirely surprising (noise hurts and training and the right kind of noise helps)[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5928,"but the impact may be.[impact-POS], [IMP-POS]",impact,,,,,,IMP,,,,,POS,,,,,,POS,,,, 5931,"Also, the bit of analysis in Sections 6.1 and 7.1 is promising, if maybe not so conclusive yet.[Sections-POS], [EMP-POS]",Sections,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5932,"A few constructive criticisms: The way noise is included in training (sec. 6.2) could be clarified (unless I missed it) e.g. are you generating a fixed oisy training set and adding that to clean data?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5933,"Or introducing noise on-line as part of the training? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5934,"If fixed, what sizes were tried?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5935,"More information on the experimental design would help.[experimental design-NEU], [SUB-NEU]",experimental design,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 5936,"Table 6 is highly suspect: Some numbers seem to have been copy-pasted in the wrong cells, eg. 
the Rand line for German, or the Swap/Mid/Rand lines for Czech.[Table-NEG], [PNF-NEG]",Table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 5937,"It's highly unlikely that training on noisy Swap data would yield a boost of +18 BLEU points on Czech -- or you have clearly found a magical way to improve performance.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5938,"Although the amount of experiment is already important, it may be interesting to check whether all se2seq models react similarly to training with noise: it could be that some architecture are easier/harder to robustify in this basic way.[experiment-NEU, architecture-NEU], [SUB-NEU, EMP-NEU]",experiment,architecture,,,,,SUB,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 5940,"I agree with authors that this paper is suitable for ICLR, although it will clearly be of interest to ACL/MT-minded folks.[paper-POS], [APR-POS]",paper,,,,,,APR,,,,,POS,,,,,,POS,,,, 5945,"Results look like significant improvements over standard learning setups.[Results-POS], [EMP-POS]",Results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5946,"Detailed Evaluation: The approach presented is simple, clearly presented, and looks effective on benchmarks.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5947,"In terms of originality, it is different from warping training example for the same task and it is a good extension of previously suggested example mixing procedures with a targeted benefit for improved discriminative power.[originality-POS], [NOV-POS, EMP-POS]",originality,,,,,,NOV,EMP,,,,POS,,,,,,POS,POS,,, 5948,"The authors have also provided extensive analysis from the point of views (1) network architecture, (2) mixing method, (3) number of labels / classes in mix, (4) mixing layers -- really well done due-diligence across different model and task parameters.[analysis-POS], [SUB-POS]",analysis,,,,,,SUB,,,,,POS,,,,,,POS,,,, 5949,"Minor Asks: (1) Clarification on how the error rates are defined.[error rates-NEG], [EMP-NEU]",error rates,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 5950,"Especially since the standard learning task could be 0-1 loss and this new BC learning task could be based on distribution divergence (if we're not using argmax as class label).[task-NEU], [EMP-NEU]",task,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5951,"(2) #class_pairs targets as analysis - The number of epochs needed is naturally going to be higher since the BC-DNN has to train to predict mixing ratios between pairs of classes.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5952,"Since pairs of classes could be huge if the total number of classes is large, it'll be nice to see how this scales.[null], [SUB-NEU, EMP-NEU]",null,,,,,,SUB,EMP,,,,,,,,,,NEU,NEU,,, 5953,"I.e. 
are we talking about a space of 10 total classes or 10000 total classes?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5954,"How does num required epochs get impacted as we increase this class space?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5955,"(3) Clarify how G_1/20 and G_2/20 is important / derived - I assume it's unit conversion from decibels.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 5956,"(4) Please explain why it is important to use the smoothed average of 10 softmax predictions in evaluation...[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5957,"what happens if you just randomly pick one of the 10 crops for prediction?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5962,"Therefore minimizing this upper bound together with a classification loss makes perfect sense and provides a theoretically sound approach to train a robust classifier.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 5963,"This paper provides a gradient of this new upper bound with respect to model parameters so we can apply the usual first order optimization scheme to this joint optimization (loss + upper bound).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5964,"In conclusion, I recommend this paper to be accepted, since it presents a new and feasible direction of a principled approach to train a robust classifier,[paper-POS], [REC-POS, NOV-POS]",paper,,,,,,REC,NOV,,,,POS,,,,,,POS,POS,,, 5965,"and the paper is clearly written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 5966,"There are possible future directions to be developed.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 5967,"1. Apply the sum-of-squares (SOS) method. The paper's SDP relaxation is the straightforward relaxation of Quadratic Program (QP), and in terms of SOS relaxation hierarchy, it is the first hierarchy.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5968,"One can increase the complexity going beyond the first hierarchy, and this should provides a computationally more challenging but tighter upper bound.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5970,"2. Develop a similar relaxation for deep neural networks. The author already mentioned that they are pursuing this direction.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5971,"While developing the result to the general deep neural networks might be hard, residual networks maybe fine thanks to its structure.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5973,"Overall: I had a really hard time reading this paper because I found the writing to be quite confusing.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 5974,"For this reason I cannot recommend publication as I am not sure how to evaluate the paper's contribution.[contribution-NEU], [REC-NEG]",contribution,,,,,,REC,,,,,NEU,,,,,,NEG,,,, 5981,"The authors show the AESMC works better than importance weighted autoencoders and the double ELBO method works even better in some experiments.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 5982,"The proposed algorithm seems novel,[proposed algorithm-POS], [NOV-POS]",proposed algorithm,,,,,,NOV,,,,,POS,,,,,,POS,,,, 5985,"Is the proposed contribution of this paper just to add the double ELBO or does it also include the AESMC (that is, should this paper subsume the anonymized pre-print mentioned in the intro)? 
[proposed contribution-NEU], [EMP-NEU]",proposed contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5987,"The introduction/experiments section of the paper is not well motivated.[introduction/experiments section-NEG], [EMP-NEG]",introduction/experiments section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 5988,"What is the problem the authors are trying to solve with AESMC (over existing methods)? [problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5990,"Is it purely to improve likelihood of the fitted model (see my questions on the experiments in the next section)?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5991,"The experiments feel lacking.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 5992,"There is only one experiment comparing the gains from AESMC, ALT to a simpler (?) method of IWAE.[experiment-NEU], [CMP-NEU]",experiment,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 5993,"We see that they do better but the magnitude of the improvement is not obvious (should I be looking at the ELBO scores as the sole judge?[improvement-NEU], [EMP-NEU]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 5994,"Does AESMC give a better generative model?).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 5995,"The authors discuss the advantages of SMC and say that is scales better than other methods,[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 5996,"it would be good to show this as an experimental result if indeed the quality of the learned representations is comparable.[experimental result-NEU], [EMP-NEU]",experimental result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6001,"I would like to see a description of the algorithm with the pseudo-code in order to understand the flow of the method.[description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6002,"I got confused at several points because it was not clear what was exactly being estimated with the CNN.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6003,"Having an algorithmic environment would make the description easier. [description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6004,"I know that authors are going to publish the code, but this is not enough at this point of the revision.[null], [REC-NEU]",null,,,,,,REC,,,,,,,,,,,NEU,,,, 6005,"Physical processes in Machine learning have been studied from the perspective of Gaussian processes. Just to mention a couple of references ""Linear latent force models using Gaussian processes"" and Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations[null], [CMP-NEU, EMP-NEU]",null,,,,,,CMP,EMP,,,,,,,,,,NEU,NEU,,, 6006,"In Theorem 2, do you need to care about boundary conditions for your equation? [equation-NEU], [EMP-NEU]",equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6007,"I didn't see any mention to those in the definition for I(x,t). You only mention initial conditions.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6008,"How do you estimate the diffusion parameter D?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6009,"Are you assuming isotropic diffusion? Is that realistic? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6010,"Can you provide more details about how you run the data assimilation model in the experiments? 
Did you use your own code?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6015,"The results look convincing for the generation experiments in the paper, both from class-specific (Figure 1) and multi-class generators (Figure 6).[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6016,"The quantitative results also support the visuals.[quantitative results-POS], [EMP-POS]",quantitative results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6017,"One question that arises is whether the point cloud approaches to generation is any more valuable compared to voxel-grid based approaches.[question-NEU], [EMP-NEU]",question,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6019,"show very convincing and high-resolution shape generation results,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6020,"whereas the details seem to be washed out for the point cloud results presented in this paper.[details-NEG], [SUB-NEG]",details,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6021,"I would like to see comparison experiments with voxel based approaches in the next update for the paper.[comparison experiments-NEU], [SUB-NEU, CMP-NEU]",comparison experiments,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 6023,"@article{tatarchenko2017octree, title {Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs}, author {Tatarchenko, Maxim and Dosovitskiy, Alexey and Brox, Thomas}, journal {arXiv preprint arXiv:1703.09438}, year {2017} } In light of the authors' octree updates score is updated.[updates-NEU, score-NEU], [REC-NEU]",updates,score,,,,,REC,,,,,NEU,NEU,,,,,NEU,,,, 6024,"I expect these updates to be reflected in the final version of the paper itself as well. [updates-NEU, paper-NEU], [SUB-NEU]",updates,paper,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 6031,"The organization and presentation of the paper need some improvement.[organization-NEG, presentation-NEG], [PNF-NEU]",organization,presentation,,,,,PNF,,,,,NEG,NEG,,,,,NEU,,,, 6032,"For example, the authors defer many technical details.[technical details-NEG], [EMP-NEG]",technical details,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6033,"To make the paper accessible to the readers, they could provide more intuitions in the first 9 pages.[intuitions-NEU], [CLA-NEU]",intuitions,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 6034,"There are also some typos: For example, the dimension of a is inconsistent.[typos-NEG], [PNF-NEG]",typos,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6035,"In the abstract, a is an m-dimensional vector, and on Page 2, a is a d-dimensional vector.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 6036,"On Page 8, P(B) should be a degree-4 polynomial of B.[Page-NEU], [PNF-NEU]",Page,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6037,"The paper does not contains any experimental results on real data.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6044,"The proposed time-series MIL problem formulation makes sense. [problem formulation-POS], [EMP-POS]",problem formulation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6045,"The RNN approach is novel to this setting, if somewhat incremental.[approach-POS], [NOV-POS]",approach,,,,,,NOV,,,,,POS,,,,,,POS,,,, 6046,"One very positive aspect is that results are reported exploring the impact of the choice of recurrent neural network architecture, pooling function, and attention mechanism. [results-POS], [IMP-POS, EMP-POS]",results,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 6047,"Results on a second dataset are reported in the appendix, which greatly increases confidence in the generalizability of the experiments. 
[Results-NEU, dataset-NEU], [EMP-NEU, PNF-NEU]",Results,dataset,,,,,EMP,PNF,,,,NEU,NEU,,,,,NEU,NEU,,, 6048,"One or more additional datasets would have helped further solidify the results, although I appreciate that medical datasets are not always easy to obtain.[datasets-NEU], [SUB-NEU]",datasets,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6049,"Overall, this is a reasonable paper with no obvious major flaws. [paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6050,"The novelty and impact may be greater on the application side than on the methodology side. [novelty-NEU, methodology-NEU], [NOV-NEU]",novelty,methodology,,,,,NOV,,,,,NEU,NEU,,,,,NEU,,,, 6051,"Minor suggestions: -The term relational multi-instance learning seems to suggest a greater level of generality than the work actually accomplishes. [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6052,"The proposed methods can only handle time-series / longitudinal dependencies, not arbitrary relational structure. [proposed methods-NEU], [EMP-NEU]",proposed methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6053,"Moreover, multi-instance learning is typically viewed as an intermediary level of structure in between propositional learning (i.e. the standard supervised learning setting) and fully relational learning, so the relational multi-instance learning terminology sounds a little strange. Cf.: De Raedt, L. (2008). Logical and relational learning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6054,"Springer Science & Business Media. -Pg 3, a capitalization typo: the Multi-instance learning framework[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6055,"-The equation for the bag classifier on page 4 refers to the threshold-based MI assumption, which should be attributed to the following paper: Weidmann, N., Frank, E. & Pfahringer, B. 2003. A two-level learning method for generalized multi-instance problems. [equation-NEU], [CMP-NEU]",equation,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6058,"- Pg 5, Table 1 vs table 1 - be consistent.[Pg-NEU, Table-NEG], [PNF-NEG]",Pg,Table,,,,,PNF,,,,,NEU,NEG,,,,,NEG,,,, 6059,"-A comparison to other deep learning MIL methods, i.e. those that do not exploit the time-series nature of the problem, would be valuable. [comparison-NEU], [CMP-NEU]",comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6065,"The paper should be checked for grammatical errors, such as e.g. consistent use of (no) hyphen in low-dimensional (or low dimensional).[paper-NEG, grammatical errors-NEG, hyphen-NEG], [PNF-NEG]",paper,grammatical errors,hyphen,,,,PNF,,,,,NEG,NEG,NEG,,,,NEG,,,, 6066,"The abbreviations should be written out on the first use, e.g. MLP, MDS, LLE, etc.[abbreviations-NEG], [PNF-NEG]",abbreviations,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6067,"In the introduction the authors claim that the complexity of parametric techniques does not depend on the number of data points, or that moving to parametric techniques would reduce memory and computational complexities. 
This is in general not true.[complexity-NEG, techniques-NEG], [EMP-NEG]",complexity,techniques,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6068,"Even if the number of parameters is small, learning them might require complex computations on the whole data set.[parameters-NEG, data set-NEG], [EMP-NEG]",parameters,data set,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6069,"On the other hand, even if the number of parameters is equal to the number of data points, the computations could be trivial, thus resulting in a complexity of O(N).[parameters-NEG, data points-NEG, complexity-NEG], [EMP-NEG]",parameters,data points,complexity,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 6070,"In section 2.1, the authors claim Spectral techniques are non-parametric in nature; this is wrong again.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6071,"E.g. PCA can be formulated as MDS (thus spectral), but can be seen as a parametric mapping which can be used to project new words.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6072,"In section 2.2, it says observation that the double centering....[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6074,"In section 3, the authors propose they technique, which should be faster and require less data than the previous methods, but to support their claim, they do not perform an analysis of computational complexity.[analysis-NEG], [SUB-NEG]",analysis,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6075,"It is not quite clear from the text what the resulting complexity would be.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6076,"With N as number of data points and M as number of landmarks, from the description on page 4 it seems the complexity would be O(N + M^2), but the steps 1 and 2 on page 5 suggest it would be O(N^2 + M^2).[page-NEG], [CLA-NEG]",page,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6077,"Unfortunately, it is also not clear what the complexity of previous techniques, e.g DrLim, is.[previous techniques-NEG], [CLA-NEG]",previous techniques,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6078,"Figure 3, contrary to text, does not provide a visualisation to the sampling mechanism.[Figure-NEG], [SUB-NEG]",Figure,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6079,"In the experiments section, can you provide a citation for ADAM and explain how the parameters were selected?[experiments section-NEG], [SUB-NEG]",experiments section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6080,"Also, it is not meaningful to measure the quality of a visualisation via the MDS fit.[visualisation-NEG], [EMP-NEG]",visualisation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6082,"In figure 4a, x-axis should be umber of landmarks.[figure-NEU], [PNF-NEU]",figure,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6083,"It is not clear why the equation 6 holds.[equation-NEG], [EMP-NEG]",equation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6085,"It is also not clear how exactly the equation 7 is evaluated.[equation-NEG], [EMP-NEG]",equation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6086,"It says By varying the number of layers and the number of nodes..., but the nodes and layer are not a part of the equation.[equation-NEG], [EMP-NEG]",equation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6087,"The notation for equation 8 is not explained.[notation-NEG, equation-NEG], [EMP-NEG]",notation,equation,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6088,"Figure 6a shows visualisations by different techniques and is evaluated by looking at it.[Figure-POS], [PNF-POS]",Figure,,,,,,PNF,,,,,POS,,,,,,POS,,,, 6093,"Experiments show that it is suited to create censoring representations for increased anonymisation of data in the context of wearables.[Experiments-POS], 
[EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6094,"Experiments a are satisfying and show good performance when compared to other methods.[Experiments-POS, performance-POS], [CMP-POS]",Experiments,performance,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 6095,"It could be made clearer how significance is tested given the frequent usage of the term.[significance-NEU], [EMP-NEU]",significance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6096,"The idea is slightly novel, and the framework otherwise state-of-the-art.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 6097,"The paper is well written, but can use some proof-reading.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6101,"The work defends the point of view that Bayesian inference is the right approach to attack this problem and address difficulties in past implementations.[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6102,"The paper is well written,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6103,"the problem is described neatly in conjunction with the past work,[problem-POS], [CMP-POS]",problem,,,,,,CMP,,,,,POS,,,,,,POS,,,, 6104,"and the proposed algorithm is supported by experiments.[proposed algorithm-POS, experiments-POS], [EMP-POS]",proposed algorithm,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6105,"The work is a useful addition to the community [work-POS], [IMP-POS]",work,,,,,,IMP,,,,,POS,,,,,,POS,,,, 6106,". My main concern focus on the validity of the proposed model in harder tasks such as the Atari experiments in Kirkpatrick et. al. (2017) or the split CIFAR experiments in Zenke et. al. (2017). [proposed model-NEU], [EMP-NEU]",proposed model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6107,"Even though the experiments carried out in the paper are important, they fall short of justifying a major step in the direction of the solution for the continual learning problem.[experiments-NEU], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEU,,,,,,NEG,NEG,,, 6111,"Additionally, this idea not only fill the relatviely lacking of theoretical results for GAN or WGAN, but also provide a new perspective to modify the GAN-type models.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6112,"But this saddle point model reformulation in section 2 is quite standard, with limited theoretical analysis in Theorem 1.[section-POS, Theorem-POS], [EMP-POS]",section,Theorem,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6113,"As follows, the resulting algorithm 1 is also standard primal-dual method for a saddle point problem.[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6114,"Most important I think, the advantage of considering GAN-type model as a saddle point model is that first--order methods can be designed to solve it.[advantage-POS, model-POS], [EMP-POS]",advantage,model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6115,"But the numerical experiments part seems to be a bit weak, because the MINST or CIFAR-10 dataset is not large enough to test the extensibility for large-scale cases.[numerical experiments part-NEG], [SUB-NEG, EMP-NEG]]",numerical experiments part,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 6116,"This paper presents a novel application of machine learning using Graph NN's on ASTs to identify incorrect variable usage and predict variable names in context.[paper-POS, application-POS], [NOV-POS]",paper,application,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 6117,"It is evaluated on a corpus of 29M SLOC, which is a substantial strength of the paper.[corpus-POS, paper-POS], [EMP-POS]",corpus,paper,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6118,"The 
paper is to be commended for the following aspects: 1) Detailed description of GGNNs and their comparison to LSTMs[description-POS, comparison-POS], [SUB-POS, CLA-POS]",description,comparison,,,,,SUB,CLA,,,,POS,POS,,,,,POS,POS,,, 6119,"2) The inclusion of ablation studies to strengthen the analysis of the proposed technique[ablation studies-POS, analysis-POS], [SUB-POS, EMP-POS]",ablation studies,analysis,,,,,SUB,EMP,,,,POS,POS,,,,,POS,POS,,, 6120,"3) Validation on real-world software data[Validation-POS, data-POS], [EMP-POS]",Validation,data,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6121,"4) The performance of the technique is reasonable enough to actually be used.[technique-POS], [EMP-POS]",technique,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6122,"In reviewing the paper the following questions come to mind: 1) Is the false positive rate too high to be practical?[rate-NEU], [IMP-NEU, EMP-NEU]",rate,,,,,,IMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 6123,"How should this be tuned so developers would want to use the tool?[tool-NEU], [EMP-NEU]",tool,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6124,"2) How does the approach generalize to other languages? (Presumably well, but something to consider for future work.)[approach-NEU, future work-NEU], [IMP-NEU, SUB-NEU]",approach,future work,,,,,IMP,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 6125,"Despite these questions, though, this paper is a nice addition to deep learning applications on software data and I believe it should be accepted.[paper-POS], [REC-POS]]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 6126,"This paper introduces a method for learning new tasks, without interfering with previous tasks, using conceptors.[method-NEU], [NOV-POS]",method,,,,,,NOV,,,,,NEU,,,,,,POS,,,, 6129,"In Section 2 the authors review conceptors.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6130,"This method is an algebraic method closely related to spanning subspaces and SVD.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6131,"The main advantage of using conceptors is their trait of Boolean logic: i.e., their ability to be added and multiplied naturally.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6132,"In section 3 the authors elaborate on the reviewed conceptors method and show how to adapt this algorithm to SGD with back-propagation.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6136,"They show that their method suffers less degradation on permuted MNIST.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6137,"Also, they compared the method to EWC and IMM on disjoint MNIST and again got the best performance.[method-POS, performance-POS], [EMP-POS]",method,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6138,"In general, unlike what the authors suggest, I do not believe this method is how biological agents perform their tasks in real life.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6139,"Nevertheless, the authors show that their method indeed reduces the interference generated by a new task on the old learned tasks. 
[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6140,"I think that this work might interest the community since such methods might be part of the tools that practitioners have in order to cope with learning new tasks without destroying the previous ones.[work-POS], [IMP-POS, EMP-POS]",work,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 6141,"What is missing is the following: I think that without any additional effort, a network can learn a new task in parallel to other task, or some other techniques may be used which are not bound to any algebraic methods.[techniques-NEU], [SUB-NEG]",techniques,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 6142,"Therefore, my only concern is that in this comparison the work bounded to very specific group of methods, and the question of what is the best method for continual learning remained open. [work-NEG, method-NEU], [IMP-NEG, EMP-NEG]",work,method,,,,,IMP,EMP,,,,NEG,NEU,,,,,NEG,NEG,,, 6145,"The affect lexical seems to be a very interesting resource (although I'm not sure what it means to call it 'state of the art'), and definitely support the endeavour to make language models more reflective of complex semantic and pragmatic phenomena such as affect and sentiment.[language models-NEU], [NOV-NEU, EMP-POS]",language models,,,,,,NOV,EMP,,,,NEU,,,,,,NEU,POS,,, 6146,"The justification for why we might want to do this with word embeddings in the manner proposed seems a little unconvincing to me:[justification-NEG], [EMP-NEG]",justification,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6147,"- The statement that 'delighted' and 'disappointed' will have similar contexts is not evident to me at least (other then them both being participle / adjectives).[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6148,"- Affect in language seems to me to be a very contextual phenomenon.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6149,"Only a tiny subset of words have intrinsic and context-free affect.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6150,"Most affect seems to me to come from the use of words in (phrasal, and extra-linguistic) contexts, so a more context-dependent model, in which affect is computed over phrases or sentences, would seem to be more appropriate. Consider words like 'expensive', 'wicked', 'elimination'...[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6151,"The model proposes several applications (sentiment prediction, predicting email tone, word similarity) where the affect-based embeddings yield small improvements.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6152,"However, in different cases, taking different flavours of affect information (V, A or D) produces the best score, so it is not clear what to conclude about what sort of information is most useful.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6153,"It is not surprising to me that an algorithm that uses both WordNet and running text to compute word similarity scores improves over one that uses just running text.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6154,"It also not surprising that adding information about affect improves the ability to predict sentiment and the tone of emails.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6155,"To understand the importance of the proposed algorithm (rather than just the addition of additional data), I would like to see comparison with various different post-processing techniques using WordNet and the affect lexicon (i.e. not just Bollelaga et al.) 
including some much simpler baselines.[proposed algorithm-NEU, comparison-NEU], [CMP-NEU, SUB-NEU]",proposed algorithm,comparison,,,,,CMP,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 6156,"For instance, what about averaging WordNet path-based distance metrics and distance in word embedding space (for word similarity), and other ways of applying the affect data to email tone prediction?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6159,"While the idea would be interesting in general,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6160,"unfortunately the experiment section is very much toy example so that it is hard to know the applicability of the proposed approach to any more reasonable scenario.[experiment section-NEG, proposed approach-NEG], [EMP-NEG]",experiment section,proposed approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6161,"Any sort of remotely convincing experiment is left to 'future work'.[experiment-NEG], [SUB-NEG, IMP-NEU]",experiment,,,,,,SUB,IMP,,,,NEG,,,,,,NEG,NEU,,, 6163,"I am quite convinced that any somewhat correctly setup vanilla deep RL algorithm would solve these sort of tasks/ ensemble of tasks almost instantly out of the box.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6164,"Figure 5: Looks to me like the baseline is actually doing much better than the proposed methods?[Figure-NEU, baseline-POS, proposed methods-NEG], [EMP-NEG]",Figure,baseline,proposed methods,,,,EMP,,,,,NEU,POS,NEG,,,,NEG,,,, 6165,"Figure 6: Looking at those 2D PCAs, I am not sure any of those method really abstracts the rendering away.[Figure-NEU, method-NEU], [EMP-NEU]",Figure,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6166,"Anyway, it would be good to have a quantified metric on this, which is not just eyeballing PCA scatter plots.[null], [SUB-NEU, EMP-NEU]",null,,,,,,SUB,EMP,,,,,,,,,,NEU,NEU,,, 6171,"In this paper, the authors show how training set can be generated automatically satisfying the conditions of Cai et al.'s paper.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6175,"It indeeds shows a dramatic reduction in the number of training samples for the three experiments that have been shown in the paper.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6177,"My feeling from reading the paper is that it is rather incremental over Cai et al.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6178,"I am impressed by the results of the three experiments that have been shown here, specifically, the reduction in the training samples once they have been generated is significant.[results-POS, experiments-POS], [EMP-POS]",results,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6179,"But these are also the same set of experiments performed by Cai et al.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6180,"Given the original number of traces generated is huge, I do not understand, why this method is at all practical.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6181,"This also explains why the authors have just tested the performance on extremely small sized data. It will not scale.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6182,"So, I am hesitant accepting the paper.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 6183,"I would have been more enthusiastic if the authors had proposed a way to combine the training space exploration as well as removing redundant traces together to make the whole process more scalable and done experiments on reasonably sized data. 
[experiments-NEU], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEU,,,,,,NEG,NEG,,, 6187,"The major contributions are two folds: firstly, proposing the interesting option elimination problem for multi-step reading comprehension; and secondly, proposing the elimination module where a eliminate gate is used to select different orthogonal factors from the document representations.[contributions-POS], [EMP-POS, IMP-POS]",contributions,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 6188,"Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6189,"The elimination module is interesting,[module-POS], [EMP-POS]",module,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6190,"but the usefulness of ""elimination"" is not well justified for two reasons.[reasons-NEG], [EMP-NEG]",reasons,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6191,"First, the improvement of the proposed model over the previous state of the art is limited.[improvement-NEG], [EMP-NEG]",improvement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6192,"Second, the model is built upon GAR until the elimination module, then according to Table 1 it seems to indicate that the elimination module does not help significantly (0.4% improvement).[module-NEG, Table-NEG], [EMP-NEG]",module,Table,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6193,"In order to show the usefulness of the elimination module, the model should be exactly built on the GAR with an additional elimination module (i.e. after removing the elimination module, the performance should be similar to GAR but not something significantly worse with a 42.58% accuracy).[model-NEU], [SUB-NEG]",model,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 6194,"Then we can explicitly compare the performance between GAR and the GAR w/ elimination module to tell how much the new module helps.[performance-NEU, module-NEU], [EMP-NEU]",performance,module,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6195,"Other issues: 1) Is there any difference to directly use $x$ and $h^z$ instead of $x^e$ and $x^r$ to compute $tilde{x}_i$?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 6196,"Even though the authors find the orthogonal vectors, they're gated summed together very soon.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6197,"It would be better to show how much ""elimination"" and ""subtraction"" effect the final performance, besides the effect of subtraction gate. 2)[performance-NEU], [SUB-NEU]A figure showing the model architecture and the corresponding QA process will better help the readers understand the proposed model. [figure-NEU, model architecture-NEU, proposed model-NEU], [SUB-NEU]3) $c_i$ in page 5 is not defined.[page-NEG], [PNF-NEG]What's the performance of only using $s_i$ for answer selection or replacing $x^L$ with $s_i$ in score function?[performance-NEU], [EMP-NEU]",performance,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6198,"4) It would be better to have the experiments trained with different $n$ to show how multi-hop effects the final performance, besides the case study in Figure 3[experiments-NEU, performance-NEU, Figure-NEU], [EMP-NEU]",experiments,performance,Figure,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 6199,". Minor issues: 1) In Eqn. (4), it would be better to use a vector as the input of softmax. 
[Eqn-NEU], [EMP-NEG]",Eqn,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 6200,"2) It would be easier for discussion if the authors could assign numbers to every equation.[discussion-NEU, equation-NEU], [PNF-NEU]]",discussion,equation,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 6207,"The authors have definitely found an interesting untapped source of interesting images.[null], [SUB-POS]",null,,,,,,SUB,,,,,,,,,,,POS,,,, 6208,"Cons: - The authors name their method order network but the method they propose is not really parts of the network but simple preprocessing steps to the input of the network.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6209,"- The paper is incomplete without the appendices.[appendices-NEG], [SUB-NEG]",appendices,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6210,"In fact the paper is referring to specific figures in the appendix in the main text.[appendix-NEU], [PNF-NEU]",appendix,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6211,"- the authors define color invariance as a being invariant to which specific color an object in an image does have, e.g. whether a car is red or green, but they don't think about color invariance in the broader context - color changes because of lighting, shades, .....[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6213,"This is also problematic, because while the proposed method works for a car that is green or a car that is red, it will fail for a car that is black (or white) - because in both cases the colorfulness is not relevant.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6214,"Note that this is specifically interesting in the context of the task at hand (cars) and many cars being, white, grey (silver), or black.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6215,"- the difference in the results in table 1 could well come from the fact that in all of the invariant methods except for ord the input is a WxHx1 matrix, but for ord and cifar the input is a WxHx3 matrix.[results-NEU, table-NEU], [EMP-NEU]",results,table,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6217,"- the results in the figure 4: it's very unlikely that the differences reported are actually significant.[results-NEG, figure-NEU], [IMP-NEG]",results,figure,,,,,IMP,,,,,NEG,NEU,,,,,NEG,,,, 6218,"It appears that all methods perform approximately the same - and the authors pick a specific line (25k steps) as the relevant one in which the RGB-input space performs best.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6219,"The proposed method does not lead to any relevant improvement.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6220,"Figure 6/7: are very hard to read. I am still not sure what exactly they are trying to say.[Figure-NEG], [PNF-NEG, CLA-NEG]",Figure,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 6221,"Minor comments: - section 1: called for is network -> called for is a network[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6222,"- section 1.1: And and -> And[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6223,"- section 1.1: Appendix -> Appendix C - section 2: Their exists many -> There exist many - section 2: these transformation -> these transformations - section 2: what does the wallpaper groups refer to? - section 2: are a groups -> are groups - section 3.2: reference to a non-existing figure - section 3.2/Training: 2499999 iterations steps? 
- section 3.2/Training: longer as suggested -> longer than suggested[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6227,"They adapt an existing method for deriving adversarial examples to act under a projection space (effectively a latent-variable model) which is defined through a transformations distribution.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6229,"The paper is clear to follow and the objective employed appears to be sound.[paper-POS, objective-POS], [CLA-POS, EMP-POS]",paper,objective,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 6230,"I like the idea of using 3D generation, and particularly, 3D printing, as a means of generating adversarial examples -- there is definite novelty in that particular exploration for adversarial examples.[idea-POS, novelty-POS], [NOV-POS, EMP-POS]",idea,novelty,,,,,NOV,EMP,,,,POS,POS,,,,,POS,POS,,, 6231,"I did however have some concerns: 1. What precisely is the distribution of transformations used for each experiment?[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6232,"Is it a PCFG?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6233,"Are the different components quantised such that they are discrete rvs, or are there still continuous rvs? (For example, is lighting discretised to particular locations or taken to be (say) a 3D Gaussian?)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6234,"And on a related note, how were the number of sampled transformations chosen?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6235,"Knowing the distribution (and the extent of it's support) can help situate the effectiveness of the number of samples taken to derive the adversarial input.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6236,"2. While choosing the distance metric in transformed space, LAB is used, but for the experimental results, l_2 is measured in RGB space -- showing the RGB distance is perhaps not all that useful given it's not actually being used in the objective.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6237,"I would perhaps suggest showing LAB, maybe in addition to RGB if required.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6238,"3. Quantitative analysis: I would suggest reporting confidence intervals; perhaps just the 1st standard deviation over the accuracies for the true and 'adversarial' labels -- the min and max don't help too much in understanding[Quantitative analysis-NEU], [EMP-NEU, SUB-NEU]",Quantitative analysis,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 6239,"n what effect the monte-carlo approximation of the objective has on things.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6240,"Moreover, the min and max are only reported for the 2D and rendered 3D experiments -- it's missing for the 3D printing experiment.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6241,"4. 
Experiment power: While the experimental setup seems well thought out and structured, the sample size (i.e., the number of entities considered) seems a bit too small to draw any real conclusions from.[experimental setup-POS], [EMP-NEU]",experimental setup,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 6242,"There are 5 exemplar objects for the 3D rendering experiment and only 2 for the 3D printing one.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 6243,"While I understand that 3D printing is perhaps not all that scalable to be able to rattle off many models, the 3D rendering experiment surely can be extended to include more models?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6244,"Were the turtle and baseball models chosen randomly, or chosen for some particular reason?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6245,"Similar questions for the 5 models in the 3D rendering experiment.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6246,"5. 3D printing experiment transformations: While the 2D and 3D rendering experiments explicitly state that the sampled transformations were random, the 3D printing one says over a variety of viewpoints.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6247,"Were these viewpoints chosen randomly?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6248,"Most of these concerns are potentially quirks in the exposition rather than any issues with the experiments conducted themselves.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6249,"For now, I think the submission is good for a weak accept [submission-NEU], [REC-NEU]",submission,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 6250,"If the authors address my concerns, and/or correct my potential misunderstanding of the issues, I'd be happy to upgrade my review to an accept.[issues-NEU], [REC-NEU]",issues,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 6257,"The motivation is clear and the proposed methods are very sound.[methods-POS], [EMP-POS]",methods,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6258,"Experiments are carried out very carefully.[Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6259,"I have only minor concerns about this paper: - The experiments are designed to achieve comparable BLEU with improved latency.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6260,"I'd like to know whether any BLEU improvement might be possible under similar latency, for instance, by increasing the model size given that inference is already fast enough.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 6261,"- I'd also like to see other language pairs with distorted word alignment, e.g., Chinese/English, to further strengthen this work, though it might have little impact given that attention already captures a sort of alignment.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 6262,"- What is the impact of the external word aligner quality?[null], [SUB-NEG, IMP-NEU]",null,,,,,,SUB,IMP,,,,,,,,,,NEG,NEU,,, 6263,"For instance, it would be possible to introduce noise in the word alignment results or use smaller data to train a model for the word aligner.[results-NEU, data-NEU, model-NEU], [EMP-NEU]",results,data,model,,,,EMP,,,,,NEU,NEU,NEU,,,,NEU,,,, 6264,"- The positional attention is rather unclear and it would be better to revise it.[null], [EMP-NEG, CLA-NEG]",null,,,,,,EMP,CLA,,,,,,,,,,NEG,NEG,,, 6266,"Summary: This paper proposes a new approach to tackle the problem of prediction under the shift in design, which consists of the shift in policy (conditional distribution of 
treatment given features) and the shift in domain (marginal distribution of features).[paper-POS, new approach-POS], [NOV-POS]",paper,new approach,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 6270,"These theoretical results justify the objective function shown in Equation 8.[results-POS, Equation-POS], [EMP-POS]",results,Equation,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6271,"Experiments on the IHDP dataset demonstrate the advantage of the proposed approach compared to its competing alternatives.[Experiments-POS, proposed approach-POS], [EMP-POS]",Experiments,proposed approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6272,"Comments: 1) This paper is well motivated.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6273,"For the task of prediction under the shift in design, shift-invariant representation learning (Shalit 2017) is biased even in the infinite data limit.[task-POS], [EMP-POS]",task,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6274,"On the other hand, although re-weighting methods are unbiased, they suffer from the drawbacks of high variance and unknown optimal weights.[methods-NEG, drawbacks-NEG], [EMP-NEG]",methods,drawbacks,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6275,"The proposed approach aims to overcome these drawbacks.[proposed approach-POS], [EMP-POS]",proposed approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6276,"2) The theoretical results justify the optimization procedures presented in section 5.[theoretical results-POS, section-POS], [EMP-POS]",theoretical results,section,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6277,"Experimental results on the IHDP dataset confirm the advantage of the proposed approach.[Experimental results-POS, proposed approach-POS], [EMP-POS]",Experimental results,proposed approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6279,"In order to make sure the second equality in Equation 2 holds, p_mu(y|x,t) = p_pi(y|x,t) should hold as well.[Equation-NEU], [EMP-NEU]",Equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6281,"4) Two drawbacks of previous methods motivate this work, including the bias of representation learning and the high variance of re-weighting.[drawbacks-POS, previous methods-POS], [CMP-POS, EMP-POS]",drawbacks,previous methods,,,,,CMP,EMP,,,,POS,POS,,,,,POS,POS,,, 6282,"According to Lemma 1, the proposed method is unbiased for the optimal weights in the large data limit.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6283,"However, is there any theoretical guarantee or empirical evidence to show the proposed method does not suffer from the drawback of high variance?[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6284,"5) Experiments on synthetic datasets, where both the shift in policy and the shift in domain are simulated and therefore can be controlled, would better demonstrate how the performance of the proposed approach (and those baseline methods) changes as the degree of design shift varies.[Experiments-NEU, proposed approach-NEU], [EMP-NEU]",Experiments,proposed approach,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6285,"6) Besides IHDP, did the authors run experiments on other real-world datasets, such as Jobs, Twins, etc?[experiments-NEU, datasets-NEU], [SUB-NEU]]",experiments,datasets,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 6290,"Paper Strengths: - Despite being a simple technique, the proposed pixel deconvolution layer is novel and interesting.[Paper-POS, technique-POS], [NOV-POS, EMP-POS]",Paper,technique,,,,,NOV,EMP,,,,POS,POS,,,,,POS,POS,,, 6295,"The work is valuable, but has room for improvement.[work-NEU], [IMP-NEU]",work,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 
6297,"This is not a criticism, however, it is difficult to see the reason for including the structured low-rank experiments in the paper (itAs a reader, I found it difficult to understand the actual procedures used.[experiments-NEU, procedures-NEU], [EMP-NEU]",experiments,procedures,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6298,"For example, what is the difference between the random mask update and the subsampling update (why are there no random mask experiments after figure 1, even though they performed very well)?[difference-NEU, figure-NEU], [EMP-NEU]",difference,figure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6299,"How is the structured update learned?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6300,"It would be very helpful to include algorithms.[algorithms-NEU], [SUB-NEU]",algorithms,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6301,"It seems like a good strategy is to subsample, perform Hadamard rotation, then quantise.[strategy-NEU], [EMP-NEU]",strategy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6302,"For quantization, it appears that the HD rotation is essential for CIFAR, but less important for the reddit data.[quantization-NEU], [EMP-NEU]",quantization,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6303,"It would be interesting to understand when HD works and why, and perhaps make the paper more focused on this winning strategy, rather than including the low-rank algo.[paper-NEU, strategy-NEU], [EMP-NEU]",paper,strategy,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6304,"If convenient, could the authors comment on a similarly motivated paper under review at iclr 2018: VARIANCE-BASED GRADIENT COMPRESSION FOR EFFICIENT DISTRIBUTED DEEP LEARNING[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6305,"pros: - good use of intuition to guide algorithm choices[algorithm choices-POS], [EMP-POS]",algorithm choices,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6306,"- good compression with little loss of accuracy on best strategy[accuracy-POS], [EMP-POS]",accuracy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6307,"n- good problem for FA algorithm / well motivated[problem-POS, algorithm-POS], [EMP-POS]",problem,algorithm,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6308,"- cons: - some experiment choices do not appear well motivated / inclusion is not best choice[experiment choices-NEG], [EMP-NEG]",experiment choices,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6309,"- explanations of algos / lack of 'algorithms' adds to confusion[explanations-NEG], [SUB-NEG]",explanations,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6313,"with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), the RDA brings it more on-par with the standard methods.[results-NEG, standard methods-NEU], [EMP-NEG]",results,standard methods,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 6314,"On the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original.[paper-POS], [CLA-POS, NOV-POS]",paper,,,,,,CLA,NOV,,,,POS,,,,,,POS,POS,,, 6315,"On the negative side, in almost all tasks the RDA is on par or worse than the standard GRU[tasks-NEU], [EMP-NEG]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 6316,"- except for MultiCopy where it trains faster,[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6317,"but not to better results and it looks like the difference is between few and very-few training steps anyway.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6318,"The most interesting result is language modeling on Hutter Prize Wikipedia, where RDA very significantly improves upon RWA - but again, only matches a standard GRU or LSTM.[result-NEU], 
[EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6319,"So the results are not strongly convincing, and the paper lacks any mention of newer work on attention.[results-NEG], [EMP-NEG, CMP-NEG]",results,,,,,,EMP,CMP,,,,NEG,,,,,,NEG,NEG,,, 6321,"To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks.[evaluation-NEU], [EMP-NEU, REC-NEU]",evaluation,,,,,,EMP,REC,,,,NEU,,,,,,NEU,NEU,,, 6328,"(3) To correct the issue with the first layer in (2) it is suggested to use a random rotation, or simply use continues weights in that layer.[issue-NEU], [EMP-NEU]",issue,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6329,"The first observation is interesting, is explained clearly and convincingly, and is novel to the best of my knowledge.[observation-POS], [NOV-POS, EMP-POS, CLA-POS]",observation,,,,,,NOV,EMP,CLA,,,POS,,,,,,POS,POS,POS,, 6330,"The second observation is much less clear to me.[observation-NEG], [CLA-NEG]",observation,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6331,"Specifically, a.tThe author claim that ""A sufficient condition for delta u to be the same in both cases is L'(x f(u)) ~ L'(x g(u))"".[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6332,"However, I'm not sure if I see why this is true: in a binarized neural net, u also changes, since the previous layers are also binarized.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6333,"b.tRelated to the previous issue, it is not clear to me if in figure 3 and 5, did the authors binarize the activations of that specific layer or all the layers?[issue-NEG, figure-NEG], [CLA-NEG]",issue,figure,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 6334,"If it is the first case, I would be interested to know the latter: It is possible that if all layers are binarized, then the differences between the binarized and non-binarized version become more amplified.[differences-NEU], [SUB-NEU, EMP-NEU]",differences,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 6335,"c.tFor BNNs, where both the weights and activations are binarized, shouldn't we compare weights*activations to (binarized weights)*(binarized activations)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6336,"d.tTo make sure, in figure 4, the permutation of the activations was randomized (independently) for each data sample?[figure-NEU, data sample-NEU], [EMP-NEU]",figure,data sample,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6337,"If not, then C is not proportional the identity matrix, as claimed in section 5.3.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6338,"e.tIt is not completely clear to me that batch-normalization takes care of the scale constant (if so, then why did XNOR-NET needed an additional scale constant?),perhaps this should be further clarified.[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 6339,"The third observation seems less useful to me.[observation-NEG], [EMP-NEG]",observation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6340,"Though a random rotation may improve angle preservation in certain cases (as demonstrated in Figure 4), it may hurt classification performance (e.g., distinguishing between 6 and 9 in MNIST).[cases-NEU, Figure-NEU, performance-NEG], [EMP-NEG]",cases,Figure,performance,,,,EMP,,,,,NEU,NEU,NEG,,,,NEG,,,, 6341,"Furthermore, since it uses non-binary operations, it is not clear if this rotation may have some benefits (in terms of resource efficiency) over simply keeping the input layer non-binarized.[efficiency-NEU], [EMP-NEU]",efficiency,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6342,"To 
summarize, the first part is interesting and nice,[part-POS], [EMP-POS]",part,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6343,"the second part was not clear to me, and the last part does not seem very useful.[part-NEG], [CLA-NEG, EMP-NEG]",part,,,,,,CLA,EMP,,,,NEG,,,,,,NEG,NEG,,, 6346,"Following the author's response and revisions, I have raised my grade.[author's response-POS, grade-POS], [REC-POS]]",author's response,grade,,,,,REC,,,,,POS,POS,,,,,POS,,,, 6351,"The paper is reasonably well-written with clear background and diagrams for the overall architecture.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 6352,"The idea is novel and seems to be relatively effective in practice although I do believe that it has a lot of moving parts and introduces a considerable amount of hyperameters (which generally are problematic to tune in causal inference tasks).[idea-POS], [EMP-POS, NOV-POS]",idea,,,,,,EMP,NOV,,,,POS,,,,,,POS,POS,,, 6353,"Other than that, I have the following questions and remarks: - I might have misunderstood the motivation but the GAN objective for the `G` network is a bit weird; why is it a good idea to push the counterfactual outcomes close to the factual outcomes (which is what the GAN objective is aiming for)?[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6354,"Intuitively, I would expect that different treatments should have different outcomes and the distribution of the factual and counterfactual `y` should differ.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6355,"- According to which metric did you perform hyper-parameter optimization on all of the experiments?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6356,"- From the first toy experiment that highlights the importance of each of the losses it seems that the addition of the supervised loss greatly boosts the performance, compared to just using the GAN objectives.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6357,"What was the relative weighting on those losses in general?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6358,"- From what I understand the `I` network is necessary for out-of-sample predictions where you don't have the treatment assignment, but for within sample prediction you can also use the `G` network[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6359,". 
What is the performance gap between the `I` and `G` networks on the within-sample set?[performance gap-NEU], [EMP-NEU]",performance gap,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6360,"Furthermore, have you experimented with constructing `G` in a way that can represent `I` by just zeroing the contribution of `y_f` and `t`?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6361,"In this way you can tie the parameters and avoid the two-step process (since `G` and `I` represent similar things).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6362,"- For figure 2 what was the hyper parameters for CFR?[figure-NEU], [EMP-NEU]",figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6363,"CFR includes a specific knob to account for the larger mismatches between treated and control distributions.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6364,"Did you do hyper-parameter tuning for all of the methods in this task?[task-NEU], [EMP-NEU]",task,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6365,"- I would also suggest to not use ""between"" when referring to the KL-divergence as it is not a symmetric quantity.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6366,"Also it should be pointed out that for IHDP the standard evaluation protocol is 1000 replications (rather than 100) so there might be some discrepancy on the scores due to that.[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6369,"In turn, the paper proposes a model that performs attention at all levels of abstraction, which achieves the state of the art in SQuAD.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6371,"Strengths: - The paper is well-written and clear.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6372,"- I really liked Table 1 and Figure 2; it nicely summarizes recent work in the field.[Table-POS, Figure-POS, recent work-POS], [PNF-POS, CMP-POS]",Table,Figure,recent work,,,,PNF,CMP,,,,POS,POS,POS,,,,POS,POS,,, 6373,"- The multi-level attention is novel and indeed seems to work, with convincing ablations.[ablations-POS], [NOV-POS, EMP-POS]",ablations,,,,,,NOV,EMP,,,,POS,,,,,,POS,POS,,, 6374,"- Nice engineering achievement, reaching the top of the leaderboard (in early October).[achievement-POS], [IMP-POS]",achievement,,,,,,IMP,,,,,POS,,,,,,POS,,,, 6375,"Weaknesses: - The paper is long (10 pages) but relatively lacks substances.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6376,"Ideally, I would want to see the visualization of the attention at each level (i.e. how they differ across the levels) and also possibly this model tested on another dataset (e.g. 
TriviaQA).[model-NEU], [SUB-NEG, EMP-NEU]",model,,,,,,SUB,EMP,,,,NEU,,,,,,NEG,NEU,,, 6378,"after fully connected layer with activation, which seems quite standard.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 6379,"Still useful to know that this works better, so would recommend to tone down a bit regarding the paper's contribution.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6380,"Minor: - Probably figure 4 can be drawn better.[figure-NEG], [PNF-NEG]",figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6381,"Not easy to understand nor concrete.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 6383,"Questions: - Contextualized embedding seems to give a lot of improvement in other works too.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6384,"Could you perform ablation without contextualized embedding (CoVe)?[ablation-NEU], [EMP-NEU]",ablation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6391,"The data points form the nodes of the graph with the edge weights being learned, using ideas similar to message passing algorithms similar to Kearnes et al and Gilmer et al.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6392,"This method generalizes several existing approaches for few-shot learning including Siamese networks, Prototypical networks and Matching networks.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6393,"The authors also conduct experiments on the Omniglot and mini-Imagenet data sets, improving on the state of the art.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6394,"There are a few typos and the presentation of the paper could be improved and polished more.[typos-NEG, presentation-NEG], [PNF-NEG]",typos,presentation,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 6395,"I would also encourage the authors to compare their work to other unrelated approaches such as Attentive Recurrent Comparators of Shyam et al, and the Learning to Remember Rare Events approach of Kaiser et al, both of which achieve comparable performance on Omniglot.[work-NEU, performance-NEU], [SUB-NEU, CMP-NEU]",work,performance,,,,,SUB,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 6396,"I would also be interested in seeing whether the approach of the authors can be used to improve real world translation tasks such as GNMT. [approach-NEU], [EMP-NEU, IMP-NEU]",approach,,,,,,EMP,IMP,,,,NEU,,,,,,NEU,NEU,,, 6398,"The paper proves the weak convergence of the regularised OT problem to Kantorovich / Monge optimal transport problems.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6399,"I like the weak convergence results, but this is just weak convergence.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6400,"It appears to be an overstatement to claim that the approach early-optimally transports one distribution to the other (Cf e.g. 
Conclusion).[approach-NEG, Conclusion-NEG], [EMP-NEG]",approach,Conclusion,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6401,"There is a penalty to pay for choosing a small epsilon -- it seems to be visible from Figure 2.[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6402,"Also, near-optimality would refer to some parameters being chosen in the best possible way.[parameters-NEU], [EMP-NEU]",parameters,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6403,"I do not see that from the paper.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6404,"However, the weak convergence results are good.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6405,"A better result, hinting on how optimal this can be, would have been to guarantee that the solution to regularised OT is within f(epsilon) from the optimal one, or from within f(epsilon) from the one with a smaller epsilon (more possibilities exist).[result-POS], [EMP-POS]",result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6406,"This is one of the things experimenters would really care about -- the price to pay for regularisation compared to the unknown unregularized optimum.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6407,"I also like the choice of the two regularisers and wonder whether the authors have tried to make this more general, considering other regularisations ?[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6408,"After all, the L2 one is just an approximation of the entropic one.[approximation-NEU], [EMP-NEU]",approximation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6409,"Typoes: 1- Kanthorovich -> Kantorovich (Intro) 2- Cal C <-> C (eq. 4)[Typoes-NEG], [PNF-NEG]]",Typoes,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6412,"Reward shaping allows the proposed model to outperform simpler baselines, and experiments show the model generalizes to unseen graphs.[proposed model-POS, experiments-POS], [NOV-POS]",proposed model,experiments,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 6413,"While this paper is as far as I can tell novel in how it does what it does,[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 6414,"the authors have failed to convey to me why this direction of research is relevant.[research-NEG], [IMP-NEG]",research,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 6417,"An interesting avenue would be if the subtask graphs were instead containing some level of uncertainty, or representing stochasticity, or anything that more traditional methods are unable to deal with efficiently, then I would see a justification for the use of neural networks.[traditional methods-NEU, justification-NEU], [CMP-NEU]",traditional methods,justification,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 6418,"Alternatively, if the subtask graphs were learned instead of given, that would open the door to scaling an general learning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6419,"Yet, this is not discussed in the paper.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6420,"Another interesting avenue would be to learn the options associated with each task, possibly using the information from the recursive neural networks to help learn these options. 
The proposed algorithm relies on fairly involved reward shaping, in that it is a very strong signal of supervision on what the next action should be.[proposed algorithm-NEU], [EMP-NEU]",proposed algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6421,"Additionaly, it's not clear why learning seems to completely fail without the pre-trained policy.[learning-NEG], [EMP-NEG]",learning,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6423,"This also makes me question the generality of the approach since the pre-trained policy is rather simple while still providing an apparently strong score.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6424,"In your experiments, you do not compare with any state-of-the-art RL or hierarchical RL algorithm on your domain, and use a new domain which has no previous point of reference.[experiments-NEG], [SUB-NEG, CMP-NEG]",experiments,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 6425,"It it thus hard to properly evaluate your method against other proposed methods.[method-NEG], [CMP-NEU]",method,,,,,,CMP,,,,,NEG,,,,,,NEU,,,, 6426,"What the authors propose is a simple idea,[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6427,"everything is very clearly explained,[null], [CLA-POS]",null,,,,,,CLA,,,,,,,,,,,POS,,,, 6428,"the experiments are somewhat lacking[experiments-NEG], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 6429,"but at least show an improvement over more a naive approach,[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6430,"however, due to its simplicity, I do not think that this paper is relevant for the ICLR conference.[paper-NEG], [REC-NEG]",paper,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 6431,"Comments: - It is weird to use both a discount factor gamma *and* a per-step penalty.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6432,"While not disallowed by theory, doing both is redundant because they enforce the same mechanism.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6433,"- It seems weird that the smoothed logical AND/OR functions do not depend on the number of inputs; that is unless there are always 3 inputs (but it is not explained why; logical functions are usually formalised as functions of 2 inputs) as suggested by Fig 3.[Fig-NEG], [EMP-NEG, SUB-NEG]",Fig,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 6434,"- It does not seem clear how the whole training is actually performed (beyond the pre-training policy).[training-NEG], [CLA-NEG]",training,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6435,"The part about the actor-critic learning seems to lack many elements (whole architecture training?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6436,"why is the policy a sum of p^{cost} and p^{reward}? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6437,"is there a replay memory? How are the samples gathered?).[samples-NEU], [SUB-NEU]",samples,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6438,"(On the positive side, the appendix provides some interesting details on the tasks generations to understand the experiments.)[appendix-POS, details-POS, experiments-POS], [EMP-POS]",appendix,details,experiments,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 6439,"- The experiments cover different settings with different task difficulties.[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6440,"However, only one type of tasks is used.[tasks-NEG], [SUB-NEG]",tasks,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6441,"It would be good to motivate (in addition to the paragraph in the intro) the cases where using the algorithm described in the paper may be (or not?) 
the only viable option and/or compare it to other algorithms.[algorithm-NEU], [CMP-NEU]",algorithm,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6442,"Even though not mandatory, it would clearly also be a good addition to demonstrate more convincing experiments in a different setting.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6443,"- The episode length (time budget) was randomly set for each episode in a range such that 60%-80% of subtasks are executed on average for both training and testing.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6444,"--> this does not seem very precise: under what policy is the 60-80% defined?[policy-NEU], [EMP-NEU]",policy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6445,"Is the time budget different for each new generated environment?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6446,"- why wait until exactly 120 epochs for NTS-RProp before fine-tuning with actor-critic? [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6447,"It seems that much less would be sufficient from figure 4?[figure-NEU], [EMP-NEG]",figure,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 6448,"- In the table 1 caption, it is written same graph structure with training set --> do you mean same graph structure as the training set?[table-NEG], [CLA-NEG, PNF-NEG]]",table,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 6449,"This paper extends the recurrent weight average (RWA, Ostmeyer and Cowell, 2017) in order to overcome the limitation of the original method while maintaining its advantage.[paper-POS, limitation-POS], [EMP-NEU]",paper,limitation,,,,,EMP,,,,,POS,POS,,,,,NEU,,,, 6450,"The motivation of the paper and the approach taken by the authors are sensible, such as adding discounting to introduce a forget mechanism to the RWA and manipulating the attention and squash functions.[motivation-POS, approach-POS], [EMP-POS]",motivation,approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6452,"I think the same method can be applied to GRUs or LSTMs[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6453,". Some parameters might be redundant,[parameters-NEG], [EMP-NEG]",parameters,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6454,"however, assuming that this kind of attention mechanism is helpful for learning long-term dependencies and can be computed efficiently, it would be nice to see the outcomes of this combination.[outcomes-NEU], [IMP-POS]",outcomes,,,,,,IMP,,,,,NEU,,,,,,POS,,,, 6455,"Is there any explanation why LSTMs perform so badly compared to GRUs, the RWA and the RDA?[explanation-NEU], [EMP-NEU]",explanation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6456,"Overall, the proposed method seems to be very useful for the RWA.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6463,"Experimental results on the synthetic data show the proposed approach, called GANITE, is more robust to the existence of selection bias, which is defined as the mismatch between the treated and controlled distributions, compared to its competing alternatives. 
Experiments on three real world datasets show GANITE achieves the best performance on two datasets, including Twins and Jobs.[Experimental results-POS, performance-POS], [CMP-POS]",Experimental results,performance,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 6464,"It does not perform very well on the IHDP dataset.[dataset-POS], [EMP-POS]",dataset,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6466,"Comments 1) This paper is well written.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6467,"The background and related works are well organized.[background-POS, related works-POS], [PNF-POS]",background,related works,,,,,PNF,,,,,POS,POS,,,,,POS,,,, 6468,"2) To the best of my knowledge, this is the first work that applies GAN to ITE estimation.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 6469,"3) Experiments on the synthetic data and the real-world data demonstrate the advantage of the proposed approach.[Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6470,"4) The authors directly present the formulation without providing sufficient motivations.[motivations-NEU], [EMP-NEU]",motivations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6471,"Could the authors provide more details or intuitions on why GAN would improve the performance of ITE estimation compared to approaches that learn representations to minimize the distance between the distributions of different treatment groups, such as CFR_WASS?[details-NEU], [EMP-NEU]",details,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6472,"5) As is pointed out by the authors, the proposed approach does not perform well when the dataset is small, such as the IHDP data.[proposed approach-NEG, dataset-NEU], [EMP-NEG]",proposed approach,dataset,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 6473,"However, in practice, a lot of real-world datasets might have small sample size, such as the LaLonde dataset.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6474,"Did the authors plan to extend the model to handle those small-sized data sets without completely changing the model.[model-NEU], [IMP-NEU]",model,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 6475,"6) When training the ITE GAN, the objective is to learn the conditional distribution of the potential outcome vector given the feature vector.[objective-NEU], [EMP-NEU]",objective,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6476,"Did the authors try the option of replacing ITE GAN with multi-task regression?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6477,"Will the performance become worse using multi-task regression? 
[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6478,"I think this comparison would be a sanity check on the utility of using GAN instead of regression models for ITE estimation.[comparison-NEU], [CMP-NEU]",comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6483,"My main issue with the paper is that it does not do a good job justifying the main advantages of the proposed approach.[paper-NEG, main advantages-NEG], [CLA-NEG]",paper,main advantages,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 6484,"It appears that the iterative method should result in direct improvement with additional samples and inference iterations.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6485,"I am supposing this is at the test time.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6486,"It is not clear exactly when this will be useful.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6487,"I believe an iterative approach is also possible to perform with the standard VAE, e.g., by bootstrapping over the input data and then using the iterative scheme of Rezende et. al.[approach-NEG], [SUB-NEG]",approach,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6489,"The paper should also discuss the additional difficulty that arises when training the proposed model and compare them to training of standard inference networks in VAE.[paper-NEG, proposed model-NEG], [SUB-NEG]",paper,proposed model,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 6490,"In summary, the paper needs to do a better job in justifying the advantages obtained by the proposed method.[paper-NEG, proposed method-NEG], [CLA-NEG]]",paper,proposed method,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 6491,"This paper presents a new covariance function for Gaussian processes (GPs) that is equivalent to a Bayesian deep neural network with a Gaussian prior on the weights and an infinite width.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 6493,"Pros: The result highlights an interesting relationship between deep nets and Gaussian processes.[result-POS], [EMP-POS]",result,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6495,"The paper is clear and very well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6496,"The analysis of the phases in the hyperparameter space is interesting and insightful.[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6497,"On the other hand, one of the great assets of GPs is the powerful way to tune their hyperparameters via maximisation of the marginal likelihood but the authors have left this for future work![future work-NEU], [IMP-NEU]",future work,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 6498,"Cons: Although the computational complexity of computing the covariance matrix is given, no actual computational times are reported in the article.[article-NEG], [EMP-NEG]",article,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6504,"The paper is generally well written, easy to read and understand, and the results are compelling.[paper-POS, results-POS], [EMP-POS, CLA-POS]",paper,results,,,,,EMP,CLA,,,,POS,POS,,,,,POS,POS,,, 6505,"The proposed GGNN approach outperforms (bi-)LSTMs on both tasks.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6506,"Because the tasks are not widely explored in the literature, it could be difficult to know how crucial exploiting graphically structured information is, so the authors performed several ablation studies to analyze this out.[literature-NEU, ablation studies-POS], [SUB-POS]",literature,ablation studies,,,,,SUB,,,,,NEU,POS,,,,,POS,,,, 6507,"Those results show that as structural information is removed, the GGNN's 
performance diminishes, as expected.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6508,"As a demonstration of the usefulness of their approach, the authors ran their model on an unnamed open-source project and claimed to find several bugs, at least one of which potentially reduced memory performance.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6509,"Overall the work is important, original, well-executed, and should open new directions for deep learning in program analysis.[work-POS], [NOV-POS, IMP-POS]",work,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 6510,"I recommend it be accepted.[null], [REC-POS]]",null,,,,,,REC,,,,,,,,,,,POS,,,, 6511,"The quality of this paper is good.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6512,"The presentation is clear [null], [PNF-POS]",null,,,,,,PNF,,,,,,,,,,,POS,,,, 6513,"but I find lack of description of a key topic.[description-NEG], [SUB-NEG]",description,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6514,"The proposed model is not very innovative but works fine for the DQA task.[proposed model-NEG], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6515,"For the TE task, the proposed method does not perform better than the state-of-the-art systems.[proposed method-NEG], [CMP-NEG]",proposed method,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6516,"- As ESIM is one of the key components in the experiments, you should briefly introduce ESIM and explain how you incorporated with your vector representations into ESIM.[experiments-NEG], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 6517,"- The reference of ESIM is not correct.[reference-NEG], [EMP-NEG]",reference,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6518,"- Figure 1 is hard to understand.[Figure-NEG], [PNF-NEG, CLA-NEG]",Figure,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 6519,"What do you indicate with the box and arrow?[box-NEG, arrow-NEG], [PNF-NEG]",box,arrow,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 6520,"Arrows seem to have some different meanings.[Arrows-NEU], [PNF-NEU]",Arrows,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6521,"- What corpus did you use to pre-train word vectors?[corpus-NEU], [EMP-NEU]",corpus,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6522,"- As the proposed method was successful for the QA task,[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6523,"you need to explain QA data sets and how the questions are solved.[data sets-NEG], [SUB-NEG]",data sets,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6524,"- I also expect performance and error analysis of the task results.[performance-NEG, error analysis-NEG, results-NEG], [SUB-NEG]",performance,error analysis,results,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 6525,"- To claim task-agnostic, you need to try to apply your method to other NLP tasks as well.[method-NEU], [SUB-NEU, EMP-NEU]",method,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 6526,"- Page 3. Sigma is not defined.[Page-NEG], [SUB-NEG]]",Page,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6530,"Quality: The quality of the writing, notation, motivation, and results analysis is low[quality-NEG], [CLA-NEG]",quality,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6532,"The paper motivates that TD is divergent with function approximation, and then goes on to discuss MSPBE methods that have strong convergence results, without addressing why a new approach is needed.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6533,"There are many missing references: ETD, HTD, mirror-prox methods, retrace, ABQ. 
Q-sigma.[references-NEG], [CMP-NEG, SUB-NEG]",references,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 6534,"This is a very active area of research and the paper needs to justify their approach.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6535,"The paper has straightforward technical errors and naive statements: e.g. the equation for the loss of TD takes the norm of a scalar. [technical errors-NEG], [EMP-NEG]",technical errors,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6536,"The paper claims that it is not well-known that TD with function approximation ignores part of the gradient of the MSVE. There are many others.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6538,"Exp1 seems to indicate that the new method does not converge to the correct solution.[Exp1-NEG], [EMP-NEG]",Exp1,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6539,"The grid world experiment is not conclusive as important details like the number of episodes and how parameters were chosen was not discussed.[experiment-NEG], [EMP-NEG]",experiment,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6540,"Again exp3 provides little information about the experimental setup.[experimental setup-NEG], [EMP-NEG]",experimental setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6541,"Clarity: The clarity of the text is fine, though errors make things difficult sometimes.[text-NEU, errors-NEU], [CLA-NEU]",text,errors,,,,,CLA,,,,,NEU,NEU,,,,,NEU,,,, 6542,"For example The Bhatnagar 2009 reference should be Maei.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 6543,"Originality: As mentioned above this is a very active research area, and the paper makes little effort to explain why the multitude of existing algorithms are not suitable. [null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 6544,"Significance: Because of all the things outlined above, the significance is below the bar for this round.[significance-NEG], [IMP-NEG]",significance,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 6549,"A very important type of dialog act is switching topic, often done to ensure that the conversation will continue. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6553,"The empirical evaluation demonstrates the effectiveness of the approach.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6554,"The paper is also well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6555,"I do not have any suggestion for improvement. 
This is good work that should be published.[work-POS], [REC-POS]",work,,,,,,REC,,,,,POS,,,,,,POS,,,, 6561,"Quality: The quality is very good.[quality-POS], [EMP-POS]",quality,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6562,"The paper is technically correct and nontrivial.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6563,"All proofs are provided and easy to follow.[proofs-POS], [PNF-POS, SUB-POS, EMP-POS]",proofs,,,,,,PNF,SUB,EMP,,,POS,,,,,,POS,POS,POS,, 6564,"Clarity: The paper is very clear.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6565,"Related work is clearly cited, and the novelty of the paper well explained.[Related work-POS, novelty-POS], [CLA-POS, NOV-POS]",Related work,novelty,,,,,CLA,NOV,,,,POS,POS,,,,,POS,POS,,, 6566,"The technical proofs of the paper are in appendices, making the main text very smooth.[technical proofs-POS, appendices-NEU, main text-POS], [PNF-POS]",technical proofs,appendices,main text,,,,PNF,,,,,POS,NEU,POS,,,,POS,,,, 6567,"Originality: The originality is weak.[originality-NEG], [NOV-NEG]",originality,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 6568,"It extends a series of recent papers correctly cited.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6569,"There is some originality in the proof which differs from recent related papers.[originality-NEU], [EMP-NEU]",originality,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6570,"Significance: The result is not completely surprising,[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6571,"but it is significant given the lack of theory and understanding of deep learning.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 6572,"Although the model is not really relevant for deep networks used in practice, the main result closes a question about characterization of critical points in simplified models if neural network, which is certainly interesting for many people.[model-NEU, main result-NEU], [IMP-NEU]",model,main result,,,,,IMP,,,,,NEU,NEU,,,,,NEU,,,, 6574,"The paper claims to develop a novel method to map natural language queries to SQL.[method-NEU], [NOV-NEU]",method,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 6578,"I am confident that point 1 has been used in several previous works.[previous works-NEU], [NOV-NEG]",previous works,,,,,,NOV,,,,,NEU,,,,,,NEG,,,, 6579,"Although point 2 seems novel, I am not convinced that it is significant enough for ICLR.[null], [APR-NEG, NOV-POS]",null,,,,,,APR,NOV,,,,,,,,,,NEG,POS,,, 6580,"I was also not sure why there is a need to copy items from the input question, since all SQL query nouns will be present in the SQL table in some form.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6581,"What will happen if we restrict the copy mechanism to only copy from SQL table.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6582,"The references need work. 
There are repeated entries for the same reference (one form arxiv and one from conference).[references-NEG], [PNF-NEG]",references,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6583,"Please cite the conference version if one is available, many arxiv references have conference versions.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 6584,"Rebuttal Response: I am still not confident about the significance of contribution 1, so keeping the score the same.[contribution-NEU], [IMP-NEU, REC-NEU]",contribution,,,,,,IMP,REC,,,,NEU,,,,,,NEU,NEU,,, 6587,"It is a well-written paper,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6588,"however, I am not very convinced by its motivation, the proposed model and the experimental results.[motivation-NEG, proposed model-NEG, experimental results-NEG], [EMP-NEG]",motivation,proposed model,experimental results,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 6589,"First of all, the improvement is rather limited.[improvement-NEG], [SUB-NEG]",improvement,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6590,"It is only 0.4 improvement overall on the RACE dataset;[improvement-NEG], [EMP-NEG]",improvement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6591,"although it outperforms GAR on 7 out of 13 categories;[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6592,"but why is it worse on the other 6 categories?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6593,"I don't see any convincing explanations here.[explanations-NEG], [EMP-NEG]",explanations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6594,"Secondly, in terms of the development of reading comprehension models, I don't see why we need to care about eliminating the irrelevant options.[models-NEG], [EMP-NEG]",models,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6595,"It is hard to generalize to any other RC/QA tasks.[tasks-NEG], [EMP-NEG]",tasks,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6596,"If the point is that the options can add useful information to induce better representations for passage/question, there should be some simple baselines in the middle that this paper should compare to. [baselines-NEU], [CMP-NEG]",baselines,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 6597,"The two baselines SAR and GAR both only induce a representation from paragraph/question, and finally compare to the representation of each option.[baselines-NEG], [SUB-NEG, EMP-NEG]",baselines,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 6598,"Maybe a simple baseline is to merge the question and all the options and see if a better document representation can be defined.[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6599,"Some visualizations/motivational examples could be also useful to understand how some options are eliminated and how the document representation has been changed based on that. 
[examples-NEG], [SUB-NEU]",examples,,,,,,SUB,,,,,NEG,,,,,,NEU,,,, 6602,"Assumptions are comparable to existing results for OVERSIMPLIFIED shallow neural networks.[Assumptions-NEU, results-NEU], [EMP-NEU]",Assumptions,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6603,"The main results analyzed: 1) Correspondence of non-degenerate stationary points between empirical risk and the population counterparts.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6604,"2) Uniform convergence of the empirical risk to population risk.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6605,"3) Generalization bound based on stability.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6608,"Thus, the obtained non-degenerate stationary deep linear network should be equivalent to the linear regression model Y = XW.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6609,"Should the risk bound only depend on the dimensions of the matrix W?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6610,"2) The comparison with Bartlett & Maass's (BM) work is a bit unfair, because their result holds for polynomial activations while this paper handles linear activations.[result-NEG, paper-NEU], [CMP-NEG]",result,paper,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 6611,"Thus, the authors need to refine BM's result for comparison.[result-NEU], [CMP-NEU, EMP-NEU]",result,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 6617,"The methods reviewed prior work which the authors refer to as ""parallel order"", which assumed that subsequences of the feature hierarchy align across tasks and sharing between tasks occurs only at aligned depths, whereas in this work the authors argue that this shouldn't be the case.[prior work-NEU], [CMP-NEU]",prior work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6621,"The authors evaluate their approach on MNIST, UCI, Omniglot and CelebA datasets and compare their approach to ""parallel ordering"" and ""permuted ordering"" and show the performance gain.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6622,"Positives: - The paper is clearly written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6623,"- The idea is novel and impactful if it's evaluated properly and consistently.[idea-POS], [NOV-POS, IMP-POS]",idea,,,,,,NOV,IMP,,,,POS,,,,,,POS,POS,,, 6624,"- The authors did a great job summarizing prior work and motivating their approach.[prior work-POS], [CMP-POS]",prior work,,,,,,CMP,,,,,POS,,,,,,POS,,,, 6625,"Negatives: - Multi-class classification problem is one incarnation of Multi-Task Learning, there are other problems where the tasks are different (classification and localization) or auxiliary (depth detection for navigation).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6626,"The CelebA dataset could have been a good platform for testing different tasks, attribute classification and landmark detection.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6627,"(TODO) I would recommend that the authors test their approach on such a setting.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6628,"- Figure 6 is a bit confusing, the authors do not explain why the ""Permuted Order"" performs worse than ""Parallel Order"". 
[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6629,"Their assumptions and results as of this section should be consistent in that soft order > permuted order > parallel order > single task.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6630,"(TODO) I would suggest that the authors follow up on this result, which would be beneficial for the reader.[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6631,"- Figures 4(a) and 5(b): the results are shown on validation loss; how about testing error, similar to Figure 6(a)?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6632,"How about results for the CelebA dataset? It could be useful to visualize them as was done for MNIST, Omniglot and UCI.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6633,"(TODO) I would suggest that the authors make the results consistent across all datasets and use the same metric such that it's easy to compare.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6634,"Notation and Typos: - Figure 2 is a bit confusing, how come the accuracy decreases with an increasing number of training samples? Please clarify.[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6635,"1- If I assume that the Y-Axis is incorrectly labeled and it is Training Error instead, then the permuted order is doing worse than the parallel order.[null], [PNF-NEU, EMP-NEG]",null,,,,,,PNF,EMP,,,,,,,,,,NEU,NEG,,, 6636,"2- If I assume that the X-Axis is incorrectly labeled and the numbering is reversed (starting from max and ending at 0), then I think it would make sense.[null], [PNF-NEU, EMP-POS]",null,,,,,,PNF,EMP,,,,,,,,,,NEU,POS,,, 6637,"- Figure 4 is very small, and it is not easy to read the text.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6638,"Does single task mean average performance over the tasks?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6639,"- In eq. (3), choosing sigma_i for a task-specific permutation of the network is a bit confusing, since it could be thought of as a sigmoid function; I suggest using a different symbol.[eq-NEU], [CLA-NEU]",eq,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 6641,"Their approach and idea are very interesting and relevant, and addressing these suggestions will make the paper strong for publication.[approach-POS, idea-POS, paper-NEU], [REC-NEU]",approach,idea,paper,,,,REC,,,,,POS,POS,NEU,,,,NEU,,,, 6643,"- This paper is not well written and is incomplete.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6644,"There is no clear explanation of what exactly the authors want to achieve in the paper, what exactly is their approach/contribution, experimental setup, and analysis of their results. [explanation-NEG, approach-NEG, experimental setup-NEG, results-NEG], [CLA-NEG]",explanation,approach,experimental setup,results,,,CLA,,,,,NEG,NEG,NEG,NEG,,,NEG,,,, 6645,"- The paper is hard to read due to many abbreviations, e.g., the last paragraph on page 2.[paper-NEG], [CLA-NEG, PNF-NEG]",paper,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 6646,"- The format is inconsistent. Section 1 is numbered, but not the other sections.[format-NEG], [PNF-NEG]",format,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6647,"- On page 2, what do the numbers mean at the end of each sentence? Probably the figures? 
[figures-NEU], [PNF-NEU]",figures,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6648,"- in page 2, in this figure: which figure is this referring to?[page-NEU, figure-NEU], [PNF-NEU]",page,figure,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 6649,"Comments on prior work: p 1: authors write: vanilla backpropagation (VBP) was proposed around 1987 Rumelhart et al. (1985).[prior work-NEU], [CMP-NEU]",prior work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6652,"The first to publish the application of VBP to NNs was Werbos in 1982. Please correct. [null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 6653,"p 1: authors write: Almost at the same time, biologically inspired convolutional networks was also introduced as well using VBP LeCun et al. (1989).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 6654,"Here one must cite the person who really invented this biologically inspired convolutional architecture (but did not apply backprop to it): Fukushima (1979). He is cited later, but in a misleading way. Please correct.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 6656,"Not true. Deep Learning was introduced by Ivakhnenko and Lapa in 1965: the first working method for learning in multilayer perceptrons of arbitrary depth. Please correct.(The term deep learning was introduced to ML in 1986 by Dechter for something else.)[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 6658,"Highway networks were published half a year earlier than resnets, and reached many hundreds of layers before resnets. Please correct.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 6659,"General recommendation: Clear rejection for now.[recommendation-NEG], [REC-NEG]",recommendation,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 6663,"* The authors made a really odd choice of notation, which made the equations hard to follow.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6665,"If you talk about outer product structure, show some outer products! * The function f that the authors differentiate is not even defined in the main manuscript![null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6666,"* The low-rank structure they describe only holds for a single sample at a time.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6667,"I don't see how this would be understanding low rank structure of deep networks as the title claims... What is described is basically an implementation trick.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6668,"* Introducing cubic regularization seems interesting.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6669,"However, either some extensive empirical evidence or some some theoretical evidence that this is useful are needed.[evidence-NEU], [EMP-NEU]",evidence,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6670,"The present paper has neither (the empirical evidence shown is very limited).[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6672,"* Strictly speaking Adagrad has not been designed for Deep Learning.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6675,"That sentence seems useless.[sentence-NEG], [CLA-NEG]",sentence,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6676,"* Missing citation: Gradient Descent Efficiently Finds the Cubic-Regularized Non-Convex Newton Step. Yair Carmon, John Duchi. 
[citation-NEG], [SUB-NEG, CMP-NEG]",citation,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 6680,"Authors also propose a paraphrasing based data augmentation method which helps in improving the performance.[method-POS, performance-POS], [EMP-POS]",method,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6681,"Proposed method performs better than existing models in SQuAD dataset while being much faster in training and inference.[Proposed method-POS], [EMP-POS]",Proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6682,"My Comments: The proposed model is convincing and the paper is well written.[proposed model-POS, paper-POS], [CLA-POS]",proposed model,paper,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 6683,"1. Why don't you report your model performance without data augmentation in Table 1?[performance-NEU, Table-NEU], [PNF-NEU, EMP-NEU]",performance,Table,,,,,PNF,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 6684,"Is it because it does not achieve SOTA?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6685,"The proposed data augmentation is a general one and it can be used to improve the performance of other models as well.[performance-POS], [IMP-POS, EMP-POS]",performance,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 6686,"So it does not make sense to compare your model + data augmentation against other models without data augmentation.[models-NEU], [CMP-NEU]",models,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6687,"I think it is ok to have some deterioration in the performance as you have a good speedup when compared to other models.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6688,"2. Can you mention your leaderboard test accuracy in the rebuttal? [test accuracy-NEU], [PNF-NEU]",test accuracy,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 6689,"3. The paper can be significantly strengthened by adding at least one more reading comprehension dataset.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6690,"That will show the generality of the proposed architecture.[proposed architecture-NEU], [SUB-NEU]",proposed architecture,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6691,"Given the sufficient time for rebuttal, I am willing to increase my score if authors report results in an additional dataset in the revision.[results-POS], [SUB-NEU, REC-POS]",results,,,,,,SUB,REC,,,,POS,,,,,,NEU,POS,,, 6692,"4. Are you willing to release your code to reproduce the results?[code-NEU, results-NEU], [EMP-NEU]",code,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6693,"Minor comments: 1. You mention 4X to 9X for inference speedup in abstract and then 4X to 10X speedup in Intro. Please be consistent.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 6694,"2. In the first contribution bullet point, ""that exclusive built upon"" should be ""that is exclusively built upon"". 
[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 6702,"The experimental results on counting are promising.[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6703,"Although counting is important in VQA, the method is solving a very specific problem which cannot be generalized to other representation learning problems.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6704,"Additionally, this method is built on a series of heuristics without sound theoretically justification, and these heuristics cannot be easily adapted to other machine learning applications.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6705,"I thus believe the overall contribution is not sufficient for ICLR [contribution-POS], [APR-NEG]",contribution,,,,,,APR,,,,,POS,,,,,,NEG,,,, 6706,"Pros: 1. Well written paper with clear presentation of the method. [paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 6707,"2. Useful for object counting problem.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6708,"3. Experimental performance is convincing.[Experimental performance-POS], [EMP-POS]",Experimental performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6709,"Cons: 1. The application range of the method is very limited.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6710,"2. The technique is built on a lot of heuristics without theoretical consideration.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6712,"should be able to help with the correct counting the objects with proper construction of the similarity kernel.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6713,"It may also lead to simpler solutions.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6714,"For example, it can be used for deduplication using A (eq 1) as the similarity matrix.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6715,"2. Can the author provide analysis on scalability the proposed method?[analysis-NEU, proposed method-NEU], [SUB-NEU]",analysis,proposed method,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 6716,"When the number of objects is very large, the graph could be huge.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6717,"What are the memory requirements and computational complexity of the proposed method?[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6718,"In the end of section 3, it mentioned that without normalization, the method will not scale to an arbitrary number of objects.[section-NEU, method-NEU], [EMP-NEU]",section,method,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6719,"I think that it will only be a problem for extremely large numbers.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6720,"I wonder whether the proposed method scales.[proposed method-NEU], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 6721,"3. Could the authors provide more insights on why the structured attention (etc) did not significantly improve the result? [result-NEU], [SUB-NEG]",result,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 6722,"Theoritically, it solves the soft attention problems.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6723,"4. 
The definition of output confidence (section 4.3.1) needs more motivation and theoretical justification.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6728,"This approach can significantly reduce the amount of slack in the variational bound due to a too-weak inference network (above and beyond the limitations imposed by the variational family).[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6729,"This source of error is often ignored in the literature,[literature-NEG], [EMP-NEG]",literature,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6730,"although there are some exceptions that may be worth mentioning: * Hjelm et al. (2015; https://arxiv.org/pdf/1511.06382.pdf) observe it for directed belief networks (admittedly a different model class).[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6733,"They remark that the benefits on binarized MNIST are pretty minimal compared to the benefits on sparse, high-dimensional data like text and recommendations; this suggests that the learning-to-learn approach in this paper may shine more if applied to non-image datasets and larger numbers of latent variables.[approach-NEG], [CMP-NEU, EMP-NEG]",approach,,,,,,CMP,EMP,,,,NEG,,,,,,NEU,NEG,,, 6734,"I think this is good and potentially important work, although I do have some questions/concerns about the results in Table 1 (see below).[work-POS, results-NEG], [EMP-POS, CLA-NEG]",work,results,,,,,EMP,CLA,,,,POS,NEG,,,,,POS,NEG,,, 6735,"Some more specific comments: Figure 2: I think this might be clearer if you unrolled a couple of iterations in (a) and (c).[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6736,"(Dempster et al. 1977) is not the best reference for this section; that paper only considers the case where the E and M steps can be done in closed form on the whole dataset.[reference-NEG], [CMP-NEG]",reference,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6737,"A more relevant reference would be Stochastic Variational Inference by Hoffman et al. 
(2013), which proposes using iterative optimization of variational parameters in the inner loop of a stochastic optimization algorithm.[reference-NEU], [CMP-NEU]",reference,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6738,"Section 4: The statement p(z) = N(z; mu_p, Sigma_p) doesn't quite match the formulation of Rezende & Mohamed (2014).[Section-NEG], [EMP-NEG]",Section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6739,"First, in the case where there is only one layer of latent variables, there is almost never any reason to use anything but a normal(0, I) prior, since the first weight matrix of the decoder can reproduce the effects of any mean or covariance.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6740,"Second, in the case where there are two or more layers, the joint distribution of all z need not be Gaussian (or even unimodal) since the means and variances at layer n can depend nonlinearly on the value of z at layer n+1.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6741,"An added bonus of eliminating the mu_p, Sigma_p: you could get rid of one subscript in mu_q and sigma_q, which would reduce notational clutter.[bonus-NEU], [EMP-NEU]",bonus,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6742,"Why not have mu_{q,t+1} depend on sigma_{q,t} as well as mu_{q,t}?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6743,"Table 1: These results are strange in a few ways: * The gap between the standard and iterative inference network seems very small (0.3 nats at most).[Table-NEG, results-NEG], [EMP-NEG]",Table,results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6744,"This is much smaller than the gap in Figure 5(a).[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6745,"* The MNIST results are suspiciously good overall, given that it's ultimately a Gaussian approximation and simple fully connected architecture.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6752,"COMMENTS What happens if the agent finds itself in a state that, while close to a state in the similar trajectory, requires an action that could be completely different?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6753,"Not certain about the claim that standard RL policy learning algorithms make it difficult to assess the difficulty of a problem.[claim-NEU], [EMP-NEU]",claim,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6755,"Actions in RL are by definition stochastic, and this would make it unlikely that the same trajectory can be reproduced exactly.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6759,"On the positive side, the task is a nice example of reasoning about a complex hidden state space, which is an important problem moving forward in deep learning.[task-POS], [EMP-POS]",task,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6760,"On the negative side, from what I can tell, the authors don't seem to have introduced any fundamentally new architectural choices in their neural network, so the contribution seems fairly specific to mastering StarCraft, but at the same time, the authors don't evaluate how much their defogger actually contributes to being able to win StarCraft games.[contribution-NEG], [SUB-NEG]",contribution,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6762,"Granted, being able to infer hidden states is of course an important problem,[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6763,"but the authors appear to mainly have applied existing techniques to a benchmark that has minimal practical significance outside of being able to win StarCraft competitions, meaning that, at least as the paper is currently framed, the critical evaluation metric would be showing 
that a defogger helps to win games.[benchmark-NEG], [CMP-NEG]",benchmark,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6764,"Two ways I could image the contribution being improved are either highlighting and generalizing novel insights gleaned from the process of building the neural network that could help people build defoggers for other domains (and spelling out more explicitly what domains the authors expect their insights to generalize to), or doubling down on the StarCraft application specifically and showing that the defogger helps to win games.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6765,"A minimal version of the second modification would be having a bot that has access to a defogger play against a bot that does not have access to one.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6766,"All that said, as a paper on an application of deep learning, the paper appears to be solid, and if the area chairs are looking for that sort of contribution, then the work seems acceptable.[paper-NEU, contribution-POS], [EMP-NEU, REC-POS, APR-NEU]",paper,contribution,,,,,EMP,REC,APR,,,NEU,POS,,,,,NEU,POS,NEU,, 6767,"Minor points: - Is there a benefit to having a model that jointly predicts unit presence and count, rather than having two separate models (e.g., one that feeds into the next)?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6768,"Could predicting presence or absence separately be a way to encourage sparsity, since absence of a unit is already representable as a count of zero?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6769,"The choice to have one model seems especially peculiar given the authors say they couldn't get one set of weights that works for both their classification and regression tasks[model-NEG], [SUB-NEG]",model,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6770,"- Notation: I believe the space U is never described in the main text.[main text-NEG], [SUB-NEG]",main text,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6771,"What components precisely does an element of U have?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6772,"- The authors say they use gameplay from no later than 11 minutes in the game to avoid the difficulties of increasing variance.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6773,"How long is a typical game? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6774,"Is this a substantial fraction of the time of the games studied? [fraction-NEU], [EMP-NEU]",fraction,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6775,"If it is not, then perhaps the defogger would not help so much at winning.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6776,"- The F1 performance increases are somewhat small.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6777,"The L1 performance gains are bigger, but the authors only compare L1 on true positives.[performance-NEG], [CMP-NEG]",performance,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6778,"This means they might have very bad error on false positives.[error rate-NEG], [EMP-NEG]",error rate,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6779,"(The authors state they are favoring the baseline in this comparison, but it would be nice to have those numbers.) 
- I don't understand when the authors say the deep model has better memory than baselines (which includes a perfect memory baseline)[baselines-NEG], [EMP-NEG]]",baselines,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6782,"The paper has several major shortcomings: * Any paper dealing with MDS and geodesic distances should test the proposed method on the Swiss roll, which has been the most emblematic benchmark since the Isomap paper in 2000.[proposed method-NEG, benchmark-NEG], [EMP-NEG]",proposed method,benchmark,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6783,"Not showing the Swiss roll would possibly let the reader think that the method does not perform well on that example.[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6785,"Please add the Swiss roll example.[example-NEU], [SUB-NEU]",example,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6786,"* Distance preservation appears more and more like a dated DR paradigm.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 6787,"Simple example from 3D to 2D are easily handled but beyond the curse of dimensionality makes things more complicated, in particular due to norm computation.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6788,"Computation accuracy of the geodesic distances in high-dimensional spaces can be poor.[accuracy-NEG], [EMP-NEG]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6789,"This could be discussed and some experiments on very HD data should be reported.[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6791,"There is also an over-emphasis on spectral methods, with the necessity to compute large matrices and to factorize them, probably owing to the popularity of spectral DR metods a decade ago.[methods-NEG], [EMP-NEG]",methods,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6792,"Other methods might be computationally less expensive, like those relying on space-partitioning trees and fast multipole methods (subquadratic complexity).[methods-NEG], [CMP-NEG]",methods,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6793,"Finally, auto-encoders could be mentioned as well; they have the advantage of providing the parametric inverse of the mapping too.[advantage-NEG], [SUB-NEG]",advantage,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6794,"* As a tool for unsupervised learning or exploratory data visualization, DR can hardly benefit from a parametric approach.[tool-NEG, approach-NEG], [EMP-NEG]",tool,approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6795,"The motivation in the end of page 3 seems to be computational only.[page-NEG], [SUB-NEG]",page,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6796,"* Section 3 should be further detailed (step 2 in particular).[Section-NEG, step-NEG], [SUB-NEG]",Section,step,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 6797,"* The experiments are rather limited, with only a few artifcial data sets and hardly any quantitative assessment except for some monitoring of the stress.[experiments-NEG, data sets-NEG], [EMP-NEG]",experiments,data sets,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6798,"The running times are not in favor of the proposed method.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6799,"The data sets sizes are, however, quite limited, with N<10000 for point cloud data and N<2000 for the image manifold.[data sets-NEG], [SUB-NEG]",data sets,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6800,"* The conclusion sounds a bit vague and pompous ('by allowing a limited infusion of axiomatic computation...').[conclusion-NEG], [SUB-NEG]",conclusion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6801,"What is the take-home message of the paper?[paper-NEU], [IMP-NEU]]",paper,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 
6804,"The genetic algorithm is a black-box optimization method, however, the proposed method has nothing to do with black-box optimization.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6806,"Mimicking the mutation by a gradient step is very unreasonable.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6807,"The crossover operator is the policy mixing method employed in game context (e.g., Deep Reinforcement Learning from Self-Play in Imperfect-Information Games, https://arxiv.org/abs/1603.01121 ).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6808,"It is straightforward if two policies are to be mixed. Although the mixing method is more reasonable than the genetic crossover operator, it is strange to compare with that operator in a method far away from the genetic algorithm.[method-NEU], [CMP-NEG]",method,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 6809,"It is highly suggested that the method is called as population-based method as a set of networks is maintained, instead of as genetic method.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6810,"Another drawback, perhaps resulted from the genetic algorithm motivation is that the proposed method has not been well explained.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6811,"The only explanation is that this method mimics the genetic algorithm. However, this explanation reveals nothing about why the method could work well -- a random exploration could also waste a lot of samples with a very high probability.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6812,"The baseline methods result in rewards much lower than those in previous experimental papers.[baseline-NEU], [EMP-NEG]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 6813,"It is problemistic that if the baselines have bad parameters.[baselines-NEG], [EMP-NEG]",baselines,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6814,"1. Benchmarking Deep Reinforcement Learning for Continuous Control 2. 
Deep Reinforcement Learning that Matters[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6818,"The model is interesting, and the results, while preliminary, suggest that the model is capable of making quite interesting generalizations (in particular, it can synthesize images that consist of settings of features that have not been seen before).[model-POS, results-POS], [EMP-POS]",model,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6819,"However, this paper is mercilessly difficult to read.[paper-NEG], [PNF-NEG, CLA-NEG]",paper,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 6820,"The most serious problems are the extensive discussion of the fully unsupervised variant (rather than the semisupervised variant that is evaluated), poor use of examples when describing the model, nonstandard terminology (""concepts"" and ""context"" are extremely vague terms that are not defined precisely) and discussions to vaguely related work that does not clarify but rather obscures what is going on in the paper.[problems-NEG, examples-NEG, discussions-NEG, paper-NEG], [CMP-NEG, PNF-NEG]",problems,examples,discussions,paper,,,CMP,PNF,,,,NEG,NEG,NEG,NEG,,,NEG,NEG,,, 6821,"For the evaluation, since this paper proposes a technique for learning a posterior recognition model, it would be extremely interesting to see if the model is capable of recognizing images appropriately that combine ""contexts"" that were not observed during training.[paper-NEU, model-NEU], [EMP-NEU]",paper,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6822,"The experiments show that the generation component is quite effective,[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 6823,"but this is an obvious missing step.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 6824,"Anyway, some other related work: Lample et al. (2017 NIPS). Fader Networks. I realize this work is more ambitious since it seeks to be a fully generative model including of the contexts/attributes. [related work-NEG], [CMP-NEG]",related work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6828,". In section 3.1, the formula for p(c|x) looks wrong: c_{ijk} are indicator variables.[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6830,"I think it should be c_{ijk} * delta_{ijk} under the summations instead.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 6831,"In the same section, the expression for alpha_{ij} seems to assume that delta_{ijk} dlta_{ij} regardless of k. I.e. there are no production rule scores (transitions).[section-NEU], [PNF-NEG]",section,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 6833,"In the answer selection and NLI experiments, the proposed model does not beat the SOTA, and is only marginally better than unstructured decomposable attention. This is rather disappointing.[proposed model-NEG], [EMP-NEG]",proposed model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6834,"The plots in Fig 2 with the marginals on CKY charts are not very enlightening.[Fig-NEG], [IMP-NEG]",Fig,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 6835,"How do this marginals help solving the NLI task?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6836,"Minor comments: - Sec. 
3: Language is inherently tree structured -- this is debatable...[Sec-NEU], [EMP-NEU]",Sec,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6837,"- page 8: (laf, 2008): bad formatted reference[page-NEG], [PNF-NEG]",page,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6841,"I find the problem of defogging quite interesting, even though it is a bit too Starcraft-specific some findings could perhaps be translated to other partially observed environments.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6843,"My impression about the paper is that even though it touches a very interesting problem, it neither is written well nor it contains much of a novelty in terms of algorithms, methods or network architectures.[paper-POS, problem-POS], [EMP-POS, CLA-NEG, NOV-NEG]",paper,problem,,,,,EMP,CLA,NOV,,,POS,POS,,,,,POS,NEG,NEG,, 6844,"Detailed comments: * Authors should at very least cite (Vinyals et al, 2017) and explain why the environment and the dataset released for Starcraft 2 is less suited than the one provided by Lin et al.[dataset-NEU], [SUB-NEG]",dataset,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 6845,"* Problem statement in section 3.1 should certainly be improved.[Problem statement-NEG, section-NEG], [EMP-NEG]",Problem statement,section,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6846,"Authors introduce rather heavy notation which is then used in a confusing way.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6847,"For example, what is the top index in $s_t^{3-p}$ supposed to mean?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6848,"The notation is not much used after sec. 3.1, for example, figure 1 does not use it.[sec-NEG, figure-NEG], [PNF-NEG]",sec,figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 6849,"* A related issue, is that the definition of metrics is very informal and, again, does not use the already defined notation.[definition-NEG], [PNF-NEG]",definition,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6850,"Including explicit formulas would be very helpful, because, for example, it looks like when reported in table 1 the metrics are spatially averaged, yet I could not find an explicit notion of that.[formulas-NEG, table-NEG], [SUB-NEG, PNF-NEU]",formulas,table,,,,,SUB,PNF,,,,NEG,NEG,,,,,NEG,NEU,,, 6852,"However, to me it seems that even in 15 game steps the uncertainty over the hidden state is quite high and thus any deterministic model has a very limited potential in prediction it.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6853,"At least the concept of stochastic predictions should be discussed * The rule-based baselines are not described in detail.[baselines-NEG], [SUB-NEG]",baselines,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6854,"What does ""using game rules to infer the existence of unit types"" mean?[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 6855,"* Another detail which I found missing is whether authors use just a screen, a mini-map or both.[detail-NEG], [SUB-NEG]",detail,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6856,"In the game of Starcraft, only screen contains information about unit-types, but it's field of view is limited.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6857,"Hence, it's unclear to me whether a model should infer hidden information based on just a single screen + minimap observation (or a history of them) or due to how the dataset is constructed, all units are observed without spatial limitations of the screen.[model-NEG], [EMP-NEG]]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6862,"Pros: -The results on StarCraft are encouraging and present state of the art performance if reproducible.[results-POS, 
performance-POS], [EMP-POS]",results,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 6863,"Cons: -The experimental evaluation is not very thorough:[experimental evaluation-NEG], [SUB-NEG]",experimental evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6864,"No uncertainty of the mean is stated for any of the results.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6865,"100 evaluation runs is very low.[evaluation-NEG], [EMP-NEG]",evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6866,"It is furthermore not clear whether training was carried out on multiple seeds or whether these are individual runs.[training-NEG], [EMP-NEG]",training,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6867,"-BiCNet and CommNet are both aiming to learn communication protocols which allow decentralized execution.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6868,"Thus they represent weak baselines for a fully centralized method such as MS-MARL.[baselines-NEG], [CMP-NEG]",baselines,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 6869,"The only fully centralized baseline in the paper is GMEZO; however, the results stated are much lower than what is reported in the original paper (e.g. 63% vs 79% for M15v16). [results-NEU], [CMP-NEG]",results,,,,,,CMP,,,,,NEU,,,,,,NEG,,,, 6870,"The paper is missing further centralized baselines.[baselines-NEG], [SUB-NEG]",baselines,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 6871,"-It is unclear to what extent the novelty of the paper (specific architecture choices) is required.[novelty-NEG, paper-NEG], [NOV-NEG]",novelty,paper,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 6872,"For example, the gating mechanism for producing the action logits is rather complex and seems to only help in a subset of settings (if at all).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6874,"What does this mean?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6875,"Figure 1: This figure is very helpful, however the colour for M->S is wrong in the legend.[Figure-NEG], [CLA-NEG, PNF-NEG]",Figure,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 6876,"Table 2: GMEZO win rates are low compared to the original publication.[Table-NEG], [EMP-NEG]",Table,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6877,"How many independent seeds were used for training?[training-NEU], [EMP-NEU]",training,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6878,"What are the confidence intervals?
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6879,"How many runs for evaluation?[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6880,"Figure 4: B) What does it mean to feed two vectors into a Tanh?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6881,"This figure currently very unclear.[figure-NEG], [PNF-NEG]",figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6882,"What was the rational for choosing a vanilla RNN for the slave modules?[rational-NEU], [EMP-NEU]",rational,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6883,"Figure 5: a) What was the rational for stopping training of CommNet after 100 epochs?[Figure-NEU, rational-NEU], [EMP-NEU]",Figure,rational,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6884,"The plot looks like CommNet is still improving.[plot-NEU], [EMP-NEU]",plot,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6885,"c) This plot is disconcerting.[plot-NEG], [EMP-NEG]",plot,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6886,"Training in this plot is very unstable.[Training-NEG], [EMP-NEG]",Training,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6887,"The final performance of the method ('ours') does not match what is stated in 'Table 2'.[performance-NEG, Table-NEG], [EMP-NEG]",performance,Table,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6888,"I wonder if this is due to the very small batch size used (a small batch size of 4 ). [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6891,"It has a lemma which claims that the minimax and the maximin solutions provide the best worst-case defense and attack models, respectively, without proof, although that statement is supported experimentally.[lemma-NEU], [EMP-NEU]",lemma,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6892,"+ Prior work seem adequately cited and compared to, but I am not really knowledgeable in the adversarial attacks subdomain.[Prior work-POS], [CMP-POS]",Prior work,,,,,,CMP,,,,,POS,,,,,,POS,,,, 6893,"- The experiments are on small/limited datasets (MNIST and CIFAR-10). Because of this, confidence intervals (over different initializations, for instance) would be a nice addition to Table 5.[experiments-NEU, datasets-NEG], [SUB-NEG, EMP-NEG]",experiments,datasets,,,,,SUB,EMP,,,,NEU,NEG,,,,,NEG,NEG,,, 6894,"- There is no exact (alternating optimization could be considered one) evaluation of the impact of the sensitivy loss vs. the minimax/maximin algorithm.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6895,"- The paper is hard to follow at times (and probably that dealing with the point above would help in this regard), e.g. Lemma 1 and experimental analysis.[paper-NEG], [CLA-NEG]",paper,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 6896,"- It is unclear (from Figures 3 and 7) that alternative optimization and minimax converged fully, and/or that the sets of hyperparameters were optimal.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6897,"+ This paper presents a game formulation of learning-based attacks and defense in the context of adversarial examples for neural networks, and empirical findings support its claims.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6898,"Nitpicks: the gradient descent -> gradient descent or the gradient descent algorithm seeming -> seemingly arbitrary flexible -> arbitrarily flexible can name gradient descent that maximizes: gradient ascent. 
The mini- max or the maximin solution is defined -> are defined is the follow -> is the follower [null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 6903,"The paper is generally pretty well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6905,"Where the Hazan paper concerns itself with the system id portion of the control problem, this paper seems to be the controls extension of that same approach.[paper-NEU], [IMP-NEU]",paper,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 6908,"The most novel contribution of this ICLR paper seems to be equation (4), where the authors set up an optimization problem to solve for optimal inputs; much of that optimization set-up relies on Hazan's work, though.[contribution-POS, equation-POS], [NOV-POS]",contribution,equation,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 6909,"However, the authors do prove their work, which increases the novelty.[work-POS], [NOV-POS]",work,,,,,,NOV,,,,,POS,,,,,,POS,,,, 6910,"The novelty would be improved with clearer differentiation from the Hazan 2017 paper.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 6911,"My biggest concerns that dampen my enthusiasm are some assumptions that may not be realistic in most controls settings: - First, the most concerning assumption is that of a symmetric LDS matrix A (and Lyapunov stability).[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6913,"From a couple of quick searches it seems like there are a few physics / chemistry applications where a symmetric A makes sense, but the authors don't do a good enough job setting up the context here to make the results compelling.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6914,"Without that context it's hard to tell how broadly useful these results are.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6915,"In Hazan's paper they mention that the system id portion, at least, seems to work with non-symmetric, and even non-linear dynamical systems (bottom of page 3, Hazan 2017).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 6916,"Is there any way to extend the current results to non-symmetric systems?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6917,"- Second, it appears that the proposed methods may rely on running the dynamical system several times before attempting to control it.[proposed methods-NEU], [EMP-NEU]",proposed methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6919,"If so this seems like it may be a significant constraint that would shrink the application space and impact even further. 
[constraint-NEU], [EMP-NEU]",constraint,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6924,"The method is interesting, in particular if benefit #2 holds experimentally.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6925,"Unfortunately, there are too many gaps in the experimental evaluation of this paper to warrant this claim right now.[experimental evaluation-NEG], [EMP-NEG]",experimental evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6926,"Major: 1) Arguably, point 1 is not a particularly interesting setting.[point-NEG], [EMP-NEG]",point,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6932,"By the way, what is the sample size of the current set of synthetic experiments?[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6934,"This is misleading, because the search of the exponential space of interactions happens during training by moving around in the latent space identified by the intermediate layers.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6935,"It could perhaps be rephrased as ""efficiently"".[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6936,"3) It's not clear from the text whether ANOVA and HierLasso are only looking for second order interactions.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 6937,"If so, why not include a lasso with n-order interactions as a baseline?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 6938,"4) Why aren't the baselines evaluated on the real datasets, and why aren't heatmaps similar to figure 5 produced?[baselines-NEU, datasets-NEU], [EMP-NEU]",baselines,datasets,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6939,"5) Is it possible to include the ROC curves corresponding to table 2?[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6940,"Minor: 1) Have the authors thought about statistical testing in this framework?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6941,"The proposed method only gives a ranking of possible interactions, but does not give p-values or similar (e.g. FDRs). 2) 12 pages of text.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6942,"Text is often repetitive and can be shortened without loss of understanding or reproducibility.
[Text-NEG], [PNF-NEG]",Text,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 6945,"The advantage is the possibility of more parallel decoding which can result in a significant speed-up (up to a factor of 16 in the experiments described).[advantage-POS], [EMP-POS]",advantage,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6946,"The disadvantage is that it is more complicated than a standard beam search as auto-regressive teacher models are needed for training and the results do not reach (yet) the same BLEU scores as standard beam search.[model-NEG, results-NEG], [EMP-NEG]",model,results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 6948,"It would have been good to see a speed-accuracy curve which plots decoding speed for different sized models versus the achieved BLEU score on one of the standard benchmarks (like WMT14 en-fr or en-de) to understand better the pros and cons of the proposed approach and to be able to compare models at the same speed or the same BLEU scores.[curve-NEG, models-NEU, proposed approach-NEU], [SUB-NEG]",curve,models,proposed approach,,,,SUB,,,,,NEG,NEU,NEU,,,,NEG,,,, 6949,"Table 1 gives a hint of that but it is not clear whether much smaller models with standard beam search are possibly as good and fast as NAT -- losing 2-5 BLEU points on WMT14 is significant.[Table-NEG, models-NEG], [CLA-NEG]",Table,models,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 6950,"While the Ro->En results are good, this particular language pair has not been used much by others; it would have been more interesting to stay with a single well-used language pair and benchmark and analyze why WMT14 en->de and de->en are not improving more.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6951,"Finally it would have been good to address total computation in the comparison as well -- it seems that while total decoding time is smaller, total computation for NAT + NPD is actually higher depending on the choice of s.[null], [EMP-NEG]]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6955,"They next observe that since FGSM is given by a simple perturbation of the sample point by the gradient of the loss, the fixed point of the above dynamics can be optimized for directly using gradient descent.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6956,"They call this approach Sens FGSM, and evaluate it empirically against the various iterates of the above approach.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6957,"They then generalize this approach to an arbitrary attacker strategy given by some parameter vector (e.g.
a neural net for generating adversarial samples).[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6958,"In this case, the attacker and defender are playing a minimax game, and the authors propose finding the minimax (or maximin) parameters using an algorithm which alternates between maximization and minimization gradient steps.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6959,"They conclude with empirical observations about the performance of this algorithm.[observations-NEU, algorithm-NEU], [EMP-NEU]",observations,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6960,"The paper is well-written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 6961,"However, I found the empirical results to be a little underwhelming.[empirical results-NEG], [EMP-NEG]",empirical results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6962,"Sens-FGSM outperforms the adversarial training defenses tuned for the ""wrong"" iteration, but it does not appear to perform particularly well with error rates well above 20%.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6964,"Furthermore, what is the significance of FGSM-curr (FGSM-81) for Sens-FGSM? [significance-NEU], [IMP-NEU]",significance,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 6965,"It is my understanding that Sens-FGSM is not trained to a particular iteration of the ""cat-and-mouse"" game.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6966,"Why, then, does Sens-FGSM provide a consistently better defense against FGSM-81?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6967,"With regards to the second part of the paper, using gradient methods to solve a minimax problem is not especially novel (i.e. Goodfellow et al.),;[methods-NEG], [NOV-NEG]",methods,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 6968,"thus I would liked to see more thorough experiments here as well.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 6969,"For example, it's unlikely that the defender would ever know the attack network utilized by an attacker.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 6970,"How robust is the defense against samples generated by a different attack network?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6971,"The authors seem to address this in section 5 by stating that the minimax solution is not meaningful for other network classes. However, this is a bit unsatisfying.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6972,"Any defense can be *evaluated* against samples generated by any attacker strategy.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6973,"Is it the case that the defenses fall flat against samples generated by different architectures? [architectures-NEU], [EMP-NEU]",architectures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6974,"Minor Comments: Section 3.1, First Line. 
""f(ul(g(x),y))"" appears to be a mistake.[Section-NEG, Line-NEG], [CLA-NEG, PNF-NEG]",Section,Line,,,,,CLA,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 6978,"The discussion on the deficiencies of the naive LP approach is mostly well done.[discussion-POS], [EMP-POS]",discussion,,,,,,EMP,,,,,POS,,,,,,POS,,,, 6980,"They perform an empirical study in the Inverted Double Pendulum domain to conclude that their extended algorithm outperforms the naive linear programming approach without the improvements.[empirical study-NEU, algorithm-NEU], [EMP-NEU]",empirical study,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 6981,"Lastly, there are empirical experiments done to conclude the superior performance of Dual-AC in contrast to other actor-critic algorithms.[empirical experiments-NEU], [CMP-NEU]",empirical experiments,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 6982,"Overall, this paper could be a significant algorithmic contribution, with the caveat for some clarifications on the theory and experiments.[algorithmic contribution-POS, clarifications-NEU], [IMP-POS, EMP-NEU]",algorithmic contribution,clarifications,,,,,IMP,EMP,,,,POS,NEU,,,,,POS,NEU,,, 6983,"Given these clarifications in an author response, I would be willing to increase the score.[score-NEU], [REC-NEU]",score,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 6984,"For the theory, there are a few steps that need clarification and further clarification on novelty.[novelty-NEU], [NOV-NEU]",novelty,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 6985,"For novelty, it is unclear if Theorem 2 and Theorem 3 are both being stated as novel results.[Theorem-NEU, results-NEU], [NOV-NEU]",Theorem,results,,,,,NOV,,,,,NEU,NEU,,,,,NEU,,,, 6986,"It looks like Theorem 2 has already been shown in Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear Running Time"".[Theorem-NEU], [NOV-NEU]",Theorem,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 6988,"However, as we discussed in Section 3, their algorithm is restricted to tabular parametrization"".[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6989,"Is you Theorem 2 somehow an extension? Is Theorem 3 completely new?[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6990,"This is particularly called into question due to the lack of assumptions about the function class for value functions.[assumptions-NEU], [EMP-NEU]",assumptions,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6991,"It seems like the value function is required to be able to represent the true value function, which can be almost as restrictive as requiring tabular parameterizations (which can represent the true value function).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6993,"Further, eta_v must be chosen to ensure that it does not affect (constrain) the optimal solution, which implies it might need to be very small.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 6995,"There is also one step in the theorem that I cannot verify.[theorem-NEU], [EMP-NEU]",theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6996,"On Page 18, how is the squared removed for difference between U and Upi?[Page-NEU], [EMP-NEU]",Page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 6997,"The transition from the second line of the proof to the third line is not clear.[proof-NEG], [EMP-NEG]",proof,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 6998,"It would also be good to more clearly state on page 14 how you get the first inequality, for || V^* ||_{2,mu}^2. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7000,"1. 
It would have been better to also show the performance graphs with and without the improvements for multiple domains.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 7001,"2. The central contribution is extending the single step LP to a multi-step formulation.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7002,"It would be beneficial to empirically demonstrate how increasing k (the multi-step parameter) affects the performance gains.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7003,"3. Increasing k also comes at a computational cost.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7004,"I would like to see some discussions on this and how long dual-AC takes to converge in comparison to the other algorithms tested (PPO and TRPO).[discussions-NEU], [CMP-NEU, SUB-NEU]",discussions,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 7005,"4. The authors concluded the presence of local convexity based on hessian inspection due to the use of path regularization.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7006,"It was also mentioned that increasing the regularization parameter size increases the convergence rate.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7007,"Empirically, how does changing the regularization parameter affect the performance in terms of reward maximization?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7008,"In the experimental section of the appendix, it is mentioned that multiple regularization settings were tried but their performance is not mentioned.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7009,"Also, for the regularization parameters that were tried, based on hessian inspection, did they all result in local convexity?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7010,"A bit more discussion on these choices would be helpful.[discussion-NEU], [EMP-NEU, SUB-NEU]",discussion,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 7011,"Minor comments: 1. Page 2: In equation 5, there should not be a 'ds' in the dual variable constraint.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 7015,"Since most theory around divergence minimization is based on the unmodified loss function for generator G, the experiments carried out in the submission might yield somewhat surprising results compared the theory.[experiments-NEU, results-NEU], [EMP-NEU]",experiments,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7016,"If I may summarize the key takeaways from Sections 5.4 and 6, they are: - GAN training remains difficult and good results are not guaranteed (2nd bullet point)[Sections-NEU, results-NEU], [EMP-NEU]",Sections,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7017,"; - Gradient penalties work in all settings, but why is not completely clear;[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7020,"The submission's (counter-)claims are served by example (cf. Figure 2, or Figure 3 description, last sentence), and mostly relate to statements made in the WGAN paper (Arjovsky et al., 2017).[statements-NEU], [CMP-NEU]",statements,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7021,"As a purely empirical study, it poses more new and open questions on GAN optimization than it is able to answer; providing theoretical answers is deferred to future studies.[empirical study-NEU], [IMP-NEU]",empirical study,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 7022,"This is not necessarily a bad thing, since the extensive experiments (both toy and real) are well-designed, convincing and comprehensible[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7023,". 
Novel combinations of GAN formulations (non-saturating with gradient penalties) are evaluated to disentangle the effects of formulation changes.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7024,"Overall, this work is providing useful experimental insights, clearly motivating further study. [experimental insights-POS], [IMP-POS]",experimental insights,,,,,,IMP,,,,,POS,,,,,,POS,,,, 7027,"The idea proposed (learning a selection strategy for choosing a subset of synthesis examples) is good.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7028,"For the most part, the paper is clearly written, with each design decision justified and rigorously specified.[design decision-POS], [CLA-POS]",design decision,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7029,"The experiments show that the proposed algorithm allows a synthesizer to do a better job of reliably finding a solution in a short amount of time (though the effect is somewhat small).[experiments-POS, algorithm-POS], [EMP-POS]",experiments,algorithm,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7030,"I do have some serious questions/concerns about this method:[method-NEG], [EMP-NEU]",method,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7033,"How large can this be expected to scale (a few thousand)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7034,"The paper did not specify how often the neural net must be trained.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7035,"Must it be trained for each new synthesis problem?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7036,"If so, the training time becomes extremely important (and should be included in the ""NN Phase"" time measurements in Figure 4).[Figure-NEU], [EMP-NEU, SUB-NEU]",Figure,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 7037,"If this takes longer than synthesis, it defeats the purpose of using this method in the first place.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7038,"Alternatively, can the network be trained once for a domain, and then used for every synthesis problem in that domain (i.e. in your experiments, training one net for all possible binary-image-drawing problems)?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7039,"If so, the training time amortizes to some extent; can you quantify this?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7040,"These are all points that require discussion which is currently missing from the paper.[discussion-NEG], [SUB-NEG]",discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7041,"I also think that this method really ought to be evaluated on some other domain(s) in addition to binary image drawing.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7042,"The paper is not an application paper about inferring drawing programs from images; rather, it proposes a general-purpose method for program synthesis example selection.[paper-NEU], [APR-NEU]",paper,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 7043,"As such, it ought to be evaluated on other types of problems to demonstrate this generality.[problems-NEU], [EMP-NEU]",problems,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7044,"Nothing about the proposed method (e.g.
the neural net setup) is specific to images, so this seems quite readily doable.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7045,"Overall: I like the idea this paper proposes,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7046,"but I have some misgivings about accepting it in its current state.[null], [REC-NEG]",null,,,,,,REC,,,,,,,,,,,NEG,,,, 7050,"This was totally unclear until fairly deep into Section 3.[Section-NEG], [CLA-NEG]",Section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7053,"On a related note, I don't like the term ""Selection Probability"" for the quantity it describes.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7055,"' That happens to be (as you've proven) a good measure by which to select examples for the synthesizer.[examples-NEU], [EMP-NEU]",examples,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7056,"The first property (correctness) is a more essential property of this quantity, rather than the second (appropriateness as an example selection measure).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7057,"Page 5: ""Figure blah shows our neural network architecture"" - missing reference to Figure 3.[reference-NEG, Figure-NEG], [PNF-NEG]",reference,Figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 7058,"Page 5: ""note that we do not suggest a specific neural network architecture for the middle layers, one should select whichever architecture that is appropriate for the domain at hand"" - such as?[Page-NEU, architecture-NEU], [EMP-NEU, SUB-NEG]",Page,architecture,,,,,EMP,SUB,,,,NEU,NEU,,,,,NEU,NEG,,, 7059,"What are some architectures that might be appropriate for different domains?[architectures-NEU], [EMP-NEU]",architectures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7060,"What architecture did you use in your experiments?[architecture-NEU, experiments-NEU], [EMP-NEU]",architecture,experiments,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7061,"The description of the neural net in Section 3.3 (bottom of page 5) is hard to follow on first read-through.[description-NEG, Section-NEG], [CLA-NEG, SUB-NEG]",description,Section,,,,,CLA,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 7062,"It would be better to lead with some high-level intuition about what the network is supposed to do before diving into the details of how it's set up.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7063,"The first sentence on page 6 gives this intuition; this should come much earlier.[page-NEU], [PNF-NEG]",page,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 7064,"Page 5: ""a feed-forward auto-encoder with N input neurons..."" Previously, N was defined as the size of the input domain.[Page-NEU], [EMP-NEU]",Page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7065,"Does this mean that the network can only be trained when a complete set of input-output examples is available (i.e. outputs for all possible inputs in the domain)?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7066,"Or is it fine to have an incomplete example set? 
[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7071,"All proposed approaches beat the uniform sampling baselines and the more sophisticated approaches do better in the scenarios with more tasks (one multitask problem had 21 tasks).[proposed approaches-POS], [CMP-POS, EMP-POS]",proposed approaches,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 7072,"Pros: - very promising results with an interesting active learning approach to multitask RL[results-POS, approach-POS], [EMP-POS]",results,approach,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7073,"- a number of approaches developed for the basic idea[approaches-POS], [CMP-POS, EMP-POS]",approaches,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 7074,"- a variety of experiments, on challenging multiple task problems (up to 21 tasks/games)[experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7075,"- paper is overall well written/clear [paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7076,"Cons: - Comparison only to a very basic baseline (i.e. uniform sampling)[baseline-NEG], [CMP-NEG]",baseline,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7077,"Couldn't comparisons be made, in some way, to other multitask work?[comparisons-NEG, work-NEG], [CMP-NEG]",comparisons,work,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 7078,"Additional comments: - The assumption of the availability of a target score goes against the motivation that one need not learn individual networks[assumption-NEG], [CMP-NEG]",assumption,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7079,".. authors say instead one can use 'published' scores, but that only assumes someone else has done the work (and furthermore, published it!).[scores-NEG], [EMP-NEG]",scores,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7080,"The authors do have a section on eliminating the need by doubling an estimate for each task) which makes this work more acceptable (shown for 6 tasks or MT1, compared to baseline uniform sampling).[section-POS, work-POS], [EMP-POS]",section,work,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7081,"Clearly there is more to be done here for a future direction (could be mentioned in future work section).[future work-NEG, section-NEG], [IMP-NEG]",future work,section,,,,,IMP,,,,,NEG,NEG,,,,,NEG,,,, 7082,"- The averaging metrics (geometric, harmonic vs arithmetic, whether or not to clip max score achieved) are somewhat interesting, but in the main paper, I think they are only used in section 6 (seems like a waste of space).[metrics-POS, paper-POS, section-POS], [EMP-POS]",metrics,paper,section,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 7083,"Consider moving some of the results, on showing drawbacks of arithmetic mean with no clipping (table 5 in appendix E), from the appendix to the main paper.[results-NEG, table-NEG, appendix-NEG, paper-NEG], [EMP-NEG, PNF-NEG]",results,table,appendix,paper,,,EMP,PNF,,,,NEG,NEG,NEG,NEG,,,NEG,NEG,,, 7085,"Sections 7.2 and 7.3 on specificity/generality of features were interesting.[Sections-POS], [EMP-POS]",Sections,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7086,"--> Can the authors show that a trained network (via their multitask approached) learns significantly faster on a brand new game (that's similar to games already trained on), compared to learning from scratch?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7087,"--> How does the performance improve/degrade (or the variance), on the same set of tasks, if the different multitask instances (MT_i) formed a supersets hierarchy, ie if MT_2 contained all the tasks/games in MT_1, could training on MT_2 help average performance on the games in MT_1 ?[performance-NEU], 
[EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7088,"Could go either way since the network has to allocate resources to learn other games too.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7089,"But is there a pattern?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7090,"- 'Figure 7.2' in section 7.2 refers to Figure 5.[Figure-NEU, section-NEU], [PNF-NEU]",Figure,section,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 7091,"- Can you motivate/discuss better why not providing the identity of a game as an input is an advantage?[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7092,"Why not explore both possibilities?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7095,"This work to my knowledge is the first to use a DSL closer to a full language.[work-POS], [NOV-POS]",work,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7096,"The paper is very clear and easy to follow.[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 7097,"One way it could be improved is if it were compared with another system.[another system-NEG], [CMP-NEG]",another system,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7098,"The results showing that guided search is a potent combination whose contribution would be made only stronger if compared with existing work.[results-NEU, existing work-NEU], [EMP-NEU, CMP-NEU]]",results,existing work,,,,,EMP,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 7100,"The experiments compare favorably against PPO and A2C baselines on a variety of MuJoCo tasks, although I would appreciate a wall-time comparison as well, as training the crossover network is presumably time-consuming.[experiments-POS], [CMP-POS]",experiments,,,,,,CMP,,,,,POS,,,,,,POS,,,, 7101,"It seems that for much of the paper, the authors could dispense with the genetic terminology altogether - and I mean that as a compliment.[terminology-NEU], [CLA-NEU]",terminology,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 7102,"There are few if any valuable ideas in the field of evolutionary computing and I am glad to see the authors use sensible gradient-based learning for GPO, even if it makes it depart from what many in the field would consider evolutionary computing.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7103,"Another point on terminology that is important to emphasize - the method for training the crossover network by direct supervised learning from expert trajectories is technically not imitation learning but behavioral cloning.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7104,"I would perhaps even call this a distillation network rather than a crossover network.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7105,"In many robotics tasks behavioral cloning is known for overfitting to expert trajectories, but that may not be a problem in this setting as expert trajectories can be generated in unlimited quantities.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7108,"The paper is easy to read,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7109,"although it does not seem to have a main focus (exponential gaps vs. optimisation vs. 
universal approximation).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7110,"The paper makes a nice contribution to the details of deep neural networks with ReLUs,[paper-POS, contribution-POS], [EMP-POS]",paper,contribution,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7111,"although I find the contributed results slightly overstated.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7112,"The 1d results are not difficult to derive from previous results.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7114,"The optimisation method appears close to brute force and is limited to 2 layers.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7115,"Theorem 3.1 appears to be easily deduced from the results from Montufar, Pascanu, Cho, Bengio, 2014.[Theorem-NEU, results-NEU], [EMP-NEU]",Theorem,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7116,"For 1d inputs, each layer will multiply the number of regions at most by the number of units in the layer, leading to the condition w' geq w^{k/k'}.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7117,"Theorem 3.2 is simply giving a parametrization of the functions, removing symmetries of the units in the layers.[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7118,"In the list at the top of page 5. Note that, the function classes might be characterized in terms of countable properties, such as the number of linear regions as discussed in MPCB, but still they build a continuum of functions.[page-NEU], [EMP-NEU]",page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7119,"Similarly, in page 5 ``Moreover, for fixed n,k,s, our functions are smoothly parameterized''.[page-NEU], [EMP-NEU]",page,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7121,"In the last paragraph of Section 3 ``m w^k-1'' This is a very big first layer.[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7122,"This also seems to subsume the first condition, sgeq w^k-1 +w(k-1) for the network discussed in Theorem 3.9.[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7128,"It would be interesting to consider also a single construction, instead of the composition of two constructions..[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7129,"Theorem 3.9 (ii) it would be nice to have a construction where the size becomes 2m + wk when k' k..[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7130,"Section 4, while interesting, appears to be somewhat disconnected from the rest of the paper..[Section-NEG], [EMP-NEG]",Section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7131,"In Theorem 2.3. explain why the two layer case is limited to n 1..[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7132,"At some point in the first 4 pages it would be good to explain what is meant by ``hard'' functions (e.g. functions that are hard to represent, as opposed to step functions, etc.) .[pages-NEU], [EMP-NEU]",pages,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7142,". 
Comments ------------- - The overall idea of the paper, learning how to optimize, is very seducing and the experimental evaluations (comparison to normal optimizers and other meta-learners) tend to conclude the proposed method is able to learn the behavior of an optimizer and to generalize to unseen problems.[idea-POS, experimental evaluations-NEU, proposed method-NEU], [EMP-POS]",idea,experimental evaluations,proposed method,,,,EMP,,,,,POS,NEU,NEU,,,,POS,,,, 7143,"- Materials of the paper sometimes appear tedious to follow, mainly in sub-sections 3.4 and 3.5.[subsections-NEG], [CLA-NEG]",subsections,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7144,"It would be desirable to sum up the overall procedure in an algorithm.[algorithm-NEG], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7145,"Page 5, the term $omega$ intervening in the definition of the policy $pi$ is not defined.[Page-NEG], [EMP-NEU]",Page,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7146,"- The definitions of the statistics and features (state and observation features) look highly elaborated.[definitions-POS], [EMP-NEU]",definitions,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 7147,"Can authors provide more intuition on these precise definitions?[intuition-NEU], [EMP-NEU]",intuition,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7148,"How do they impact for instance changing the time range in the definition of $Phi$) in the performance of the meta-learner?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7149,"- Figures 3 and 4 illustrate some oscillations of the proposed approach.[Figures-NEU, proposed approach-NEU], [PNF-NEU]",Figures,proposed approach,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 7150,"Which guarantees do we have that the algorithm will not diverge as L2LBGDBGD does?[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7151,"How long should be the training to ensure a good and stable convergence of the method?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7152,"- An interesting experience to be conducted and shown is to train the meta-learner on another dataset (CIFAR for example) and to evaluate its generalization ability on the other sets to emphasize the effectiveness of the method.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7165,"I find it nice how they benefited from context (left context and right context) by solving a fill-in-the-blank task at training time and translating this into text generation at test time.[task-NEU], [EMP-POS]",task,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 7166,"--The experiments were well carried through and very thorough.[experiments-POS], [EMP-POS, SUB-POS]",experiments,,,,,,EMP,SUB,,,,POS,,,,,,POS,POS,,, 7167,"--I second the decision of passing the masked sequence to the generator's encoder instead of the unmasked sequence.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7168,"I first thought that performance would be better when the generator's encoder uses the unmasked sequence.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7169,"Passing the masked sequence is the right thing to do to avoid the mismatch between training time and test time.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7170,"Cons and negative remarks: --There is a lot of pre-training required for the proposed architecture.[proposed architecture-NEU], [EMP-NEU]",proposed architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7171,"There is too much pre-training. I find this less elegant. 
[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7172,"--There were some unanswered questions: (1) was pre-training done for the baseline as well?[baseline-NEU], [EMP-NEU]",baseline,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7175,"(3) it was not made very clear whether the discriminator also conditions on the unmasked sequence. [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7176,"It needs to but that was not explicit in the paper.[null], [CLA-NEU]",null,,,,,,CLA,,,,,,,,,,,NEU,,,, 7177,"--Very minor: although it is similar to the generator, it would have been nice to see the architecture of the discriminator with example input and output as well.[architecture-NEU], [EMP-NEU]",architecture,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7178,"Suggestion: for the IMDB dataset, it would be interesting to see if you generate better sentences by conditioning on the sentiment as well.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7187,"This paper defines and examines an interesting cooperative problem: Assignment and control of agents to move to certain squares under ""physical"" constraints.[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7188,"The authors propose a centralized solution to the problem by adapting the Deep Q-learning Network model.[solution-POS], [EMP-POS]",solution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7189,"I do not know whether using a centralized network where each agent has a window of observations is a novel algorithm.[novel-NEG], [NOV-NEG]",novel,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7190,"The manuscript itself makes it difficult to assess (more on this later).[manuscript-NEG], [EMP-NEG]",manuscript,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7191,"If it were novel, it would be an incremental development.[novel-NEU], [NOV-NEU]",novel,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 7192,"They assess their solution quantitatively, demonstrating their model performs better than first, a simple heuristic model (I believe de-centralized Dijkstra's for each agent, but there is not enough description in the manuscript to know for sure), and then, two other baselines that I could not figure out from the manuscript (I believe it was Dijkstra's with two added rules for when to recharge).[solution-NEU, description-NEG, manuscript-NEG], [CMP-NEG, EMP-NEG]",solution,description,manuscript,,,,CMP,EMP,,,,NEU,NEG,NEG,,,,NEG,NEG,,, 7194,"I do not believe it should be accepted for the following reasons.[accepted-NEG], [REC-NEG]",accepted,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 7195,"First, the manuscript is poorly written, to the point where it has inhibited my ability to assess it.[manuscript-NEG], [CLA-NEG]",manuscript,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7196,"Second, given its contribution, the manuscript is better suited for a conference specific to multi-agent decision-making.[manuscript-NEG], [APR-NEG]",manuscript,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 7197,"There are a few reasons for this. 
1) I was not convinced that deep Q-learning was necessary to solve this problem.[problem-NEG], [EMP-NEG]",problem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7198,"The manuscript would be much stronger if the authors compared their method to a more sophisticated baseline, for example having each agent be a simple Q-learner with no centralization or ""deepness"".[manuscript-NEG, baseline-NEG], [CMP-NEG]",manuscript,baseline,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 7199,"This would solve another issue, which is the weakness of their baseline measure.[issue-NEG], [CMP-NEG]",issue,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7200,"There are many multi-agent techniques that can be applied to the problem that would have served as a better baseline.[techniques-NEG, baseline-NEG], [CMP-NEG]",techniques,baseline,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 7201,"2) Although the problem itself is interesting,[problem-POS], [EMP-POS]",problem,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7202,"it is a bit too applied and specific to the particular task they studied than is appropriate for a conference with as broad interests as ICLR.[appropriate-NEG], [APR-NEG]",appropriate,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 7203,"It also is a bit simplistic (I had expected the agents to at least need to learn to move the customer to some square rather than get reward and move to the next job from just getting to the customer's square).[It-NEG], [EMP-NEG]",It,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7204,"Can you apply this method to other multi-agent problems?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7205,"How would it compare to other methods on those problems?[other methods-NEU], [CMP-NEU]",other methods,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7206,"I encourage the authors to develop the problem and method further, as well as the analysis and evaluation.[problem-NEU, method-NEU, analysis-NEU, evaluation-NEU], [SUB-NEU]]",problem,method,analysis,evaluation,,,SUB,,,,,NEU,NEU,NEU,NEU,,,NEU,,,, 7210,"Combined with variance preserving initialization scheme, authors empirically observe that the bipolar ReLU allows to better preserve the mean and variance of the activations through training compared to regular ReLU for a deep stacked RNN.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7212,"They show that bipolar activations allow to train deeper RNN (up to some limit) and leads to better generalization performances compared to the ReLU /ELU activation functions.[performances-POS], [CMP-POS, EMP-POS]",performances,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 7215,"What is the difference between the left and right plots?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7216,"- In Table 1, we observe that ReLU-RNN (and BELU-RNN for very deep stacked RNN) leads to worst validation performances.[Table-NEU, performances-NEG], [EMP-NEU]",Table,performances,,,,,EMP,,,,,NEU,NEG,,,,,NEU,,,, 7217,"It would be nice to report the training loss to see if this is an optimization or a generalization problem.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7218,"- How does bipolar activation compare to model train with BN on CIFAR10?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7219,"- Did you try bipolar activation function for gated recurrent neural networks for LSTM or GRU?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7222,"Do you know why the trend is not consistent across datasets?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7223,"-Clarity/Quality The paper is well written and pleasant to read.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7224,"- Originality: Self-normalizing 
function have been explored also in scaled ELU, however the application of self-normalizing function to RNN seems novel.[Originality-POS], [NOV-POS]",Originality,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7225,"- Significance: Activation function is still a very active research topic and self-normalizing function could potentially be impactful for RNN given that the normalization approaches (batch norm, layer norm) add a significant computational cost.[Significance-POS], [IMP-POS]",Significance,,,,,,IMP,,,,,POS,,,,,,POS,,,, 7227,"However, the stacked RNN with bipolar activation are not competitive regarding to other recurrent architectures.[architectures-NEU], [EMP-NEG]",architectures,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 7228,"It is not clear what are the advantage of deep stacked RNN in that context.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 7232,"The overall algorithm is very simple to implement and can do reasonably well on some simple control tasks, but quickly gets overwhelmed by higher-dimensional and stochastic environments.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7234,"I am sure this idea has been tried before in the 90s but I am not familiar enough with all the literature to find it (A quick google search brings this up: Reinforcement Learning of Active Recognition Behaviors, with a chapter on nearest-neighbor lookup for policies: https://people.eecs.berkeley.edu/~trevor/papers/1997-045/node3.html).[idea-NEG], [NOV-NEG]",idea,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7235,"Although I believe there is work to be done in the current round of RL research using nearest neighbor policies, I don't believe this paper delves very far into pushing new ideas (even a simple adaptive distance metric could have provided some interesting results, nevermind doing a learned metric in a latent space to allow for rapid retrainig of a policy on new domains....),[ideas-NEG], [EMP-NEG]",ideas,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7236,"and for that reason I don't think it has a place as a conference paper at ICLR.[conference paper-NEG], [APR-NEG]",conference paper,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 7237,"I would suggest its submission to a workshop where it might have more use triggering discussion of further work in this area.[further work-NEU], [APR-NEU]",further work,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 7239,"Quality The paper is well-written and clear, and includes relevant comparisons to previous work (NPI and recursive NPI).[paper-POS, comparisons-POS], [CLA-POS, CMP-POS]",paper,comparisons,,,,,CLA,CMP,,,,POS,POS,,,,,POS,POS,,, 7240,"Clarity The paper is clearly written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7241,"Originality To my knowledge the method proposed in this work is novel.[method-POS], [NOV-POS]",method,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7242,"It is the first to study constructing minimal training sets for NPI given a black-box oracle.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 7244,"Significance The work could be potentially significant,[work-NEU], [IMP-POS]",work,,,,,,IMP,,,,,NEU,,,,,,POS,,,, 7245,"but there are some very strong assumptions made in the paper that could limit the impact.[assumptions-NEG], [IMP-NEG]",assumptions,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 7246,"If the NPI has access to a black-box oracle, it is not clear what is the use of training an NPI in the first place.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7247,"It would be very helpful to describe a potential scenario where the proposed approach could be useful.[proposed approach-NEU], [SUB-NEU]",proposed 
approach,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7248,"Also, it is assumed that the number of possible inputs is finite (also true for the recursive NPI paper), and it is not clear what techniques or lessons of this paper might transfer to tasks with perceptual inputs.[techniques-NEG], [SUB-NEG, EMP-NEG]",techniques,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 7249,"The main technical contribution is the search procedure to find minimal training sets and pare down the observation size, and the empirical validation of the idea on several algorithmic tasks.[technical contribution-NEU], [EMP-NEU]",technical contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7250,"Pros - Greatly improves the data efficiency of recursive NPI.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7251,"- Training and verification sets are automatically generated by the proposed method.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7252,"Cons - Requires access to a black-box oracle to construct the dataset.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7253,"- Not clear that the idea will be useful in more complex domains with unbounded inputs. [idea-NEG], [IMP-NEG]",idea,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 7257,"The authors try different methods of aggregating attention for the decoder copy mechanism and find that summing token probabilities works significantly better than alternatives; this result could be useful beyond just Seq2SQL models (e.g., for summarization).[result-POS], [CMP-POS, EMP-POS]",result,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 7258,"Experiments on the WikiSQL dataset demonstrate state-of-the-art results, and detailed ablations measure the impact of each component of the model. [Experiments-POS, results-POS, model-POS], [EMP-POS]",Experiments,results,model,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 7259,"Overall, even though the architecture is not very novel,;[architecture-NEG], [NOV-NEG]",architecture,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7260,"the paper is well-written and the results are strong;[paper-POS, results-POS], [CLA-POS]",paper,results,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 7261,"as such, I'd recommend the paper for acceptance.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 7262,"Some questions: - How can the proposed approach scale to more complex queries (i.e., those not found in WikiSQL)?[proposed approach-NEU], [EMP-NEU]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7263,"Could the output grammar be extended to support joins, for instance? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7264,"As the grammar grows more complex, the typed decoder may start to lose its effectiveness.Some discussion of these issues would be helpful.[discussion-NEU], [SUB-NEU]",discussion,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7265,"- How does the additional preprocessing done by the authors affect the performance of the original baseline system of Zhong et al.?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7266,"In general, some discussion of the differences in preprocessing between this work and Zhong et al. 
would be good (do they also use column annotation)?[discussion-NEU, work-NEU], [CMP-NEU]",discussion,work,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 7272,"Unfortunately, they are not able to show any types of synthetic noise helping address natural noise.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7273,"However, they are able to show that a system trained on a mixture of error types is able to perform adequately on all types of noise.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7274,"This is a thorough exploration of a mostly under-studied problem.[null], [SUB-POS]",null,,,,,,SUB,,,,,,,,,,,POS,,,, 7275,"The paper is well-written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7276,"The authors do a good job of positioning their study with respect to related work on black-box adversarial techniques, but overall, by working on the topic of noisy input data at all, they are guaranteed novelty.[related work-POS, novelty-POS], [EMP-POS, NOV-POS]",related work,novelty,,,,,EMP,NOV,,,,POS,POS,,,,,POS,POS,,, 7277,"The inclusion of so many character-based systems is very nice, but it is the inclusion of natural sources of noise that really makes the paper work.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7278,"Their transplanting of errors from other corpora is a good solution to the problem, and one likely to be built upon by others.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7279,"In terms of negatives, it feels like this work is just starting to scratch the surface of noise in NMT.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7280,"The proposed meanChar architecture doesn't look like a particularly good approach to producing noise-resistant translation systems, and the alternative solution of training on data where noise has been introduced through replacement tables isn't extremely satisfying.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7281,"Furthermore, the use of these replacement tables means that even when the noise is natural, it's still kind of artificial.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7282,"Finally, this paper doesn't seem to be a perfect fit for ICLR, as it is mostly experimental with few technical contributions that are likely to be impactful; it feels like it might be more at home and have greater impact in a *ACL conference.[paper-NEG, technical contributions-NEG], [IMP-NEG, APR-NEG]",paper,technical contributions,,,,,IMP,APR,,,,NEG,NEG,,,,,NEG,NEG,,, 7283,"Regarding the artificialness of their natural noise - obviously the only solution here is to find genuinely noisy parallel data, but even granting that such a resource does not yet exist, what is described here feels unnaturally artificial.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7284,"First of all, errors learned from the noisy data sources are constrained to exist within a word.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7285,"This tilts the comparison in favour of architectures that retain word boundaries (such as the charCNN system here), while those systems may struggle with other sources of errors such as missing spaces between words.[comparison-NEU, architectures-NEU], [EMP-NEU]",comparison,architectures,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7287,"This seems worse than estimating the frequency of the error and applying them stochastically (or trying to learn when an error is likely to occur).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7288,"I feel like these issues should at least be mentioned in the paper, 
so it is clear to the reader that there is work left to be done in evaluating the system on truly natural noise. [issues-NEG], [SUB-NEG]",issues,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7289,"Also, it is somewhat jarring that only the charCNN approach is included in the experiments with noisy training data (Table 6).[experiments-NEG, experiments-NEU], [EMP-NEU]",experiments,experiments,,,,,EMP,,,,,NEG,NEU,,,,,NEU,,,, 7290,"I realize that this is likely due to computational or time constraints, but it is worth providing some explanation in the text for why the experiments were conducted in this manner.[explanation-NEU, text-NEU, experiments-NEU], [SUB-NEU]",explanation,text,experiments,,,,SUB,,,,,NEU,NEU,NEU,,,,NEU,,,, 7291,"On a related note, the line in the abstract stating that ""... a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise"" implies that the other (non-charCNN) architectures could not learn these representations, when in reality, they simply weren't given the chance.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7292,"Section 7.2 on the richness of natural noise is extremely interesting,[Section-POS], [EMP-POS]",Section,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7293,"but maybe less so to an ICLR audience.[null], [APR-NEG, IMP-NEG]",null,,,,,,APR,IMP,,,,,,,,,,NEG,NEG,,, 7294,"From my perspective, it would be interesting to see that section expanded, or used as the basis for future work on improve architectures or training strategies.[section-NEU, future work-NEU], [SUB-NEG, IMP-NEG]",section,future work,,,,,SUB,IMP,,,,NEU,NEU,,,,,NEG,NEG,,, 7302,"They distinguish theirs approaches into 1) structured updates and 2) sketched updates.[approaches-NEU], [CMP-NEU]",approaches,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7308,"The major contribution of this paper is their experimental section, where the authors show the effects of training with structured, or sketched updates, in terms of reduced communication cost, and the effect on the training accuracy.[contribution-NEU, experimental section-POS, accuracy-POS], [EMP-POS]",contribution,experimental section,accuracy,,,,EMP,,,,,NEU,POS,POS,,,,POS,,,, 7309,"They present experiments on several data sets, and observe that among all the techniques, random quantization can have a significant reduction of up to 32x in communication with minimal loss in accuracy.[experiments-POS, accuracy-POS], [EMP-POS]",experiments,accuracy,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7310,"My main concern about this paper is that although the presented techniques work well in practice,[techniques-POS], [EMP-POS]",techniques,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7311,"some of the algorithms tested are similar algorithms that have already been proven to work well in practice.[algorithms-NEU, practice-NEU], [CMP-NEU]",algorithms,practice,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 7314,"Although the authors cite QSGD, they do not directly compare against it in experiments.[experiments-NEG], [CMP-NEG]",experiments,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7315,"As a matter of fact, one of the issues of the presented quantized techniques (the fact that random rotations might be needed when the dynamic range of elements is large, or when the updates are nearly sparse) is easily resolved by algorithms like QSGD and Terngrad that respect (and promote) sparsity in the updates.[issues-NEG, algorithms-NEU], [CMP-NEU]",issues,algorithms,,,,,CMP,,,,,NEG,NEU,,,,,NEU,,,, 7316,"A more minor comment is that it is unclear that averaging is the right way to combine locally trained 
models for nonconvex problems.[models-NEG], [EMP-NEG]",models,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7319,"Another minor comment: The legends in the figures are tiny, and really hard to read.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7320,"Overall this paper examines interesting structured and randomized low communication updates for distributed FL,[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7321,"but lacks some important experimental comparisons.[experimental comparisons-NEG], [CMP-NEG]",experimental comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7328,"Specifically, the method tries to discover feature representations, which are invariance in different domains, by minimizing the re-weighted empirical risk and distributional shift between designs.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7329,"Overall, the paper is well written and organized with good description on the related work, research background, and theoretic proofs.[paper-POS, description-POS, related work-POS, research background-POS, theoretic proofs-POS], [CLA-POS, EMP-POS]",paper,description,related work,research background,theoretic proofs,,CLA,EMP,,,,POS,POS,POS,POS,POS,,POS,POS,,, 7330,"The main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift.[main contribution-POS], [EMP-POS]",main contribution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7331,"However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of main interests, it is expected to add the related analysis in the experiment section.[paper-NEU, related analysis-NEU, experiment section-NEU], [SUB-NEU]]",paper,related analysis,experiment section,,,,SUB,,,,,NEU,NEU,NEU,,,,NEU,,,, 7333,"This in itself is not even new, but the authors replace a linear output layer with squared error (proposed in another, earlier paper) by a softmax layer with cross-entropy.[null], [NOV-NEU]",null,,,,,,NOV,,,,,,,,,,,NEU,,,, 7334,"Unsuprisingly, this leads to an improvement.[improvement-NEU], [EMP-NEU]",improvement,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7335,"The title is misleading.[title-NEG], [CLA-NEG]",title,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7336,"There is nothing deep in this architecture.[architecture-NEG], [SUB-NEG]",architecture,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7337,"It is a shallow architecture with a single RBF-like hidden layer.[architecture-NEG], [SUB-NEG]",architecture,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7338,"There is a tiny ounce of novelty in that the authors propose to improve a supervised version of the SOM by using what should have been used in the first place according to modern good practice.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7340,"Another misleading thing is the term self-organizing used throughout, which is roughly synonym to learning according to me, and not something uniquely belonging to the SOM family of models, as used by the authors.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7341,"As an example of time-travel to the past, the authors talk about RBMs and stacks of auto-encoders as if that was the deep learning state-of-the-art.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7342,"The authors even call these methods 'recent'! Clearly not the case. 
[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 7343,"Unfortunately, it's not just talk, they are also the point of comparison in the experiments, i.e., there are no comparison with modern deep learning methods.[experiments-NEG], [CMP-NEG]",experiments,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7344,"Even the datasets are outdated (from the 90s?).[datasets-NEG], [NOV-NEG]",datasets,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7345,"Vocabulary is wrong in other places, for example the word semi-supervised is wrongly understood and used. [Vocabulary-NEG], [CLA-NEG]",Vocabulary,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7347,"Where the label 'semi-supervised' is used (page 4) is actually wrong: yes the labels are used, but of course it is the *gradients* which show up in the update, not the labels themselves directly.[label-NEU, page-NEU], [EMP-NEU]",label,page,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7348,"It's also not true that there is little research in understanding the formation of internal representations.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7349,"There is a whole subfields of papers trying to interpret the features learned by deep networks, and much work designing learning frameworks and objectives to achieve better representations, e.g, to better disentangle the underlying factors.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7351,"The paper uses much space to show how to compute gradients in the proposed architecture: there is obviously no need for this in a day and age where gradients are automatically derived by software.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7352,"The cherry on the sundae are the experimental results.[experimental results-NEG], [EMP-POS]",experimental results,,,,,,EMP,,,,,NEG,,,,,,POS,,,, 7353,"How could the authors get 16% on MNIST with an MLP of any kind?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7354,"It does not seem right at all.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7355,"Even a linear regression would get at least half of that.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7356,"As there are not enough experimental details to judge, it's hard to figure out the problem, but this ppaper is clearly not publishable at any of the quality machine learning venues, for weakness in originality, quality of the writing, and poor experiments. 
[experimental details-NEG, originality-NEG, writing-NEG, experiments-NEG], [CLA-NEG, PNF-NEG, NOV-NEG, REC-NEG, APR-NEG]",experimental details,originality,writing,experiments,,,CLA,PNF,NOV,REC,APR,NEG,NEG,NEG,NEG,,,NEG,NEG,NEG,NEG,NEG 7358,"The paper leaves me guessing which part is a new contribution, and which one is already possible with conceptors as described in the Jaeger 2014 report.[new contribution-NEG], [NOV-NEG, CMP-NEG]",new contribution,,,,,,NOV,CMP,,,,NEG,,,,,,NEG,NEG,,, 7359,"Figure (1) in the paper is identical to the one in the (short version of) the Jaeger report but is missing an explicit reference.[paper-NEG, reference-NEG], [NOV-NEG, CMP-NEG]",paper,reference,,,,,NOV,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 7360,"Figure 2 is almost identical, again a reference to the original would be better.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7361,"Conceptors can be trained with a number of approaches (as described both in the 2014 Jaeger tech report and in the JMLR paper), including ridge regression.[approaches-NEU], [CMP-NEG, EMP-NEG]",approaches,,,,,,CMP,EMP,,,,NEU,,,,,,NEG,NEG,,, 7362,"What I am missing here is a clear indication what is an original contribution of the paper, and what is already possible using the original approach.[original contribution-NEG], [NOV-NEG]",original contribution,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7363,"The fact that additional conceptors can be trained does not appear new for the approach described here.[approach-NEG], [NOV-NEG, EMP-NEG]",approach,,,,,,NOV,EMP,,,,NEG,,,,,,NEG,NEG,,, 7364,"If the presented approach was an improvement over the original conceptors, the evaluation should compare the new and the original version.[presented approach-NEG, evaluation-NEU], [CMP-NEG, EMP-NEG]",presented approach,evaluation,,,,,CMP,EMP,,,,NEG,NEU,,,,,NEG,NEG,,, 7365,"The evaluation also leaves me a little confused in an additional dimension: the paper title and abstract suggested that the contribution is about overcoming catastrophic forgetting.[evaluation-NEG, title-NEG, abstract-NEG], [PNF-NEG]",evaluation,title,abstract,,,,PNF,,,,,NEG,NEG,NEG,,,,NEG,,,, 7366,"The evaluation shows that the approach performs better classifying MNIST digits than another approach.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7367,"This is nice but doesn't really tell me much about overcoming catastrophic forgetting. 
[null], [SUB-NEG]]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7371,"- The paper proposes a (I believe) novel method to obtain visual explanations.[paper-POS, method-POS], [NOV-POS]",paper,method,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 7372,"The results are visually compelling[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7373,"although most results are shown on a medical dataset - which I feel is very hard for most readers to follow.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7374,"The MNIST explanations help a lot.[explanations-NEU], [SUB-POS]",explanations,,,,,,SUB,,,,,NEU,,,,,,POS,,,, 7375,"It would be great if the authors could come up with an additional way to demonstrate their method to the non-medical reader.[method-NEU], [IMP-NEU]",method,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 7376,"- The paper shows that the results are plausible using a neat trick.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7377,"The authors train their system with the testdata included which leads to very different visualizations.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7378,"It would be great if this analysis could be performed for MNIST as well.[analysis-NEU], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 7381,"minor comments: - some figures with just two parts are labeled from left to right - it would be better to just write left: ... right: ...[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7382,"n- figure 2: do these images correspond to each other?[figure-NEG], [PNF-NEU]",figure,,,,,,PNF,,,,,NEG,,,,,,NEU,,,, 7383,"If yes, it would be good to show them pairwise. - figure 5: please explain why the saliency map is relevant.[figure-NEU], [SUB-NEU]",figure,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7387,"The authors argue that the success of these simple approaches on these tasks suggest that more changing problems need to be used to assess new RL algorithms.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7388,"This paper is clearly written[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7389,"and it is important to compare simple approaches on benchmark problems[benchmark-NEG], [CMP-NEG]",benchmark,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7391,"However, the originality and significance of this work is a significant drawback.[originality-NEG, significance-NEG], [NOV-NEG, IMP-NEG]",originality,significance,,,,,NOV,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 7393,"(and probably much further). So the algorithms themselves are not particularly novel, and are limited to nearly-deterministic domains with either single sparse rewards (success or failure rewards) or introducing extra hyper-parameters per task.[algorithms-NEG], [EMP-NEG, NOV-NEG]",algorithms,,,,,,EMP,NOV,,,,NEG,,,,,,NEG,NEG,,, 7394,"The significance of this work would still be quite strong if, as the author's suggest, these benchmarks were being widely used to assess more sophisticated algorithms and yet these tasks were mastered by such simple algorithms with no learnable parameters. [significance-NEU, benchmarks-NEU], [IMP-NEU, EMP-NEU]",significance,benchmarks,,,,,IMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 7396,"Even if we ignore that for most tasks only the sparse reward (which favors this algorithm) version was examined, these author's only demonstrate success on 4, relatively simple tasks.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7397,"While these simple tasks are useful for diagnostics, it is well-known that these tasks are simple and, as the author's suggest more challenging tasks .... 
are necessary to properly assess advances made by sophisticated, optimization-based policy algorithms. [tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7408,"So the use of cross-task transfer performance and the task clustering approach can only capture positive correlations between tasks but ignore the negative task relations which are also important to the sharing among tasks in multi-task learning.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7409,"Problem (2) is identical to robust PCA and Theorem 3.1 is common in matrix completion literature.[Problem-NEG, Theorem-NEG], [NOV-NEG]",Problem,Theorem,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 7410,"I don't see much novelty.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7411,"Appendix A seems obvious but it cannot prove the validity of the assumption made in problem (2).[Appendix-NEG, assumption-NEG], [EMP-NEG]",Appendix,assumption,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7413,"I don't know whether the low-rank structure does exist in the cross-task transfer performance or not.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7414,"The two parts in this paper are not new.[parts-NEG, paper-NEG], [NOV-NEG]",parts,paper,,,,,NOV,,,,,NEG,NEG,,,,,NEG,,,, 7415,"The combination of the two parts seems a bit incremental and does not bring much novelty.[parts-NEG], [NOV-NEG]]",parts,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7416,"1. This is a good paper, makes an interesting algorithmic contribution in the sense of joint clustering-dimension reduction for unsupervised anomaly detection 2.[paper-POS, algorithmic contribution-POS], [EMP-POS, IMP-POS]",paper,algorithmic contribution,,,,,EMP,IMP,,,,POS,POS,,,,,POS,POS,,, 7417,"It demonstrates clear performance improvement via comprehensive comparison with state-of-the-art methods 3.[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7418,"Is the number of Gaussian Mixtures 'K' a hyper-parameter in the training process?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7419,"can it be a trainable parameter?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7420,"4. 
Also, it will be interesting to get some insights or anecdotal evidence on how the joint learning helps beyond the decoupled learning framework, such as what kind of data points (normal and anomalous) are moving apart due to the joint learning [insights-NEU, anecdotal evidence-NEU], [SUB-NEU]]",insights,anecdotal evidence,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 7421,"The authors present a novel evolution scheme applied to neural network architecture search.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 7427,"They find complex cells that lead to state-of-the-art performance on benchmark dataset CIFAR-10 and ImageNet.[performance-POS], [EMP-POS]",performance,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7429,"The method proposed for an hierarchical representation for optimizing over neural network designs is well thought and sound.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7430,"It could lead to new insight on automating design of neural networks for given problems.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 7431,"In addition, the authors present results that appear to be on par with the state-of-the-art with architecture search on CIFAR-10 and ImageNet benchmark datasets.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7432,"The paper presents a good work and is well articulated.[paper-POS], [EMP-POS, CLA-POS]",paper,,,,,,EMP,CLA,,,,POS,,,,,,POS,POS,,, 7433,"However, it could benefit from additional details and a deeper analysis of the results.[details-NEG, analysis-NEG, results-NEG], [SUB-NEG]",details,analysis,results,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 7434,"The key idea is a smart evolution scheme.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7438,"Thought, the paper could benefit from a more detailed analysis of the architectures found by the algorithm.[paper-NEG, analysis-NEG], [SUB-NEG]",paper,analysis,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 7441,"The authors should try to give their opinion about the design obtained.[opinion-NEG], [SUB-NEG]",opinion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7442,"The implementation seems technically sound. The experiments and results section shows that the authors are confident and the evaluation seems correct. However, paragraphs on the architectures could be a bit clearer for the reader. The diagram could be more complete and reflect better the description.[implementation-POS, experiments-POS, result-POS, paragraphs-NEG, diagram-NEG], [EMP-POS, CLA-NEG, PNF-NEG]",implementation,experiments,result,paragraphs,diagram,,EMP,CLA,PNF,,,POS,POS,POS,NEG,NEG,,POS,NEG,NEG,, 7443,"During evaluation, what is a step? 
A batch or an epoch or other?[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7444,"The method seems relatively efficient as it took 36 hours to converge in a field traditionally considered as heavy in terms of computation, but at the requirement of using 200 GPU.[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7445,"It raises questions on the usability of the method for small labs.[method-NEG], [IMP-NEG]",method,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 7446,"At some point, we will have to use insights from this search to stop early, when no improvement is expected.[insights-NEU], [IMP-NEU]",insights,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 7448,"This should be supported by some quantitative results.[quantitative results-NEG], [SUB-NEG]",quantitative results,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7449,"The paper would greatly benefit from a deeper comparison over other techniques.[paper-NEG, comparison-NEG, other techniques-NEG], [SUB-NEG]",paper,comparison,other techniques,,,,SUB,,,,,NEG,NEG,NEG,,,,NEG,,,, 7452,"It could have taken more spaces in the paper.[spaces-NEG], [PNF-NEG]",spaces,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7453,"I am also concerned the computational efficiency of the results obtained with this method on current processors.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7454,"Indeed, the randomness of the found cells could be less efficient in terms of computation that what we can get from a well-structured network designed by hand.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7455,"Exploiting the structure of the GPUs (cache size, sequential accesses, etc.) allows to get best possible performance from the hardware at hand.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7456,"Does the solution obtained with the optimization can be run as efficiently?[solution-NEU], [EMP-NEU]",solution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7457,"A short analysis forward pass time of optimized cells vs. popular models could be an interesting addition to the paper.[analysis-NEG, paper-NEG], [SUB-NEG]",analysis,paper,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 7460,"The paper is an extension of Kawaguchi'16.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 7462,"I think the main technical concerns with the paper is that the technique only applies to a linear model, and it doesn't sound the techniques are much beyond Kawaguchi'16.[technique-NEG], [EMP-NEG]",technique,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7463,"I am happy to see more papers on linear models, but I would expect there are more conceptual or technical ingredients in it.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7464,"As far as I can see, the same technique here will fail for non-linear models for the same reason as Kawaguchi's technique.[technique-NEG], [CMP-NEG]",technique,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7465,"Also, I think a more interesting question might be turning the landscape results into an algorithmic result --- have an algorithm that can guarantee to converge a global minimum.[question-NEU, results-NEU, algorithm-NEU], [IMP-NEU, EMP-NEU]",question,results,algorithm,,,,IMP,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 7466,"This won't be trivial because the deep linear networks do have a lot of very flat saddle points and therefore it's unclear whether one can avoid those saddle points. 
[null], [EMP-NEG]]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7472,"The method is novel, and the paper is generally well written.[method-POS, paper-POS], [CLA-POS, NOV-POS]",method,paper,,,,,CLA,NOV,,,,POS,POS,,,,,POS,POS,,, 7473,"I unfortunately have several issues with the paper in its current form, most importantly around the experimental comparisons.[experimental comparisons-NEG], [CMP-NEG]",experimental comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7474,"The paper is severely weakened by not comparing experimentally to other learning (hierarchical) schemes, such as options or HAMs.[paper-NEG], [CMP-NEG]",paper,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7475,"None of the comparisons in the paper feature any learning.[comparisons-NEG], [CMP-NEG]",comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7476,"Ideally, one should see the effect of learning with options (and not primitive actions) to fairly compare against the proposed framework.[proposed framework-NEG], [CMP-NEG]",proposed framework,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7477,"At some level, I question whether the proposed framework is doing any more than just value function propagation at a task level, and these experiments would help resolve this.[proposed framework-NEU, experiments-NEU], [EMP-NEU]",proposed framework,experiments,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7478,"Additionally, the example domain makes no sense.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7479,"Rather use something more standard, with well-known baselines, such as the taxi domain.[baselines-NEU], [EMP-NEU]",baselines,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7480,"I would have liked to see a discussion in the related work comparing the proposed approach to the long history of reasoning with subtasks from the classical planning literature, notably HTNs.[proposed approach-NEU], [CMP-NEU]",proposed approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7481,"I found the description of the training of the method to be rather superficial, and I don't think it could be replicated from the paper in its current level of detail.[description-NEG], [EMP-NEG]",description,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7482,"The approach raises the natural questions of where the tasks and the task graphs come from.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7483,"Some acknowledgement and discussion of this would be useful.[acknowledgement-NEG, discussion-NEG], [SUB-NEG]",acknowledgement,discussion,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 7484,"The legend in the middle of Fig 4 obscures the plot (admittedly not substantially).[legend-NEG, Fig-NEG], [PNF-NEG]",legend,Fig,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 7485,"There are also a number of grammatical errors in the paper, including the following non-exhaustive list: 2: as well as how to do -> as well as how to do it[errors-NEG, paper-NEG], [PNF-NEG]",errors,paper,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 7486,"Fig 2 caption: through bottom-up -> through a bottom-up 3: Let S be a set of state -> Let S be a set of states 3: form of task graph -> form of a task graph[Fig-NEG], [PNF-NEG]",Fig,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7487,"3: In addtion -> In addition 4: which is propagates -> which propagates 5: investigated following -> investigated the following[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 7491,"The theoretical result of the ProxProp considers the full batch, and it can not be easily extended to the stochastic variant (mini-batch).[theoretical result-NEU], [EMP-NEU]",theoretical result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7492,"The reason is that the gradient of proximal is evaluated at the future 
point, and different functions will have different future points.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7494,"In the numerical experiment, the parameter tau_theta is sensitive to the final solution.[experiment-NEU], [EMP-NEU]",experiment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7495,"Therefore, how to choose this parameter is essential.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7496,"Given a new dataset, how to determine it for a good performance.[dataset-NEU, performance-NEU], [EMP-NEU]",dataset,performance,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7499,"Does it happen on this dataset only or it is the case for many datasets? [dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7508,"This is a very interesting area and exciting work.[work-POS], [EMP-POS]",work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7509,"The main idea behind the proposed test is very insightful. [main idea-POS], [EMP-POS]",main idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7510,"The main theoretical contribution stimulates and motivates much needed further research in the area.[theoretical contribution-POS], [IMP-POS]",theoretical contribution,,,,,,IMP,,,,,POS,,,,,,POS,,,, 7511,"In my opinion both contributions suffer from some significant limitations.[contributions-NEG, limitations-NEG], [IMP-NEG]",contributions,limitations,,,,,IMP,,,,,NEG,NEG,,,,,NEG,,,, 7512,"However, given how little we know about the behavior of modern generative models, it is a good step in the right direction.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7513,"1. The biggest issue with the proposed test is that it conflates mode collapse with non-uniformity. [proposed test-NEG], [EMP-NEG]",proposed test,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7514,"The authors do mention this issue, but do not put much effort into evaluating its implications in practice, or parsing Theorems 1 and 2.[issue-NEU], [SUB-NEG]",issue,,,,,,SUB,,,,,NEU,,,,,,NEG,,,, 7515,"My current understanding is that, in practice, when the birthday paradox test gives a collision I have no way of knowing whether it happened because my data distribution is modal, or because my generative model has bad diversity.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7516,"Anecdotally, real-life distributions are far from uniform, so this should be a common issue.[issue-NEU], [EMP-NEU]",issue,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7517,"I would still use the test as a part of a suite of measurements, but I would not solely rely on it.[test-NEU], [EMP-NEG]",test,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 7518,"I feel that the authors should give a more prominent disclaimer to potential users of the test.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7519,"2. Also, given how mode collapse is the main concern, it seems to me that a discussion on coverage is missing.[discussion-NEG], [SUB-NEG]",discussion,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7520,"The proposed test is a measure of diversity, not coverage, so it does not discriminate between a generator that produces all of its samples near some mode and another that draws samples from all modes of the true data distribution.[proposed test-NEU], [EMP-NEU]",proposed test,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7521,"As long as they yield collisions at the same rate, these two generative models are 'equally diverse'. Isn't coverage of equal importance?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7522,"3. 
The other main contribution of the paper is Theorem 3, which shows, via a very particular construction on the generator and encoder, that bidirectional GANs can also suffer from serious mode collapse. [main contribution-NEU, Theorem-NEU], [EMP-NEG]",main contribution,Theorem,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 7523,"I welcome and are grateful for any theory in the area.[theory-NEU], [EMP-NEU]",theory,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7524,"This theorem might very well capture the underlying behavior of bidirectional GANs, however, being constructive, it guarantees nothing in practice. [theorem-NEG], [EMP-NEG]",theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7525,"In light of this, the statement in the introduction that ""encoder-decoder training objectives cannot avoid mode collapse"" might need to be qualified.[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7526,"In particular, the current statement seems to obfuscate the understanding that training such an objective would typically not result into the construction of Theorem 3.[statement-NEG, Theorem-NEU], [EMP-NEG]",statement,Theorem,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 7529,"Prior techniques which can address some of these aspects do not necessarily work with deep learning, which is a key focus of the paper.[techniques-NEG, paper-NEG], [CMP-NEG]",techniques,paper,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 7533,"To deal with the large number of tasks, the authors further propose computing a few randomly sampled entries of the similarity matrix, and then using ideas from robust matrix completion to induce the full matrix.[ideas-NEU], [EMP-NEU]",ideas,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7535,"I think there are some interesting ideas in this paper, and the use of matrix completion techniques to deal with a large number of tasks is nice.[ideas-POS], [EMP-POS]",ideas,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7536,"But I believe there are important drawbacks in the framing and basic methodology and evaluation which make the paper unfit for publication in its current form.[drawbacks-NEG, basic methodology-NEG], [EMP-NEG]",drawbacks,basic methodology,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7537,"1. The prior works which do task clustering and multitask learning typically focus on how one might induce clusters which work well with the multitask learning methods used (see e.g. Kang et al. which is cited, as well as Kshirsagar et al. in ECML 2017 as two examples).[methods-NEU], [CMP-NEU]",methods,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7538,"In this paper, on the other hand, the clusters are obtained in a manner which only accounts for pairwise similarities of tasks, using a pairwise similarity metric which is quite different from how the cluster is eventually used.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7539,"This seems quite suboptimal.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7540,"2. 
The pairwise similarity measure appears to be one that might have a high false negative rate.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7541,"That is, it might rate many tasks as dissimilar even when they are not.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7542,"This is because you train individual model on i and apply it to j.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7543,"It is possible that this model does not do well, but there is an equally good model for i which also does well on j.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7544,"Such a model would indeed be found if i and j are put in the same cluster, but the method would fail to do so, leading to high fragmentation.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7545,"3. I do not see how you apply the model from task i to task j when the two have different output spaces.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7546,"Since this is a major motivation of the paper, I actually do not see how the setup makes sense![paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7547,"4. It seems odd to put absolute errors on task j instead of regret to the model trained on j in the similarity matrix.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7548,"5. The inducing of edges in the Y matrix by comparing to a mean and standard deviation is completely baseless.[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 7549,"Without good reasoning from the authors, I see no reason why the entries in the row of a matrix should have a normal-like distribution.[reason-NEG], [EMP-NEG]",reason,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7550,"Furthermore, in the matrix completion scenario, you have O(log^2n) entries per row on average, which means with high probability few rows should have a constant number of entries.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7551,"In this case, the means are standard deviations do not even make sense to me.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7552,"At the very least, I would consider using regret to the model of the task, and compute some quantiles on that which is still suspect in the matrix completion setting.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7553,"6 .In the evaluation, why are just 12 tasks used in the Amazon dataset?[evaluation-NEU], [EMP-NEU]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7554,"Why don't you present evaluation results on all tasks in the multitask setting?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7555,"7. Why is average accuracy the right thing?[accuracy-NEU], [EMP-NEU]",accuracy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7556,"If the error rates are different for different tasks, it is not sensible to measure raw accuracies.[raw accuracies-NEU], [EMP-NEU]",raw accuracies,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7557,"The authors also seem to miss a potentially relevant baseline in Cross-Stitch Networks (https://arxiv.org/abs/1604.03539)[baseline-NEG], [CMP-NEG]",baseline,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7559,"I do not see why there's need for a proof for the matrix completion result. 
[proof-NEG, result-NEG], [EMP-NEG]",proof,result,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7560,"This appears to be a direct application of Chandrasekaran et al, and in fact matrix completion has been used for clustering before (https://arxiv.org/abs/1104.4803).[application-NEG], [NOV-NEG]",application,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7561,"Given this, the presentation in the paper makes the idea look more novel than it is.[presentation-POS, paper-NEG], [NOV-NEG, PNF-POS]",presentation,paper,,,,,NOV,PNF,,,,POS,NEG,,,,,NEG,POS,,, 7562,"I also think that the authors might benefit from dropping the whole few-shot learning angle here, and instead do a more thorough job of evaluating their multitask learning method.[method-NEG], [EMP-NEG]]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7566,"but I have a few concerns which I've listed below: 1. Section 4, which describes the experiments of compressing server sized acoustic models for embedded recognition seems a bit ""disjoint"" from the rest of the paper.[Section-NEG, experiments-NEG], [EMP-NEG]",Section,experiments,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7567,"I had a number of clarification questions spefically on this section: - Am I correct that the results in this section do not use the trace-norm regularization at all?[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7568,"It would strengthen the paper significantly if the experiments presented on WSJ in the first section were also conducted on the ""internal"" task with more data.[paper-NEU, experiments-NEU, data-NEU], [SUB-NEU, EMP-NEU]",paper,experiments,data,,,,SUB,EMP,,,,NEU,NEU,NEU,,,,NEU,NEU,,, 7569,"- How large are the training/test sets used in these experiments (for test sets, number of words, for training sets, amount of data in hours (is this ~10,000hrs), whether any data augmentation such as multi-style training was done, etc.)[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7570,"- What are the ""tier-1"" and ""tier-2"" models in this section?[models-NEU], [EMP-NEU]",models,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7571,"It would also aid readability if the various models were described more clearly in this section, with an emphasis on structure, output targets, what LMs are used, how are the LMs pruned for the embedded-size models, etc.[readability-NEU, section-NEG], [SUB-NEG, CLA-NEG]",readability,section,,,,,SUB,CLA,,,,NEU,NEG,,,,,NEG,NEG,,, 7572,"Also, particularly given that the focus is on embedded speech recognition, of which the acoustic model is one part, I would like a few more details on how decoding was done, etc.[details-NEU], [SUB-NEU]",details,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7573,"- The details in appendix B are interesting, and I think they should really be a part of the main paper.[details-POS], [PNF-NEU]",details,,,,,,PNF,,,,,POS,,,,,,NEU,,,, 7574,"That being said, the results in Section B.5, as the authors mention, are somewhat preliminary,[results-NEG, Section-NEU], [EMP-NEG]",results,Section,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 7575,"and I think the paper would be much stronger if the authors can re-run these experiments were models are trained to convergence.[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7576,"- The paper focuses fairly heavily on speech recognition tasks, and I wonder if it would be more suited to a conference on speech recognition.[paper-NEG], [APR-NEU]",paper,,,,,,APR,,,,,NEG,,,,,,NEU,,,, 7577,"2. 
Could the authors comment on the relative training time of the models with the trace-norm regularizer, L2-regularizer and the unconstrained model in terms of convergence time.[comment-NEU], [EMP-NEU]",comment,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7578,"3. Clarification question: For the WSJ experiments was the model decoded without an LM?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7579,"If no LM was used, then the choice of reporting results in terms of only CER is reasonable,[results-NEU], [EMP-POS]",results,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 7580,"but I think it would be good to also report WERs on the WSJ set in either case.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7581,"4. Could the authors indicate the range of values of lambda_{rec} and lambda_{nonrec} that were examined in the work?[work-NEU], [SUB-NEU]",work,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7582,"Also, on a related note, in Figure 2, does each point correspond to a specific choice of these regularization parameters?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7583,"5. Figure 4: For the models in Figure 4, it would be useful to indicate the starting CER of the stage-1 model before stage-2 training to get a sense of how stage-2 training impacts performance.[models-NEU, Figure-NEU, performance-NEU], [SUB-NEU]",models,Figure,performance,,,,SUB,,,,,NEU,NEU,NEU,,,,NEU,,,, 7584,"6. Although the results on the WSJ set are interesting,[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7585,"I would be curious if the same trends and conclusions can be drawn from a larger dataset -- e.g., the internal dataset that results are reported on later in the paper, or on a set like Switchboard.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7586,"I think these experiments would strengthen the paper.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7587,"7. The experiments in Section 3.2.3 were interesting, since they demonstrate that the model can be warm-started from a model that hasn't fully converged.[experiments-POS, Section-NEU, model-POS], [EMP-POS]",experiments,Section,model,,,,EMP,,,,,POS,NEU,POS,,,,POS,,,, 7588,"Could the authors also indicate the CER of the model used for initialization in addition to the final CER after stage-2 training in Figure 5.[model-NEU, Figure-NEU], [SUB-NEU]",model,Figure,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 7590,"describes a technique for training with quantized forward passes which results in models that have smaller performance degradation relative to quantization after training.[models-NEU], [SUB-NEU, EMP-NEG]",models,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEG,,, 7593,"Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin, ""On the efficient representation and execution of deep acoustic models,"" Proc. of Interspeech, pp. 2746 -- 2750, 2016. 9. 
Minor comment: The authors use the term ""warmstarting"" to refer to the process of training NNs by initializing from a previous model.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 7594,"It would be good to clarify this in the text.[text-NEU], [CLA-NEU]]",text,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 7597,"- provides fairly extensive experimental comparison of their method and 3 others (Reluplex, Planet, MIP) on 2 existing benchmarks and a new synthetic one.[experimental comparison-NEU], [CMP-NEU]",experimental comparison,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7598,"Relevance: Although there isn't any learning going on, the paper is relevant to the conference.[paper-NEU], [APR-POS]",paper,,,,,,APR,,,,,NEU,,,,,,POS,,,, 7599,"Clarity: Writing is excellent, the content is well presented and the paper is enjoyable read.[Writing-POS], [CLA-POS]",Writing,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7600,"Soundness: As far as I can tell, the work is sound.[work-POS], [EMP-POS]",work,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7601,"Novelty: This is in my opinion the weakest point of the paper.[Novelty-NEG], [NOV-NEG]",Novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7602,"There isn't really much novelty in the work.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7603,"The branch&bound method is fairly standard, two benchmarks were already existing and the third one is synthetic with weights that are not even trained (so not clear how relevant it is).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7604,"The main novel result is the experimental comparison, which does indeed show some surprising results (like the fact that BaB works so well).[result-NEU, experimental comparison-POS], [NOV-POS, CMP-POS]",result,experimental comparison,,,,,NOV,CMP,,,,NEU,POS,,,,,POS,POS,,, 7605,"Significance: There is some value in the experimental results, and it's great to see you were able to find bugs in existing methods. [experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7606,"Unfortunately, there isn't much insight to be gained from them.[insight-NEG], [IMP-NEG]",insight,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 7607,"I couldn't see any emerging trend/useful recommendations (like if your problem looks like X, then use algorithm B).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7613,"While the argument makes sense, it is not clear to me why one cannot simply index the original text.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7614,"The additional encode/decode mechanism seems to introduce unnecessary noise.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7615,"The framework does include several components and techniques from latest recent work, which look pretty sophisticated.[recent work-NEU], [CMP-NEU]",recent work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7616,"However, as the dataset is generated by simulation, with a very small set of vocabulary, the value of the proposed framework in practice remains largely unproven.[dataset-NEU, proposed framework-NEU], [SUB-NEU, EMP-NEU]",dataset,proposed framework,,,,,SUB,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 7617,"Pros: 1. An interesting framework for bAbI QA by encoding sentence to n-grams[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7618,"Cons: 1. The overall justification is somewhat unclear[justification-NEU], [CLA-NEG]",justification,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 7619,"2. 
The approach could be over-engineered for a special, lengthy version of bAbI and it lacks evaluation using real-world data [approach-NEU, evaluation-NEG], [EMP-NEU]",approach,evaluation,,,,,EMP,,,,,NEU,NEG,,,,,NEU,,,, 7628,"All of this is somewhat straightforward; a penalty is paid by representing numbers via fixed point arithmetic, which is used to deal with ReLU mostly.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7629,"This is somewhat odd: it is not clear why, e.g., garbled circuits where not used for something like this, as it would have been considerably faster than FHE.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7630,"There is also a work in this area that the authors do not cite or contrast to, bringing the novelty into question; please see the following papers and references therein:;[references-NEG], [CMP-NEG, SUB-NEG]",references,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 7635,"The first paper is the most related, also using homomorphic encryption, and seems to cover a superset of the functionalities presented here (more activation functions, a more extensive analysis, and faster decryption times).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7636,"The second paper uses arithmetic circuits rather than HE, but actually implements training an entire neural network securely.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7637,"Minor details: The problem scenario states that the model/weights is private, but later on it ceases to be so (weights are not encrypted).[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7639,"In theory, this system alone could be used to compute anything securely. This is informal and incorrect.[theory-NEG], [EMP-NEG]",theory,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7641,"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases. It is unclear what operations are referred to here.[operations-NEG], [EMP-NEG]",operations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7643,"-----UPDATE------ Having read the responses from the authors, and the other reviews, I am happy with my rating and maintain that this paper should be accepted.[rating-POS, paper-POS], [REC-POS]",rating,paper,,,,,REC,,,,,POS,POS,,,,,POS,,,, 7646,". I enjoyed reading this paper.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7647,"It is a very interesting set up, and a novel idea.[set up-POS, idea-POS], [NOV-POS]",set up,idea,,,,,NOV,,,,,POS,POS,,,,,POS,,,, 7648,"A few comments: The paper is easy to read, and largely written well[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7649,". The article is missing from the nouns quite often though so this is something that should be amended. There are a few spelling slip ups (to a certain extend --> to a certain extent, as will see --> as we will see)][null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 7650,"It appears that the output for kennen-o is a discrete probability vector for each attribute, where each entry corresponds to a possibility (for example, for batch-size it is a length 3 vector where the first entry corresponds to 64, the second 128, and the third 256)[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 7651,". 
What happens if you instead treat it as a regression task, would it then be able to hint at intermediates (a batch size of 96) or extremes (say, 512).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7652,"n A flaw of this paper is that kennen-i and io appear to require gradients from the network being probed (you do mention this in passing), which realistically you would never have access to. (Please do correct me if I have misunderstood this)[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7653,"It would be helpful if Section 4 had a paragraph as to your thoughts regarding why certain attributes are easier/harder to predict[Section-NEU], [EMP-NEU, PNF-NEU]",Section,,,,,,EMP,PNF,,,,NEU,,,,,,NEU,NEU,,, 7654,". Also, the caption for Table 2 could contain more information regarding the network outputs.[Table-NEU], [PNF-NEU]",Table,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 7655,"You have jumped from predicting 12 attributes on MNIST to 1 attribute on Imagenet[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7656,". It could be beneficial to do an intermediate experiment (a handful of attributes on a middling task)[experiment-NEU], [SUB-NEU]",experiment,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7657,". I think this paper should be accepted as it is interesting and novel[paper-POS], [NOV-POS, REC-POS]",paper,,,,,,NOV,REC,,,,POS,,,,,,POS,POS,,, 7660,"- Fairly good experimental results[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7661,"Cons ------ - kennen-i seems like it couldn't be realistically deployed - lack of an intermediate difficulty task [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7666,"The paper is very well motivated and tackles an important problem.[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7667,"However, the presentation of the method is not clear, the experiment is not sufficient, and the paper is not polished.[presentation-NEG, method-NEU, experiment-NEG], [CLA-NEG, EMP-NEG, PNF-NEG]",presentation,method,experiment,,,,CLA,EMP,PNF,,,NEG,NEU,NEG,,,,NEG,NEG,NEG,, 7668,"Pros: 1. This paper tackles an important research question. [question-POS], [EMP-POS]",question,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7669,"Learning a meaningful representation is needed in general.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7670,"For the application of images, using text description to refine the representation is a natural and important research question.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7671,"2. The proposed idea is very well motivated, and the proposed model seems correct.[proposed idea-POS, proposed model-POS], [EMP-POS]",proposed idea,proposed model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7672,"Cons and questions: 1. The presentation of the model is not clear.[presentation-NEG, model-NEU], [PNF-NEG]",presentation,model,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 7673,"Figure 2 which is the graphic representation of the model is hard to read.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7674,"There is no meaningful caption for this important figure.[figure-NEG], [PNF-NEG]",figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7675,"Which notation in the figure corresponds to which variable is not clear at all.[notation-NEG, figure-NEU], [PNF-NEG]",notation,figure,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 7676,"This also leads to unclarity of the text presentation of the model, for example, section 3.2. Which latent variable is used to decode which part?[text presentation-NEG], [PNF-NEG]",text presentation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7677,"2. 
Missing important related works.[related works-NEG], [SUB-NEG, CMP-NEG]",related works,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 7679,"The paper did not discuss these related work and did not compare the performances.[related work-NEG], [SUB-NEG, CMP-NEG]",related work,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 7687,"3. Experiment evaluation is not sufficient. [Experiment evaluation-NEG], [SUB-NEG]",Experiment evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7688,"Firstly, only one toy dataset is used for experimental evaluations.[experimental evaluations-NEU], [EMP-NEU]",experimental evaluations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7689,"More evaluations are needed to verify the method, especially with natural images.[evaluations-NEU], [SUB-NEU]",evaluations,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7690,"Secondly, there are no other state-of-the-art baselines are used.[baselines-NEU], [SUB-NEG, CMP-NEG]",baselines,,,,,,SUB,CMP,,,,NEU,,,,,,NEG,NEG,,, 7691,"The baselines are various simiplied versions of the proposed model.[baselines-NEU], [CMP-NEU]",baselines,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7695,"In the paper, only attributes of objects are used which is not semi-natural languages.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7697,"There are missing links and references in the paper and un-explained notations, and non-informative captions.[references-NEG, notations-NEG], [SUB-NEG, PNF-NEG]",references,notations,,,,,SUB,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 7699,"This method does not model spatial information.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7700,"How can the method make sure that simple adding generated images with each component will lead to a meaningful image in the end?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7701,"Especially with natural images, the spacial location and the scale should be critical. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7715,"The paper is well written and provides excellent insights. [paper-POS, insights-POS], [CLA-POS, EMP-POS]",paper,insights,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 7716,"Pros: 1. Very well written paper with good theoretical and experimental analysis.[experimental analysis-POS], [CLA-POS]",experimental analysis,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7717,"2. It provides useful insights of model behaviors which are attractive to a large group of people in the community.[insights-POS], [IMP-POS]",insights,,,,,,IMP,,,,,POS,,,,,,POS,,,, 7718,"3. The result of optimal batch size setting is useful to wide range of learning methods.[result-POS], [EMP-POS, IMP-POS]",result,,,,,,EMP,IMP,,,,POS,,,,,,POS,POS,,, 7719,"Cons and mainly questions: 1. Missing related work.[related work-NEG], [SUB-NEG, CMP-NEG]",related work,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 7720,"One important contribution of the paper is about optimal batch sizes, but related work in this direction is not discussed..[contribution-NEU, related work-NEG], [SUB-NEG, CMP-NEG]",contribution,related work,,,,,SUB,CMP,,,,NEU,NEG,,,,,NEG,NEG,,, 7724,"which also discuss the generalization ability of the model.[discussions-NEU, analysis-NEU], [SUB-NEU]",discussions,analysis,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 7727,"4. 
The results are reported mostly concerning the training iterations, not the CPU time such as in figure 3.[results-POS], [EMP-NEG]",results,,,,,,EMP,,,,,POS,,,,,,NEG,,,, 7728,"It will be fair/interesting to see the result for CPU time where small batch maybe favored more.[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7736,"u2014u2014u2014u2014u2014- Update: I lowered my rating considering other ppl s review and comments. [rating-NEG], [REC-NEG]",rating,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 7743,"The idea is nice and simple,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7744,"however the current framework has several weaknesses: 1. The whole pipeline has three (neural network) components: a) input image features are extracted from VGG net pre-trained on auxiliary data;[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7745,"2) auto-encoder that is trained on data for one-shot learning;[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7747,"This three networks need to be clearly described; ideally combined into one end-to-end training pipeline.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7748,"2. The empirical performance is very poor.[empirical performance-NEG], [EMP-NEG]",empirical performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7749,"If you look into literature for zero shot learning, work by Z. Akata in CVPR 2015, CVPR2016, the performance on AwA and on CUB-bird goes way above 50%, where in the current paper it is 30.57% and 8.21% at most (for the most recent survey on zero shot learning papers using attribute embeddings, please, refer to Zero-Shot Learning - The Good, the Bad and the Ugly by Xian et al, CVPR 2017).[performance-NEG], [CMP-NEG]",performance,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7750,"It is important to understand, why there is such a big drop in performance in one-shot learning comparing to zero-shot learning?[performance-NEG], [EMP-NEU]",performance,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7753,"I am not sure, how can the auto-encoder model not overfit completely to the training data instances.[training data-NEU], [EMP-NEG]",training data,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 7754,"Perhaps, one could try to explore the zero-shot learning setting, where there is a split between train and test classes: training the autoencoder model using large training dataset, and adapting the weights using single data points from test classes in one-shot learning setting.[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7755,"Overall, I like the idea, so I am leaning towards accepting the paper,[idea-POS], [REC-POS]",idea,,,,,,REC,,,,,POS,,,,,,POS,,,, 7756,"but the empirical evaluations are not convincing. [empirical evaluations-NEG], [EMP-NEG]",empirical evaluations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7763,"Strengths: Simultaneous text and image generation is an interesting research topic that is relevant for the community.[null], [EMP-POS, IMP-POS]",null,,,,,,EMP,IMP,,,,,,,,,,POS,POS,,, 7764,"The paper is well written, the model is formulated with no errors (although it could use some more detail) and supported by illustrations (although there are some issues with the illustrations detailed below).[paper-POS, model-POS], [CLA-POS, EMP-POS]",paper,model,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 7765,"The model is evaluated on tasks that it was not trained on which indicate that this model learns generalizable latent representations.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7766,"Weaknesses: The paper gives the impression to be rushed, i.e. 
there are citations missing (page 3 and 6), the encoder model illustration is not as clear as it could be.[citations-NEG], [CMP-NEG, SUB-NEG]",citations,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 7767,"Especially the white boxes have no labels, the experiments are conducted only on one small-scale proof of concept dataset, several relevant references are missing, e.g. GAN, DCGAN, GAWWN, StackGAN.[experiments-NEG], [SUB-NEG, EMP-NEG]",experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 7768,"Visual Question answering is mentioned several times in the paper, however no evaluations are done in this task.[evaluations-NEG], [SUB-NEG]",evaluations,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7769,"Figure 2 is complex and confusing due to the lack of proper explanation in the text.[Figure-NEG, text-NEG], [PNF-NEG]",Figure,text,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 7770,"The reader has to find out the connections between the textual description of the model and the figure themselves due to no reference to particular aspects of the figure at all.[reference-NEG], [EMP-NEG]",reference,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7771,"In addition the notation of the modules in the figure is almost completely disjoint so that it is initially unclear which terms are used interchangeably.[notation-NEG], [PNF-NEG]",notation,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7772,"Details of the ""white components"" in Figure 2 are not mentioned at all.[Details-NEG, Figure-NEG], [SUB-NEG]",Details,Figure,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 7773,"E.g., what is the purpose of the fully connected layers, why do the CNNs split and what is the difference in the two blocks (i.e. what is the reason for the addition small CNN block in one of the two)[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7774,"The optimization procedure is unclear.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7775,"What is the exact loss for each step in the recurrence of the outputs (according to Figure 5)?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7776,"Or is only the final image and description optimized.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7777,"If so, how is the partial language description as a target handled since the description for a different entity in an image might be valid, but not the current target.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7779,"An analysis or explanation of the following would be desirable: How is the network trained on single descriptions able to generate multiple descriptions during evaluation.[analysis-NEU], [EMP-NEU]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7780,"How does thresholding mentioned in Figure 5 work?[Figure-NEU], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7782,"In Figure 5, k seems to be larger than the number of entities.[Figure-NEU], [CLA-NEU]",Figure,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 7783,"How is k chosen? Is it fixed or dynamic?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7784,"Even though the title claims that the model disentangles the latent space on an entity-level, it is not mentioned in the paper.[title-NEG], [EMP-NEG]",title,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7785,"Intuitively from Figure 5, the network generates black images (i.e. 
all values close to zero) whenever the attention is on no entity and, hence, when attention is on an entity the latent space represents only this entity and the image is generated only showing that particular entity.[Figure-NEG], [EMP-NEU]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7786,"However, confirmation of this intuition is needed since this is a central claim of the paper.[intuition-NEU], [EMP-NEU]",intuition,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7787,"As the main idea and the proposed model is simple and intuitive, the evaluation is quite important for this paper to be convincing.[proposed model-POS, evaluation-NEU], [EMP-POS]",proposed model,evaluation,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 7788,"Shapeworlds dataset seems to be an interesting proof-of-concept dataset[dataset-POS], [EMP-POS]",dataset,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7789,"however it suffers from the following weaknesses that prevent the experiments from being convincing especially as they are not supported with more realistic setups.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7790,"First, the visual data is composed of primitive shapes and colors in a black background.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7792,"Third, it is not used widely in the literature, therefore no benchmarks exist on this data.[benchmarks-NEG], [EMP-NEG]",benchmarks,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7793,"It is not easy to read the figures in the experimental section, no walkthrough of the results are provided.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7794,"For instance in Figure 4a, the task is described as ""showing the changes in the attribute latent variables"" which gives the impression that, e.g. for the first row the interpolation would be between a purple triangle to a purple rectangle however in the middle the intermediate shapes also are painted with a different color. It is not clear why the color in the middle changes.[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7795,"The evaluation criteria reported on Table 1 is not clear.[evaluation-NEG, Table-NEG], [EMP-NEG]",evaluation,Table,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7796,"How is the accuracy measured, e.g. with respect to the number of objects mentioned in the sentence, the accuracy of the attribute values, the deviation from the ground truth sentence (if so, what is the evaluation metric)? 
[accuracy-NEU], [EMP-NEU]",accuracy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7797,"No example sentences are provided for a qualitative comparisons.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7798,"In fact, it is not clear if the model generates full sentences or attribute phrases.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7799,"As a summary, this paper would benefit significantly with a more extensive overview of the existing relevant models, clarification on the model details mentioned above and a more through experimental evaluation with more datasets and clear explanation of the findings.[model-NEU, experimental evaluation-NEU, findings-NEU], [CMP-NEU, IMP-NEU, SUB-NEU]",model,experimental evaluation,findings,,,,CMP,IMP,SUB,,,NEU,NEU,NEU,,,,NEU,NEU,NEU,, 7804,"The paper benefits from such a relationship and derives an actor-critic algorithm.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7805,"Specifically, the paper only parametrizes the Q function, and computes the policy gradient using the relation between the policy and Q function (Appendix A.1).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7806,"Through a set of experiments, the paper shows the effectiveness of the method.[experiments-POS, method-POS], [EMP-POS]",experiments,method,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7807,"EVALUATION: I think exploring and understanding entropy-regularized RL algorithm is important.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7808,"It is also important to be able to benefit from off-policy data.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7809,"I also find the empirical results encouraging.[empirical results-POS], [EMP-POS]",empirical results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7810,"But I have some concerns about this paper: - The derivations of the paper are unclear.[derivations-NEG], [EMP-NEG]",derivations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7811,"- The relation with other recent work in entropy-regularized RL should be expanded.[recent work-NEG], [CMP-NEG]",recent work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7813,"- The algorithm that performs well is not the one that was actually derived.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7814,"* Unclear derivations: The derivations of Appendix A.1 is unclear.[derivations-NEG], [EMP-NEG]",derivations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7815,"It makes it difficult to verify the derivations.[derivations-NEG], [EMP-NEG]",derivations,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7816,"To begin with, what is the loss function of which (9) and (10) are its gradients?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7817,"To be more specific, the choices of hat{Q} in (15) and hat{V} in (19) are not clear.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7819,"But if it is the case, shouldn't we have a gradient of Q in (15) too?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7820,"(or show that it can be ignored?) 
It appears that hat{Q} and hat{V} are parameterized independently from Q (which is a function of theta).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7821,"Later in the paper they are estimated using a target network, but this is not specified in the derivations.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7822,"The main problem boils down to the fact that the paper does not start from a loss function and compute all the gradients in a systematic way.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7823,"Instead it starts from gradient terms, each of which seems to be from different papers, and then simplifies them.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7826,"In that paper we have Q_pi instead of hat{Q} though.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7827,"I suggest that the authors start from a loss function and clearly derive all necessary steps.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7828,"* Unclear relation with other papers: What part of the derivations of this work are novel?[work-NEU], [NOV-NEU, CMP-NEU]",work,,,,,,NOV,CMP,,,,NEU,,,,,,NEU,NEU,,, 7829,"Currently the novelty is not obvious.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 7831,"(very similar formulation is developed in Appendix B of https://arxiv.org/abs/1702.08165).[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 7833,"(in the form of a Bellman residual minimization algorithm, as opposed to this work which essentially uses a Fitted Q-Iteration algorithm as the critic).[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7834,"I think the paper could do a better job differentiating from those other papers.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7835,"* The claim that this paper is about learning from demonstration is a bit questionable.[claim-NEG], [EMP-NEG]",claim,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7836,"The paper essentially introduces a method to use off-policy data, which is of course important,[method-POS], [EMP-POS]",method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7837,"but does not cover the important scenario where we only have access to (state,action) pairs given by an expert.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7838,"Here it appears from the description of Algorithm 1 that the transitions in the demonstration data have the same semantic as the interaction data, i.e., (s,a,r,s').[description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7840,"* The paper mentions that to formalize the method as a policy gradient one, importance sampling should be used (the paragraph after (12)), but the performance of such a formulation is bad, as depicted in Figure 2.[performance-NEG, Figure-NEU], [EMP-NEG]",performance,Figure,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 7841,"As a result, Algorithm 1 does not use importance sampling.[Algorithm-NEU], [EMP-NEG]",Algorithm,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 7842,"This basically suggests that by ignoring the fact that the data is collected off-policy, and treating it as an on-policy data, the agent might perform better.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7843,"This is an interesting phenomenon and deservers further study, as currently doing the ""wrong"" things is better than doing the ""right"" thing.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 7844,"I think a good paper should investigate this fact more.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7846,"Quality The theoretical results presented in the paper appear to be 
correct.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7847,"However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair.[experimental evaluated-NEG], [EMP-NEG]",experimental evaluated,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7848,"Clarity The paper is mostly clear,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7849,"even though some parts deserve more discussion/clarification (algorithm, experimental evaluation).[parts-NEG], [SUB-NEG]",parts,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7850,"Originality The theoretical results are original, and the SGD approach is a priori original as well.[results-POS], [NOV-POS]",results,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7851,"Significance The relaxed dual formulation and OT/Monge maps convergence results are interesting and can of of interest for researchers in the area,[results-POS], [IMP-POS]",results,,,,,,IMP,,,,,POS,,,,,,POS,,,, 7852,"the other aspects of the paper are limited.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7853,"Pros: -Theoretical results on the convergence of OT/Monge maps -Regularized formulation compatible with SGD[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7854,"Cons -Experimental evaluation limited[evaluation-NEG], [SUB-NEG]",evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7855,"-The large scale aspect lacks of thorough analysis[aspect-NEG], [SUB-NEG]",aspect,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7856,"-The paper presents 2 contributions but at then end of the day, the development of each of them appears limited[contributions-NEG], [SUB-NEG]",contributions,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7857,"Comments: -The weak convergence results are interesting.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7858,"However, the fact that no convergence rate is given makes the result weak.[result-NEG], [SUB-NEG]",result,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7859,"In particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential.[examples-NEU], [SUB-NEU]",examples,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7860,"This can be coherent with the problem of Domain Adaptation that can be NP-hard even under the co-variate shift assumption (Ben-David&Urner, ALT2012).[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7861,"Then, I think that the claim of page 6 saying that Domain Adaptation can be performed early optimally has then to be rephrased.[page-NEG], [PNF-NEG]",page,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7862,"I think that results show that the approach is theoretically justified but optimality is not here yet.[results-NEG, approach-NEG], [EMP-NEG]",results,approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 7863,"Theorem 1 is only valid for entropy-based regularizations, what is the difficulty for having a similar result with L2 regularization?[Theorem-NEU], [EMP-NEU]",Theorem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7864,"-The experimental evaluation on the running time is limited to one particular problem.[experimental evaluation-NEG], [SUB-NEG]",experimental evaluation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7865,"If this subject is important, it would have been interesting to compare the approaches on other large scale problems and possibly with other implementations.[approaches-NEG], [CMP-NEG]",approaches,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7866,"It is also surprising that the efficiency the L2-regularized version is not evaluated.[efficiency-NEG], [SUB-NEG]",efficiency,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7867,"For a paper interesting in large scale aspects, the experimental 
evaluation is rather weak.[experimental evaluation-NEG], [EMP-NEG]",experimental evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7868,"The 2 methods compared in Fig 2 reach the same objective values at convergence, but is there any particular difference in the solutions found?[methods-NEU, Fig-NEU], [EMP-NEU]",methods,Fig,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7869,"-Algorithm 1 is presented without any discussion about complexity, rate of convergence.[Algorithm-NEG], [EMP-NEG]",Algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7870,"Could the authors discuss this aspect?[aspect-NEU], [EMP-NEU]",aspect,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7871,"The presentation of this algo is a bit short and could deserve more space (in the supplementary)[algo-NEG], [SUB-NEG]",algo,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7872,"-For the DA application, the considered datasets are classic but not really large scale, anyway this is a minor remark.[datasets-NEG], [SUB-NEG]",datasets,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7873,"The setup is not completely clear,[setup-NEG], [CLA-NEG]",setup,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 7874,"since the approach is interesting for out of sample data,[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7875,"so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map.[data-NEU], [EMP-NEU]",data,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7876,"This point is not very clear and we do not know how many source instances are used to compute the mapping - the mapping is incomplete on this point[point-NEG], [SUB-NEG, CLA-NEG]",point,,,,,,SUB,CLA,,,,NEG,,,,,,NEG,NEG,,, 7877,"while this is an interesting aspect of the paper: this justifies even more the large scale aspect is the algo need less examples during learning to perform similar or even better classification.[aspect-POS], [EMP-POS]",aspect,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7878,"Hyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test (for all methods), which is not fair since target label information will not be available from a practical standpoint.[experimental setup-NEG], [EMP-NEG]",experimental setup,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7880,"Experiments on generative optimal transport are interesting and probably generate more discussion/perspectives [Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7881,". 
-- After rebuttal -- Authors have answered to many of my comments, I think this is an interesting paper, I increase my score.[paper-POS, score-POS], [REC-POS]]",paper,score,,,,,REC,,,,,POS,POS,,,,,POS,,,, 7883,"While I can now clearly see the contributions of the paper, the minimal revisions in the paper do not make the contributions clear yet (in my opinion that should already be clear after having read the introduction).[contributions-NEU, revisions-NEG], [EMP-NEG, IMP-NEG]",contributions,revisions,,,,,EMP,IMP,,,,NEU,NEG,,,,,NEG,NEG,,, 7884,"The new section intuitive analysis is very nice.[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7885,"******************************* My problem with this paper that all the theoretical contributions / the new approach refer to 2 arXiv papers, what's then left is an application of that approach to learning form imperfect demonstrations.[theoretical contributions-NEU], [IMP-NEG, EMP-NEG]",theoretical contributions,,,,,,IMP,EMP,,,,NEU,,,,,,NEG,NEG,,, 7886,"Quality The approach seems sound[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7887,"but the paper does not provide many details on the underlying approach.[details-NEG, approach-NEU], [SUB-NEG]",details,approach,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 7888,"The application to learning from (partially adversarial) demonstrations is a cool idea but effectively is a very straightforward application based on the insight that the approach can handle truly off-policy samples.[idea-POS, approach-NEU], [EMP-POS]",idea,approach,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 7889,"The experiments are OK [experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7890,"but I would have liked a more thorough analysis.[analysis-NEU], [SUB-NEU, EMP-NEU]",analysis,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 7891,"Clarity The paper reads well,[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7892,"but it is not really clear what the claimed contribution is.[contribution-NEG], [CLA-NEG, IMP-NEU]",contribution,,,,,,CLA,IMP,,,,NEG,,,,,,NEG,NEU,,, 7893,"Originality The application seems original.[application-POS], [NOV-POS]",application,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7894,"Significance Having an RL approach that can benefit from truly off-policy samples is highly relevant.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 7895,"Pros and Cons + good results[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7896,"+ interesting idea of using the algorithm for RLfD[idea-POS, algorithm-NEU], [EMP-POS]",idea,algorithm,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 7897,"- weak experiments for an application paper[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7898,"- not clear what's new[null], [NOV-NEG, CLA-NEG]",null,,,,,,NOV,CLA,,,,,,,,,,NEG,NEG,,, 7902,"Experiments on TREC-QA and SNLI show modest improvement over the word-based structured attention baseline (Parikh et al., 2016).[Experiments-POS], [EMP-POS]",Experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7904,"Weaknesses: The paper is 8.5 pages long[paper-NEG], [PNF-NEG]",paper,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7905,". 
The method did not out-perform other very related structured attention methods (86.8, Kim et al., 2017, 86.9, Liu and Lapata, 2017)[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7906,"Aside from the time complexity from the inside-outside algorithm (as mentioned by the authors in conclusion), the comparison among all pairs of spans is O(n^4), which is more expensive.[comparison-NEG], [CMP-NEG]",comparison,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7908,"It would be nice to show, quantitatively, the agreement between the latent trees and gold/supervised syntax.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7909,"The paper claimed ""the model is able to recover tree structures that very closely mimic syntax"", but it's hard to draw this conclusion from the two examples in Figure 2[Figure-NEG], [EMP-NEG, SUB-NEG]",Figure,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 7915,"The inducing point approximation used here is very efficient since all GP functions depend on a scalar input (as any activation function!) and therefore by just placing the inducing points in a dense grid gives a fast and accurate representation/compression of all GPs in terms of the inducing function values (denoted by U in the paper).[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7916,"Of course then inference involves approximating the finite posterior over inducing function values U and the paper make use of the standard Gaussian approximations.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7917,"In general I like the idea and I believe that it can lead to a very useful model.[model-POS], [IMP-POS, EMP-POS]",model,,,,,,IMP,EMP,,,,POS,,,,,,POS,POS,,, 7918,"However, I have found the current paper quite preliminary and incomplete.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7919,"The authors need to address the following: First (very important): You need to show experimentally how your method compares against regular neural nets (with specific fixed forms for their activation functions such relus etc).[method-NEU], [SUB-NEU, CMP-NEU]",method,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 7921,"In those experiments, our model shows to be significantly less prone to overfitting than a traditional feed-forward network of same size, despite having more parameters. > Well all this needs to be included in the same paper. [experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 7922,"Secondly: Discuss the connection with Deep GPs (Damianou and Lawrence 2013).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 7923,"Your method seems to be connected with Deep GPs although there appear to be important differences as well. E.g. you place GPs on the scalar activation functions in an otherwise heavily parametrized neural network (having interconnection weights between layers) while deep GPs model the full hidden layer mapping as a single GP (which does not require interconnection weights).[method-NEU], [CMP-NEU]",method,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 7924,"Thirdly: You need to better explain the propagation of uncertainly in section 3.2.2 and the central limit of distribution in section 3.2.1.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7925,"This is the technical part of your paper which is a non-standard approximation.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7926,"I will suggest to give a better intuition of the whole idea and move a lot of mathematical details to the appendix. 
[intuition-NEU, appendix-NEU], [EMP-NEU, SUB-NEU, PNF-NEU]",intuition,appendix,,,,,EMP,SUB,PNF,,,NEU,NEU,,,,,NEU,NEU,NEU,, 7930,"The strength of this paper is that it both gives a more systematic framework for and builds on existing ideas (character-based models, using dictionary definitions) to implement them as part of a model trained on the end task.[framework-POS, model-POS], [EMP-POS]",framework,model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 7931,"The contribution is clear[contribution-POS], [CLA-POS]",contribution,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7933,"In general, for the scope of the paper, it seems like what is here could fairly easily have been made into a short paper for other conferences that have that category.[paper-POS], [EMP-NEU]",paper,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 7934,"The basic method easily fits within 3 pages, and while the presentation of the experiments would need to be much briefer, this seems quite possible.[basic method-NEG, experiments-NEG, presentation-NEG], [EMP-NEG]",basic method,experiments,presentation,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 7935,"More things could have been considered.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7936,"Some appear in the paper, and there are some fairly natural other ones such as mining some use contexts of a word (such as just from Google snippets) rather than only using textual definitions from wordnet.[paper-NEG], [EMP-NEU]",paper,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 7937,"The contributions are showing that existing work using character-level models and definitions can be improved by optimizing representation learning in the context of the final task, and the idea of adding a learned linear transformation matrix inside the mean pooling model (p.3).[contributions-NEU, idea-NEU], [CMP-NEU]",contributions,idea,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 7938,"However, it is not made very clear why this matrix is needed or what the qualitative effect of its addition is.[effect-NEG], [EMP-NEG]",effect,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7939,"The paper is clearly written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 7941,"While it in no way covers the same ground as this paper it is relevant as follows: This paper assumes a baseline that is also described in that paper of using a fixed vocab and mapping other words to UNK.[baseline-NEU, paper-POS], [EMP-POS]",baseline,paper,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 7942,"However, they point out that at least for matching tasks like QA and NLI that one can do better by assigning random vectors on the fly to unknown words.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7943,"That method could also be considered as a possible approach to compare against here.[method-NEU, approach-NEU], [EMP-NEU]",method,approach,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 7944,"Other comments: - The paper suggests a couple of times including at the end of the 2nd Intro paragraph that you can't really expect spelling models to perform well in representing the semantics of arbitrary words (which are not morphological derivations, etc.).[paper-NEU, models-NEU], [EMP-POS]",paper,models,,,,,EMP,,,,,NEU,NEU,,,,,POS,,,, 7945,"While this argument has intuitive appeal, it seems to fly in the face of the fact that actually spelling models, including in this paper, seem to do surprisingly well at learning such arbitrary semantics.[paper-NEU, models-POS], [EMP-POS]",paper,models,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 7946,"- p.2: You use pretrained GloVe vectors that you do not update.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 7947,"My 
impression is that people have had mixed results, sometimes better, sometimes worse with updating pretrained vectors or not. Did you try it both ways? - fn. 1: Perhaps slightly exaggerates the point being made, since people usually also get good results with the GloVe or word2vec model trained on only 6 billion words u2013 2 orders of magnitude less data.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7948,"- p.4. When no definition is available, is making e_d(w) a zero vector worse than or about the same as using a trained UNK vector?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 7949,"- Table 1: The baseline seems reasonable (near enough to the quality of the original Salesforce model from 2016 (66 F1)[Table-POS, baseline-POS], [CMP-POS]",Table,baseline,,,,,CMP,,,,,POS,POS,,,,,POS,,,, 7950,"but well below current best single models of around 76-78 F1.[models-NEG], [CMP-NEG]",models,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 7951,"The difference between D1 and D3 does well illustrate that better definition learning is done with backprop from end objective.[objective-POS], [CMP-POS]",objective,,,,,,CMP,,,,,POS,,,,,,POS,,,, 7952,"This model shows the rather strong performance of spelling models u2013 at least on this task u2013 which he again benefit from training in the context of the end objective.[models-POS, objective-NEU], [EMP-NEU]",models,objective,,,,,EMP,,,,,POS,NEU,,,,,NEU,,,, 7953,"- Fig 2: It's weird that only the +dict (left) model learns to connect In and where.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7954,"The point made in the text between Where and overseas is perfectly reasonable, but it is a mystery why the base model on the right doesn't learn to associate the common words where and in both commonly expressing a location.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7955,"- Table 2: These results are interestingly different.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7956,"Dict is much more useful than spelling here.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7957,"I guess that is because of the nature of NLI, but it isn't 100% clear why NLI benefits so much more than QA from definitional knowledge.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7958,"- p.7: I was slightly surprised by how small vocabs (3k and 5k words) are said to be optimal for NLI (and similar remarks hold for SQuAD).[vocabs-NEG], [SUB-NEG]",vocabs,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7959,"My impression is that most papers on NLI use much larger vocabs, no?[papers-NEG, vocabs-NEG], [CMP-NEG]",papers,vocabs,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 7960,"- Fig 3: This could really be drawn considerably better: make the dots bigger and their colors more distinct.[Fig-NEG, dots-NEG, colors-NEG], [PNF-NEG]",Fig,dots,colors,,,,PNF,,,,,NEG,NEG,NEG,,,,NEG,,,, 7961,"- Table 3: The differences here are quite small and perhaps the least compelling, but the same trends hold. [Table-NEG], [PNF-NEG]]",Table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7964,"Positive aspects: + Emphasis in model interpretability and its connection to psychological findings in emotions[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 7965,"+ The idea of using Tumblr data seems interesting, allowing to work with a large set of emotion categories, instead of considering just the binary task positive vs. 
negative.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7966,"Weaknesses: - A deeper analysis of previous work on the combination of image and text for sentiment analysis (both datasets and methods) and its relation with the presented work is necessary.[analysis-NEU, previous work-NEU], [SUB-NEG]",analysis,previous work,,,,,SUB,,,,,NEU,NEU,,,,,NEG,,,, 7967,"- The proposed method is not compared with other methods that combine text and image for sentiment analysis.[proposed method-NEG], [SUB-NEG]",proposed method,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 7968,"- The study is limited to just one dataset.[study-NEG, dataset-NEU], [SUB-NEG]",study,dataset,,,,,SUB,,,,,NEG,NEU,,,,,NEG,,,, 7969,"The paper presents interesting ideas and findings in an important challenging area.[ideas-POS], [EMP-POS]",ideas,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7970,"The main novelties of the paper are: (1) the use of Tumblr data,[novelties-POS], [NOV-POS]",novelties,,,,,,NOV,,,,,POS,,,,,,POS,,,, 7971,"(2) the proposed CNN architecture, combining images and text (using word embedding.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 7973,"Some related works are mentioned in the paper, but those are spread in different sections.[related works-NEU], [PNF-NEG]",related works,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 7974,"It's hard to get a clear overview of the previous research: datasets, methods and contextualization of the proposed approach in relation with previous work.[datasets-NEU, method-NEU, proposed approach-NEU], [CMP-NEG]",datasets,method,proposed approach,,,,CMP,,,,,NEU,NEU,NEU,,,,NEG,,,, 7976,"Also, at some point authors should compare their proposal with previous work.[proposal-NEU, previous work-NEU], [CMP-NEG]",proposal,previous work,,,,,CMP,,,,,NEU,NEU,,,,,NEG,,,, 7977,"More comments: - Some figures could be more complete: to see more examples in Fig 1, 2, 3 would help to understand better the dataset and the challenges.[figures-NEG], [PNF-NEG]",figures,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7978,"- In table 4, for example, it would be nice to see the performance on the different emotion categories.[table-NEU, performance-NEU], [PNF-NEU]",table,performance,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 7979,"- It would be interesting to see qualitative visual results on recognitions.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7981,"but I think authors should improve the aspects I mention for its publication. [null], [REC-NEU]",null,,,,,,REC,,,,,,,,,,,NEU,,,, 7984,"The author give strong and convincing justifications based on the Lagrangian dual of the Bellman equation (although not new, introducing this as the justification for the architecture design is plausible).[justifications-POS], [CMP-POS, EMP-POS]",justifications,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 7985,"There are several drawbacks of the current format of the paper: 1. The algorithm is vague.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 7986,"Alg 1 line 5: 'closed form': there is no closed form in Eq(14).[Alg-NEU, line-NEU, Eq-NEG], [EMP-NEG]",Alg,line,Eq,,,,EMP,,,,,NEU,NEU,NEG,,,,NEG,,,, 7988,"line 6: Decay O(1/t^beta). This is indeed vague albeit easy to understand.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 7989,"The algorithm requires that every step is crystal clear.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7990,"2. 
Also, there are several format error which may be due to compiling, e.g., line 2 of Abstract,'Dual-AC ' (an extra space).[format error-NEG], [PNF-NEG]",format error,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7991,"There are many format errors like this throughout the paper.[format errors-NEG], [PNF-NEG]",format errors,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 7992,"The author is suggested to do a careful format check.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 7993,"3. The author is suggested to explain more about the necessity of introducing path regularization and SDA. The current justification is reasonable but too brief.[justification-NEU], [EMP-NEU, SUB-NEU]",justification,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 7994,"4. The experimental part is ok to me, but not very impressive.[experimental part-NEU], [EMP-NEU]",experimental part,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 7998,"The resulting model is general-purpose and experiments demonstrate efficacy on few-shot image classification and a range of reinforcement learning tasks.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 7999,"Strengths - The proposed model is a generic meta-learning useful for both classification and reinforcement learning.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8000,"- A wide range of experiments are conducted to demonstrate performance of the proposed method.[experiments-POS, performance-NEU, method-NEU], [SUB-POS]",experiments,performance,method,,,,SUB,,,,,POS,NEU,NEU,,,,POS,,,, 8001,"Weaknesses - Design choices made for the reinforcement learning setup (e.g. temporal convolutions) are not necessarily applicable to few-shot classification.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8002,"- Discussion of results relative to baselines is somewhat lacking.[Discussion-NEG, results-NEU, baselines-NEU], [CMP-NEG, SUB-NEG]",Discussion,results,baselines,,,,CMP,SUB,,,,NEG,NEU,NEU,,,,NEG,NEG,,, 8003,"The proposed approach is novel to my knowledge and overcomes specificity of previous approaches while remaining efficient.[approach-POS], [NOV-POS]",approach,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8004,"The depth of the TC block is determined by the sequence length.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8005,"In few-shot classification, the sequence length can be known a prior.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8006,"How is the sequence length determined for reinforcement learning tasks?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8007,"In addition, what is done at test-time if the sequence length differs from the sequence length at training time?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8008,"The causality assumption does not seem to apply to the few-shot classification case.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8009,"Have the authors considered lifting this restriction for classification and if so does performance improve?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8010,"The Prototypical Networks results in Tables 1 and 2 do not appear to match the performance reported in Snell et al. 
(2017).[results-NEG, Tables-NEG, performance-NEG], [EMP-NEG]",results,Tables,performance,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 8011,"The paper is well-written overall.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8012,"Some additional discussion of the results would be appreciated (for example, explaining why the proposed method achieves similar performance to the LSTM/OPSRL baselines).[discussion-NEU, results-NEU], [SUB-NEU]",discussion,results,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 8013,"I am not following the assertion in 5.2.3 that MAML adaption curves can be seen as an upper bound on the performance of gradient-based methods.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8014,"I am wondering if the authors can clarify this point.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8015,"Overall, the proposed approach is novel and achieves good results on a range of tasks.[approach-POS, results-POS], [EMP-POS, NOV-POS]",approach,results,,,,,EMP,NOV,,,,POS,POS,,,,,POS,POS,,, 8016,"EDIT: I have read the author's comments and am satisfied with their response. I believe the paper is suitable for publication in ICLR.[paper-POS], [APR-POS]",paper,,,,,,APR,,,,,POS,,,,,,POS,,,, 8021,"--- the new algorithm is 10 times faster and requires only 1/100 resources, and the performance gets worse only slightly.[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8022,"Overall, the paper is well-written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8023,"Although the methodology within the paper appears to be incremental over previous NAS method, the efficiency got improved quite significantly.[methodology-POS], [EMP-POS]",methodology,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8029,"This test is expected to experimentally support the previous theoretical analysis by Arora et al. (2017).[test-NEU], [EMP-NEU]",test,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8030,"The further theoretical analysis is also performed showing that for encoder-decoder GAN architectures the distributions with low support can be very close to the optimum of the specific (BiGAN) objective.[theoretical analysis-NEU], [EMP-NEU]",theoretical analysis,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8033,"So, the general claim is supported.[claim-POS], [EMP-POS]",claim,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8036,"That's why I would recommend to reevaluate the results visually, which may lead to some change in the number of near duplicates and consequently the final support estimates.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8037,"To sum up, I think that the general idea looks very natural and the results are supportive.[idea-POS, results-POS], [EMP-POS]",idea,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 8038,"On theoretical side, the results seem fair (though I didn't check the proofs) and, being partly based on the previous results of Arora et al. 
(2017), clearly make a step further.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8043,"These tasks are clearly different, as nicely shown by the authors' example of do(mustache 1) versus given mustache 1 (a sample from the latter distribution contains only men).[tasks-NEU], [EMP-POS]",tasks,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 8045,"The example images look convincing to me.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8046,"I like the idea of this paper.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8047,"IMO, it is a very nice, clean, and useful approach of combining causality and the expressive power of neural networks.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8048,"The paper has the potential of conveying the message of causality into the ICLR community and thereby trigger other ideas in that area.[paper-POS], [APR-POS, IMP-POS]",paper,,,,,,APR,IMP,,,,POS,,,,,,POS,POS,,, 8049,"For me, it is not easy to judge the novelty of the approach, but the authors list related works, none of which seems to solve the same task.[novelty-POS, related works-NEU], [NOV-NEU, CMP-POS]",novelty,related works,,,,,NOV,CMP,,,,POS,NEU,,,,,NEU,POS,,, 8050,"The presentation of the paper, however, should be improved significantly before publication.[presentation-POS], [PNF-NEU]",presentation,,,,,,PNF,,,,,POS,,,,,,NEU,,,, 8051,"(In fact, because of the presentation of the paper, I was hesitating whether I should suggest acceptance.)[presentation-NEG], [PNF-NEU, REC-NEU]",presentation,,,,,,PNF,REC,,,,NEG,,,,,,NEU,NEU,,, 8053,"There is a risk that in its current state the paper will not generate much impact, and that would be a pity.[paper-NEG], [IMP-NEG]",paper,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8054,"I would therefore like to ask the authors to put a lot of effort into improving the presentation of the paper.[presentation-NEG], [PNF-NEU]",presentation,,,,,,PNF,,,,,NEG,,,,,,NEU,,,, 8055,"- I believe that I understand the authors' intention of the caption of Fig. 1, but samples outside the dataset is a misleading formulation.[Fig-NEU], [EMP-NEG]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8056,"Any reasonable model does more than just reproducing the data points.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8057,"I find the argumentation the authors give in Figure 6 much sharper.[Figure-POS], [EMP-POS]",Figure,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8058,"Even better: add the expression P(male 1 | mustache 1) 1.[null], [PNF-POS]",null,,,,,,PNF,,,,,,,,,,,POS,,,, 8059,"Then, the difference is crystal clear.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8060,"- The difference between Figures 1, 4, and 6 could be clarified. [Figures-NEU], [EMP-NEU]",Figures,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8061,"- The list of prior work on learning causal graphs seems a bit random.[prior work-NEG], [CMP-NEG]",prior work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8064,"The authors seem to switch between Gender and Male being random variables.Make this consistent, please. [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 8065,"- There are many typos and comma mistakes.[typos-NEG], [CLA-NEG]",typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8066,"- I would introduce the do-notation much earlier.[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8067,"The paragraph on p. 
2 is now written without do-notation (intervening Mustache 1 would not change the distribution).[paragraph-NEG], [PNF-NEG]",paragraph,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8068,"But this way, the statements are at least very confusing (which one is the distribution?).[statements-NEG], [CLA-NEG]",statements,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8070,"To me, it seems that this is a causal model with a neural network (NN) modeling the functions that appear in the SCM.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8074,"- Fig 1: which model is used to generate the conditional sample? [Fig-NEU], [EMP-NEU]",Fig,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8075,"- The notation changes between E and N and Z for the noises.[notation-NEU], [EMP-NEU]",notation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8076,"I believe that N is supposed to be the noise in the SCM, but then maybe it should not be called E at the beginning.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8077,"- I believe Prop 1 (as it is stated) is wrong.[Prop-NEG], [EMP-NEG]",Prop,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8080,"Also, I believe the Z should be a vector, not a set. [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8081,"- Below eq. (1), I am not sure what the V in P_V refers to.[eq-NEU], [EMP-NEU]",eq,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8082,"- The concept of data probability density function seems weird to me.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8083,"Either it is referring to the fitted model, then it's a bad name, or it's an empirical distribution, then there is no pdf, but a pmf.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8084,"- Many subscripts are used without explanation.[subscripts-NEG], [PNF-NEG]",subscripts,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8085,"r -> real? g -> generating? G -> generating? Sometimes, no subscripts are used (e.g., Fig 4 or figures in Sec. 8.13)[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 8086,"- I would get rid of Theorem 1 and explain it in words for the following reasons.[Theorem-NEG], [EMP-NEG]",Theorem,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8087,"(1) What is an informal theorem?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8088,"(2) It refers to equations appearing much later.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8089,"(3) It is stated again later as Theorem 2.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8090,"- Also: the name P_g does not appear anywhere else in the theorem, I think.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8092,"The main point is that the intervention distributions are correct (this fact seems to be there, but is hidden in the CIGN notation in the corollary).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8093,"- Re. the formulation in Thm 2: is it clear that there is a unique global optimum (my intuition would say there could be several), thus: better write _a_ global minimum?[Thm-NEU], [EMP-NEU]",Thm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8094,"- Fig. 
3 was not very clear to me.[Fig-NEU], [CLA-NEG]",Fig,,,,,,CLA,,,,,NEU,,,,,,NEG,,,, 8095,"I suggest to put more information into its caption.[information-NEU], [SUB-NEU]",information,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8096,"- In particular, why is the dataset not used for the causal controller?[dataset-NEU], [EMP-NEU]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8097,"I thought, that it should model the joint (empirical) distribution over the labels, and this is part of the dataset.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8099,"- IMO, the structure of the paper can be improved.[structure-NEG, paper-NEU], [PNF-NEG]",structure,paper,,,,,PNF,,,,,NEG,NEU,,,,,NEG,,,, 8102,"An alternative could be: Sec 1: Introduction Sec 1.1: Related Work Sec 2: Causal Models Sec 2.1: Causal Models using Generative Models (old: CIGM) Sec 3: Causal GANs Sec 3.1: Architecture (including controller) Sec 3.2: loss functions ... Sec 4: Empricial Results (old: Sec. 6: Results) - Causal Graph 1 is not a proper reference (it's Fig 23 I guess).[Sec-NEU], [PNF-NEU]",Sec,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8103,"Also, it is quite important for the paper, I think it should be in the main part.[paper-NEU, main part-NEU], [PNF-NEU]",paper,main part,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 8104,"- There are different references to the Appendix, Suppl. Material, or Sec. 8 -- please be consistent (and try to avoid ambiguity by being more specific -- the appendix contains ~20 pages).[references-NEU], [PNF-NEG]",references,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 8107,"- proposition from Goodfellow -> please be more precise - What is Fig 8 used for?[proposition-NEU, Fig-NEU], [EMP-NEU, PNF-NEU]",proposition,Fig,,,,,EMP,PNF,,,,NEU,NEU,,,,,NEU,NEU,,, 8108,"Is it not sufficient to have and discuss Fig 23?[Fig-NEU], [SUB-NEU]",Fig,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8109,"- IMO, Section 5.3. should be rewritten (also, maybe include another reference for BEGAN).[Section-NEU], [PNF-NEU]",Section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8110,"- There is a reference to Lemma 15. However, I have not found that lemma.[Lemma-NEU], [EMP-NEG]",Lemma,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8111,"- I think it's quite interesting that the framework seems to also allow answering counterfactual questions for realizations that have been sampled from the model, see Fig 16.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8112,"This is the case since for the generated realizations, the noise values are known.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8113,"The authors may think about including a comment on that issue.[issue-NEU], [EMP-NEU]",issue,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8114,"- Since this paper's main proposal is a methodological one, I would make the publication conditional on the fact that code is released. 
[main proposal-NEU], [REC-NEU]",main proposal,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 8116,"Quality: The work has too many gaps for the reader to fill in.[work-NEG], [IMP-NEG]",work,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8118,"I am not sure how this is achieved in this work.[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8119,"The matrix is not isomorphic invariant and the different clusters don't share a common model.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8120,"Even implicit models should be trained with some way to leverage graph isomorphisms and pattern similarities between clusters.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8121,"How can such a limited technique be generalizing?[technique-NEU], [EMP-NEU]",technique,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8122,"There is no metric in the results showing how the model generalizes, it may be just overfitting the data.[results-NEU], [EMP-NEG]",results,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8123,"n Clarity: The paper organization needs work; there are also some missing pieces to put the NN training together.[paper-NEG], [CLA-NEG, SUB-NEG, PNF-NEG]",paper,,,,,,CLA,SUB,PNF,,,NEG,,,,,,NEG,NEG,NEG,, 8124,"It is only in Section 2.3 that the nature of G_i^prime becomes clear,[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8125,"although it is used in Section 2.2. Equation (3) is rather vague for a mathematical equation.[Section-NEU], [EMP-NEG]",Section,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8126,"From what I understood from the text, equation (3) creates a binary matrix from the softmax output using an indicator function.[equation-NEU], [EMP-NEU]",equation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8127,"If the output is binary, how can the gradients backpropagate? Is it backpropagating with a trick like the Gumbel-Softmax trick of Jang, Gu, and Poole 2017 or Bengio's path derivative estimator?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8128,"This is a key point not discussed in the manuscript.[manuscript-NEG], [SUB-NEG]",manuscript,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8129,"And if I misunderstood the sentence ""turn re_G into a binary matrix"" and the values are continuous, wouldn't the discriminator have an easy time distinguishing the generated data from the real data.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8130,"And wouldn't the generator start working towards vanishing gradients in its quest to saturate the re_G output?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8131,"Originality: The work proposes an interesting approach: first cluster the network, then learning distinct GANs over each cluster.[work-POS, approach-POS], [EMP-POS, NOV-POS]",work,approach,,,,,EMP,NOV,,,,POS,POS,,,,,POS,POS,,, 8133,"There is no contribution in the GAN / neural network aspect.[contribution-NEG], [IMP-NEG]",contribution,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8134,"It is also unclear whether the model generalizes.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8135,"I don't think this is a good fit for ICLR.[null], [APR-NEG]",null,,,,,,APR,,,,,,,,,,,NEG,,,, 8136,"Significance: Generating graphs is an important task in in relational learning tasks, drug discovery, and in learning to generate new relationships from knowledge bases.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 8137,"The work itself, however, falls short of the goal.[work-NEG], [IMP-NEG]",work,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8138,"At best the generator seems to be working but I fear it is overfitting.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8139,"The contribution for ICLR is rather 
minimal, unfortunately.[contribution-NEG], [APR-NEG]",contribution,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 8145,"The assumption here is that labels (aka outputs) are easily available for all possible inputs, but we don't want to give a constraint solver all the input-output examples, because it will slow down the solver's execution.[assumption-NEU], [EMP-NEU]",assumption,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8146,"The main baseline technique CEGIS (counterexample-guided inductive synthesis) addresses this problem by starting with a small set of examples, solving a constraint problem to get a hypothesis program, then looking for counterexamples where the hypothesis program is incorrect.[baseline technique-NEU], [EMP-NEU]",baseline technique,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8147,"This paper instead proposes to learn a surrogate function for choosing which examples to select.[surrogate function-NEU], [NOV-NEU, EMP-NEU]",surrogate function,,,,,,NOV,EMP,,,,NEU,,,,,,NEU,NEU,,, 8152,"Results show that the approach is a bit faster than CEGIS in a synthetic drawing domain.[Results-POS], [EMP-NEU]",Results,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 8154,"There is a start at an interesting idea here, and I appreciate the thorough treatment of the background, including CEGIS and submodularity as a motivation for doing greedy active learning, although I'd also appreciate a discussion of relationships between this approach and what is done in the active learning literature.[idea-POS, background-POS], [EMP-POS, NOV-POS, CMP-POS]",idea,background,,,,,EMP,NOV,CMP,,,POS,POS,,,,,POS,POS,POS,, 8155,"Once getting into the details of the proposed approach, the quality takes a downturn, unfortunately.[approach-NEG, quality-NEG], [EMP-NEG]",approach,quality,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8156,"Main issues: - It's not generally scalable to build a neural network whose size scales with the number of possible inputs.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8157,"I can't see how this approach would be tractable in more standard program synthesis domains where inputs might be lists of arrays or strings, for example.[approach-NEG], [EMP-NEG, IMP-NEG]",approach,,,,,,EMP,IMP,,,,NEG,,,,,,NEG,NEG,,, 8158,"It seems that this approach only works due to the peculiarities of the formulation of the only task that is considered, in which the program maps a pixel location in 32x32 images to a binary value.[approach-NEG], [EMP-NEU]",approach,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 8159,"- It's odd to write we do not suggest a specific neural network architecture for the middle layers, one should seelect whichever architecture that is appropriate for the domain at hand.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8160,"Not only is it impossible to reproduce a paper without any architectural details, but the result is then that Fig 3 essentially says inputs -> magic -> outputs.[architectural details-NEG, Fig-NEG], [SUB-NEG, PNF-NEG]",architectural details,Fig,,,,,SUB,PNF,,,,NEG,NEG,,,,,NEG,NEG,,, 8161,"Given that I don't even think the representation of inputs and outputs is practical in general, I don't see what the contribution is here.[contribution-NEG], [EMP-NEU, IMP-NEG]",contribution,,,,,,EMP,IMP,,,,NEG,,,,,,NEU,NEG,,, 8162,"- This paper is poor in the reproducibility category.[paper-NEG, reproducibility-NEG], [IMP-NEG]",paper,reproducibility,,,,,IMP,,,,,NEG,NEG,,,,,NEG,,,, 8163,"The architecture is never described, it is light on details of the training objective, it's not entirely clear what the DSL used in the experiments is (is Figure 1 the DSL used 
in experiments), and it's not totally clear how the random images were generated (I assume values for the holes in Figure 1 were sampled from some distribution, and then the program was executed to generate the data?).[architecture-NEG], [SUB-NEG, EMP-NEG]",architecture,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 8164,"- Experiments are only presented in one domain, and it has some peculiarities relative to more standard program synthesis tasks (e.g., it's tractable to enumerate all possible inputs).[Experiments-NEG], [SUB-NEG, EMP-NEG]",Experiments,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 8165,"It'd be stronger if the approach could also be demonstrated in another domain.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8166,"- Technical point: it's not clear to me that the training procedure as described is consistent with the desired objective in sec 3.3.[procedure-NEG, sec-NEG], [EMP-NEG]",procedure,sec,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8167,"Question for the authors: in the limit of infinite training data and model capacity, will the neural network training lead to a model that will reproduce the probabilities in 3.3?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8168,"Typos: - The paper needs a cleanup pass for grammar, typos, and remnants like Figure blah shows our neural network architecture on page 5.[paper-NEG, grammar-NEG, page-NEU], [CLA-NEG, PNF-NEG]",paper,grammar,page,,,,CLA,PNF,,,,NEG,NEG,NEU,,,,NEG,NEG,,, 8169,"Overall: There's the start of an interesting idea here,[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8170,"but I don't think the quality is high enough to warrant publication at this time. [quality-NEG], [REC-NEG]",quality,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 8177,"While the idea is interesting and might be a good alternative to standard CNNs,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8179,"It unfortunately only experiments with CCNN architectures with a small number (eg 3) layers.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 8180,"They do well on MNIST, but MNIST performance is hardly informative as many supervised techniques achieve near perfect results.[results-NEU], [EMP-POS]",results,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 8181,"The CIFAR-10, STL-10, and SVHN results are disappointing.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8182,"CCNNs do not outperform the prior CNN results listed in Table 2,3,4.[results-NEU], [EMP-NEG]",results,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8183,"Moreover, these tables do not even cite more recent higher-performing CNNs.[tables-NEG], [SUB-NEG, CMP-NEG]",tables,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 8184,"See results table in (*) for CIFAR-10 and SVHN results on recent ResNet and DenseNet CNN designs which far outperform the methods listed in this paper.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8185,"The problem appears to be that CCNNs are not tested in a regime competitive with the state-of-the-art CNNs on the datasets used.Why not?[problem-NEG], [CMP-NEG, EMP-NEG]",problem,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 8187,"I would like to see results for CCNNs with many layers (eg 16+ layers) rather than just 3 layers.[results-NEU], [SUB-NEU, EMP-NEU]",results,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 8188,"Do such CCNNs achieve performance compatible with ResNet/DenseNet on CIFAR or SVHN?[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8189,"Given that CIFAR and SVHN are relatively small datasets, training and testing larger networks on them should not be computationally 
prohibitive.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 8190,"In addition, for such experiments, a clear report of parameters and FLOPs for each network should be included in the results table.[experiments-NEU, results table-NEU], [SUB-NEU]",experiments,results table,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 8191,"This would assist in understanding tradeoffs in the design space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8192,"Additional questions: What is the receptive field of the CCNNs vs those of the standard CNNs to which they are compared?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8193,"If the CCNNs have effectively larger receptive field, does this create a cost in FLOPs compared to standard CNNs?[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8194,"For CCNNs, why does the CCAE initialization appear to be essential to achieving high performance on CIFAR-10 and SVHN? [performance-NEU], [CMP-NEU]",performance,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8195,"Standard CNNs, trained on supervised image classification tasks do not appear to be dependent on initialization schemes that do unsupervised pre-training.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8196,"Such dependence for CCNNs appears to be a weakness in comparison.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8200,"The idea is simple and it seems to work for the presented examples.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8201,"However, they talk about gradient descent using this extra term, but I'd like to see the derivatives of the proposed term depending on the parameters of the model (and this depends on the model!).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8202,"On the other hand, given the expression of the proposed regulatization, it seems to lead to non-convex optimization problems which are hard to solve. Any comment on that?[expression-NEU], [EMP-NEU]",expression,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8203,". Moreover, its results are not quantitatively compared to other Non-Linear generalizations of PCA/ICA designed for similar goals (e.g. those cited in the related work section or others which have been proved to be consistent non-linear generalizations of PCA such as: Principal Polynomial Analysis, Dimensionality Reduction via Regression that follow the family introduced in the book of Jolliffe, Principal Component Analysis).[results-NEG], [CMP-NEG]",results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8204,"Minor points: Fig.1 conveys not that much information.[Fig-NEG], [SUB-NEG]",Fig,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8208,"The paper shows an interesting result that the distilled low precision network actually performs better than high precision network.[result-POS], [CMP-POS]",result,,,,,,CMP,,,,,POS,,,,,,POS,,,, 8210,"but the contribution seems quite limited.[contribution-NEG], [SUB-NEG]",contribution,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8211,"Pros: 1. The paper is well written and easy to read. 2.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8212,"The paper reported some interesting result such as that the distilled low precision network actually performs better than high precision network, and that training jointly outperforms the traditional distillation method (fixing the teacher network) marginally.[result-POS, method-POS], [SUB-POS, CMP-POS, EMP-POS]",result,method,,,,,SUB,CMP,EMP,,,POS,POS,,,,,POS,POS,POS,, 8213,"Cons: 1. The name Apprentice seems a bit confusing with apprenticeship learning.[name-NEG], [PNF-NEG, CLA-NEG]",name,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 8214,"2. 
The experiments might be further improved by providing a systematic study about the effect of precisions in this work (e.g., producing more samples of precisions on activations and weights).[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8215,"3. It is unclear how the proposed method outperforms other methods based on fine-tuning.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8216,"It is also quite possible that after fine-tuning, the compressed model usually performs quite similarly to the original model.[model-NEU], [CMP-NEG, EMP-NEU]",model,,,,,,CMP,EMP,,,,NEU,,,,,,NEG,NEU,,, 8217,"The paper is well motivated and written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8219,"1. As the regularization constant increases, the performance first increases and then falls down -- this specific aspect is well known for constrained optimization problems.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8220,"Further, the sudden drop in performance also follows from the vanishing gradients problem in deep networks.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8221,"The description for ReLUs in section 2.2 follows from these two arguments directly, hence not novel.[description-NEG, section-NEU], [NOV-NEG]",description,section,,,,,NOV,,,,,NEG,NEU,,,,,NEG,,,, 8222,"Several of the key aspects not addressed here are: 1a. Is the time-delayed regularization equivalent to reducing the value (and thereby bringing it back to the 'good' regime before the cliff in the example plots)? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8223,"1b. Why should we keep increasing the regularization constant beyond a limit?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8224,"Is this for compressing the networks (for which there are alternate procedures), or anything else?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8225,"In other words, for a non-convex problem (about whose landscape we know barely anything), if there are regimes of regularizers that work well (see point 2) -- why should we ask for stronger regularizers?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8226,"Is there any optimization-related motivation here (beyond the single argument that networks are overparameterized)? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8227,"2. 
The proposed experiments are not very conclusive.[experiments-NEG], [EMP-NEG, IMP-NEG]",experiments,,,,,,EMP,IMP,,,,NEG,,,,,,NEG,NEG,,, 8228,"Firstly, the authors need to test with modern state-of-the-art architectures including inception and residual networks.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 8229,"Secondly, more datasets including imagenet needs to be tested.[datasets-NEU], [SUB-NEU]",datasets,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8230,"Unless these two are done, we cannot assertively say that the proposal seems to do interesting things.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 8231,"Thirdly, it is not clear what Figure 5 means in terms of goodness of learning.[Figure-NEG], [CLA-NEG]",Figure,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8232,"And lastly, although confidence intervals are reported for Figures 3,4 and Table 2, statistical tests needs to be performed to report p-values (so as to check if one model significantly beats the other).[Figures-NEU, Table-NEU], [EMP-NEU, SUB-NEU]",Figures,Table,,,,,EMP,SUB,,,,NEU,NEU,,,,,NEU,NEU,,, 8235,"These priors are indirectly induced from the data - the example discussed is via an empirical diagonal covariance assumption for a multivariate Gaussian. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8236,"The experimental results show the benefits of this approach.[experimental results-POS], [EMP-POS]",experimental results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8237,"The paper provides for a good read.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8238,"Comments: 1. How do the PAG scores differ when using a full covariance structure?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8239,"Diagonal covariances are still very restrictive.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8240,"2. The results are depicted with a latent space of 20 dimensions.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8241,"It will be informative to see how the model holds in high-dimensional settings.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8242,"And when data can be sparse. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8243,"3. You could consider giving the Discriminator, real data etc in Fig 1 for completeness as a graphical summary.[Fig-NEU], [PNF-NEU]",Fig,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8249,"Such results are hardly convincing, since the tuning of the parameter lambda plays a crucial role in the performance of the method.[performance-NEG, results-NEG], [EMP-NEG]",performance,results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8250,"More importantly, The heuristic proposed in the paper is interesting and promising in some respects[paper-POS], [EMP-POS]",paper,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8253,"2. High level paper - I believe the writing is a bit sloppy.[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8254,"For instance equation 3 takes the minimum over all m in C but C is defined to be a set of c_1, ..., c_k, and other examples (see section 4 below).[equation-NEG], [EMP-NEU]",equation,,,,,,EMP,,,,,NEG,,,,,,NEU,,,, 8255,"This is unfortunate because I believe this method, which takes as input a large complex network and compresses it so the loss in accuracy is small, would be really appealing to companies who are resource constrained but want to use neural network models.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8256,"3. 
High level technical - I'm confused at the first and second lines of equation (19).[equation-NEG], [EMP-NEG]",equation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8257,"In the first line, shouldn't the first term not contain Delta W ?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8258,"In the second line, shouldn't the first term be tilde{mathcal{L}}(W_0 + Delta W) ?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8259,"- For CIFAR-10 and SVHN you're using Binarized Neural Networks and the two nice things about this method are (a) that the memory usage of the network is very small, and (b) network operations can be specialized to be fast on binary data.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8260,"My worry is if you're compressing these networks with your method are the weights not treated as binary anymore?[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8261,"Now I know in Binarized Neural Networks they keep a copy of real-valued weights so if you're just compressing these then maybe all is alright.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8262,"But if you're compressing the weights _after_ binarization then this would be very inefficient because the weights won't likely be binary anymore and (a) and (b) above no longer apply.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8263,"- Your compression ratio is much higher for MNIST but your accuracy loss is somewhat dramatic, especially for MNIST (an increase of 0.53 in error nearly doubles your error and makes the network worse than many other competing methods: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354).[accuracy-NEU], [EMP-NEU]",accuracy,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8264,"What is your compression ratio for 0 accuracy loss?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8265,"I think this is a key experiment that should be run as this result would be much easier to compare with the other methods.[experiment-NEU, result-NEU], [CMP-NEU, EMP-NEU]",experiment,result,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 8266,"- Previous compression work uses a lot of tricks to compress convolutional weights. Does your method work for convolutional layers?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8268,"4. Low level technical - The end of Section 2 has an extra 'p' character[Section-NEG], [PNF-NEG]",Section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8269,"- Section 3.1: Here, X and y define a set of samples and ideal output distributions we use for training this sentence is a bit confusing.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8270,"Here y isn't a distribution, but also samples drawn from some distribution. Actually I don't think it makes sense to talk about distributions at all in Section 3.[Section-NEG], [CLA-NEG]",Section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8271,"- Section 3.1: W is the learnt model...hat{W} is the final, trained model This is unclear: W and hat{W} seem to describe the same thing.[Section-NEU], [EMP-NEG]",Section,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8273,"5. 
Review summary While the trust-region-like optimization of the method is nice and I believe this method could be useful for practitioners, I found the paper somewhat confusing to read.[method-POS, paper-NEG], [CLA-NEG, IMP-POS]",method,paper,,,,,CLA,IMP,,,,POS,NEG,,,,,NEG,POS,,, 8274,"This combined with some key experimental questions I have make me think this paper still needs work before being accepted to ICLR.[paper-NEU], [REC-NEU, APR-NEU]",paper,,,,,,REC,APR,,,,NEU,,,,,,NEU,NEU,,, 8280,"Comments: 1. I recommend the authors to tone down their claims.[claims-NEU], [CLA-NEU]",claims,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8281,"For example, the authors mentioned that there has been no complete implementation of established deep learning approaches in the abstract, however, the authors did not define what is complete.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8282,"Actually, the SecureML paper in S&P'17 should be able to privately evaluate any neural networks, although at the cost of multi-round information exchanges between the client and server.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8283,"Also, the claim that we show efficient designs is very thin to me since there are no experimental comparisons between the proposed method and existing works.[experimental comparisons-NEG], [CMP-NEG]",experimental comparisons,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8286,"For a relatively shallow model (as this paper has used), level FHE might be faster than the binary FHE.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8287,"2. I recommend the author to compare existing adder and multiplier circuits with your circuits to see in what perspective your design is better.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8289,"3. I appreciate that optimizations such as low-precision and point-wise convolution are discussed in this paper.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8296,"Review: Pros This paper formulates the weight quantization of deep networks as an optimization problem in the perspective of loss and solves the problem with a proximal newton algorithm. [paper-NEU, algorithm-NEU], [EMP-NEU]",paper,algorithm,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8297,"They extend the scheme to allow the use of different scaling parameters and to m-bit quantization.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8298,"Experiments demonstrate the proposed scheme outperforms the state-of-the-art methods.[Experiments-NEU, proposed scheme-POS], [EMP-POS]",Experiments,proposed scheme,,,,,EMP,,,,,NEU,POS,,,,,POS,,,, 8299,"The experiments are complete and the writing is good.[experiments-POS, writing-POS], [CLA-POS, SUB-POS]",experiments,writing,,,,,CLA,SUB,,,,POS,POS,,,,,POS,POS,,, 8300,"Cons Although the work seems convincing, it is a little bit straight-forward derived from the original binarization scheme (Hou et al., 2017) to tenarization or m-bit since there are some analogous extension ideas (Lin et al., 2016b, Li & Liu, 2016b)[work-POS], [NOV-NEU, CMP-NEG]",work,,,,,,NOV,CMP,,,,POS,,,,,,NEU,NEG,,, 8301,". Algorithm 2 and section 3.2 and 3.3 can be seen as additive complementary. 
[Algorithm-POS, section-POS], [SUB-POS]",Algorithm,section,,,,,SUB,,,,,POS,POS,,,,,POS,,,, 8305,"In NAS, the practitioners have to retrain for every new architecture in the search process, but in ENAS this problem is avoided by sharing parameters and using discrete masks.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8306,"In both approaches, reinforcement learning is used to learn a policy that maximizes the expected reward of some validation set metric.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8307,"Since we can encode a neural network as a sequence, the policy can be parameterized as an RNN where every step of the sequence corresponds to an architectural choice.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8308,"In their experiments, ENAS achieves test set metrics that are almost as good as NAS, yet require significantly less computational resources and time.[experiments-POS], [CMP-POS]",experiments,,,,,,CMP,,,,,POS,,,,,,POS,,,, 8310,"Initially it seems like the controller can choose any of B operations in a fixed number of layers along with choosing to turn on or off ay pair of skip connections.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8311,"However, in practice we see that the search space for modeling both skip connections and choosing convolutional sizes is too large, so the authors use only one restriction to reduce the size of the state space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8312,"This is a limitation, as the model space is not as flexible as one would desire in a discovery task.[limitation-NEU], [EMP-NEU]",limitation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8313,"Moreover, their best results (and those they choose to report in the abstract) are due to fixing 4 parallel branches at every layer combined with a 1 x 1 convolution, and using ENAS to learn the skip connections.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8314,"Thus, they are essentially learning the skip connections while using a human-selected model.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8315,"ENAS for RNNs is similar: while NAS searches for a new architecture, the authors use a recurrent highway network for each cell and use ENAS to find the skip connections.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8316,"Thus, it seems like the term Efficient Neural Architecture Search promises too much since in both tasks they are essentially only using the controller to find skip connections.[tasks-NEU], [EMP-NEU]",tasks,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8317,"Although finding an appropriate architecture for skip connections is an important task, finding an efficient method to structure RNN cells seems like a significantly more important goal.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8318,"Overall, the paper is well-written, and it brings up an important idea: that parameter sharing is important for discovery tasks so we can avoid re-training for every new architecture in the search process.[paper-POS, idea-POS], [CLA-POS, IMP-POS]",paper,idea,,,,,CLA,IMP,,,,POS,POS,,,,,POS,POS,,, 8319,"Moreover, using binary masks to control network path (essentially corresponding to training different models) is a neat idea.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8320,"It is also impressive how much faster their model performs on tasks without sacrificing much performance.[model-NEU, performance-NEU], [EMP-POS]",model,performance,,,,,EMP,,,,,NEU,NEU,,,,,POS,,,, 8321,"The main limitation is that the best architectures as currently 
described are less about discovery and more about human input;[limitation-NEU], [EMP-NEU]",limitation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8322,"-- finding a more efficient search path would be an important next step.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 8326,"The paper presents significant, novel work in a straightforward, clear and engaging way.[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8327,"It represents an elegant combination of ideas, and a well-rounded combination of theory and experiments.[theory-POS, experiments-POS], [EMP-POS]",theory,experiments,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 8329,"Major comments: No major flaws.[flaws-POS], [EMP-POS]",flaws,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8330,"The introduction is particular well written, as an extremely clear and succinct introduction to optimal transport.[introduction-POS], [CLA-POS, EMP-POS]",introduction,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 8331,"Minor comments: In the introduction, for VAEs, it's not the case that f(X) matches the target distribution.[introduction-NEG], [EMP-NEG]",introduction,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8334,"In the comparison to previous work, please explicitly mention the EMD algorithm, since it's used in the experiments.[previous work-NEG], [CMP-NEG]",previous work,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8335,"It would've been nice to see an experimental comparison to the algorithm proposed by Arjovsky et al. (2017), since this is mentioned favorably in the introduction.[experimental comparison-NEU, introduction-NEU], [SUB-NEU]",experimental comparison,introduction,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 8336,"In (3), R is not defined.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8338,"In section 3.1, it would be helpful to cite a reference to support the form of dual problem.[section-NEG, reference-NEG], [SUB-NEG]",section,reference,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 8339,"Perhaps the authors have just done a good job of laying the groundwork, but the dual-based approach proposed in section 3.1 seems quite natural.[proposed approach-NEU, section-NEU], [CMP-NEU]",proposed approach,section,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 8340,"Is there any reason this sort of approach wasn't used previously, even though this vein of thinking was being explored for example in the semi-dual algorithm?[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8341,"If so, it would interesting to highlight the key obstacles that a naive dual-based approach would encounter and how these are overcome. In algorithm 1, it is confusing to use u to mean both the parameters of the neural net and the function represented by the neural net.[approach-NEG, algorithm-NEG], [EMP-NEG, CLA-NEG]",approach,algorithm,,,,,EMP,CLA,,,,NEG,NEG,,,,,NEG,NEG,,, 8342,"There are many terms in R_e in (5) which appear to have no effect on optimization, such as a(x) and b(y) in the denominator and - 1. It seems like R_e boils down to just the entropy.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8343,"The definition of F_epsilon is made unnecessarily confusing by the omission of x and y as arguments.[definition-NEG], [CLA-NEG]",definition,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8344,"It would be great to mention very briefly any helpful intuition as to why F_epsilon and H_epsilon have the forms they do.[intuition-NEU], [EMP-NEU]",intuition,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8345,"In the discussion of Table 1, it would be helpful to spell out the differences between the different Bary proj algorithms, since I would've expected EMD, Sinkhorn and Alg. 
1 with R_e to all perform similarly.[discussion-NEU], [EMP-NEU]",discussion,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8346,"In Figure 4 some of the samples are quite non-physical.[Figure-NEG], [EMP-NEG]",Figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8347,"Is their any helpful intuition about what goes wrong?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8348,"What cost is used for generative modeling on MNIST?[cost-NEU], [SUB-NEU]",cost,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8349,"For generative modeling on MNIST, 784d vector is less clear than 784-dimensional vector.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8351,"It seems a bit strange to say The property we gain compared to other generative models is that our generator is a nearly optimal map w.r.t. this cost as if this was an advantage of the proposed method, since arguably there isn't a really natural cost in the generative modeling case (unlike in the domain adaptation case); the latent variable seems kind of conceptually distinct from observation space.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8356,"This led to improvement over SOTA in a task of caption ranking on MS-COCO, nd good performance on flickr30K.[improvement-POS], [EMP-POS]",improvement,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8357,"The main issue with this paper is novelty.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8362,"While it is good to know that using hard negatives improves recall measures on coco, it is not clear that this paper provides enough novel insight to be interesting enough for the ICLR audience.[insight-NEG], [NOV-NEG, APR-NEG]",insight,,,,,,NOV,APR,,,,NEG,,,,,,NEG,NEG,,, 8365,"The main issues with the paper is that its contributions are not new.[contributions-NEG], [NOV-NEG]",contributions,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8366,"* The first claimed contribution is to use typing at decoding time (they don't say why but this helps search and learning). [contribution-NEU], [EMP-NEG]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8367,"Restricting the type of the decoded tokens based on the programming language has already been done by the Neural Symbolic Machines of Liang et al. 2017. [null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 8368,"Then Krishnamurthy et al. expanded that in EMNLP 2017 and used typing in a grammar at decoding time.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 8369,"I don't really see why the authors say their approach is simpler, it is only simpler because the sub-language of sql used in wikisql makes doing this in an encoder-decoder framework very simple, but in general sql is not regular.[approach-NEG], [EMP-NEG]",approach,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8370,"Of course even for CFG this is possible using post-fix notation or fixed-arity pre-fix notation of the language as has been done by Guu et al. 2017 for the SCONE dataset, and more recently for CNLVR by Goldman et al., 2017.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 8371,"So at least 4 papers have done that in the last year on 4 different datasets, and it is now close to being common practice so I don't really see this as a contribution.[contribution-NEG], [NOV-NEG]",contribution,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8372,"* The authors explain that they use a novel loss function that is better than an RL based function used by Zhong et al., 2017.[null], [NOV-NEU, CMP-NEU]",null,,,,,,NOV,CMP,,,,,,,,,,NEU,NEU,,, 8373,"If I understand correctly they did not implement Zhong et al. 
but only compared to their reported numbers, which is a problem because it is hard to judge the role of optimization in the results.[results-NEG], [CMP-NEG, EMP-NEG]",results,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 8374,"Moreover, it seems that the problem they are trying to address is standard - they would like to use cross-entropy loss when there are multiple tokens that could be gold.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8375,"The standard solution to this is to just have a uniform distribution over all gold tokens and minimize the cross-entropy between the predicted distribution and the gold distribution, which is uniform over all tokens. [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8376,"The authors re-invent this and find it works better than randomly choosing a gold token or taking the max.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8377,"But again, this is something that has already been done in the context of pointer networks and other work like See et al. 2017 for summarization and Jia et al., 2016 for semantic parsing.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 8378,"* As for the good results - the data is new, so it is probable that the numbers are not very fine-tuned yet, so it is hard to say what is important and what is not for final performance.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8379,"In general I tend to agree that using RL for this task is probably unnecessary when you have the full program as supervision.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8384,"While on MNIST and CIFAR, DTP and SDTP performed as well as backprop, they performed worse on ImageNet.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8385,"Furthermore, it becomes clear that without a CNN structure, no really good performance is achieved on either CIFAR or ImageNet. [performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8386,"Pros: - The paper is nicely written and easy to follow.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8387,"- The suggested modifications from DTP to SDTP increase its biological plausibility without making its performance worse.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8388,"- The worse performance compared to backprop and CNNs underlines the open question of how to yield biologically plausible AND efficient algorithms and network architectures.[performance-NEU], [IMP-NEU]",performance,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 8389,"Cons: - The title of the paper seems too general to me, since target propagation is the only algorithm compared against backpropagation.[title-NEG], [CLA-NEG]",title,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8390,"- Since the adaptations to DTP are rather small, the work does not contain much novelty.[work-NEU], [NOV-NEU]",work,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 8391,"It can rather be seen as an interesting empirical study, with a negative result.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8393,"page 5: ""the the degree"" , ""specified as (....) followed by"" -> , ""as (....) followed by"" ?, - This notation probably stems from the code, but SAME and VALID could be more nicely described as ""0 padding"" and ""no padding""[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8394,"for example. - page 8: ""applying BP to the brain"" sounds strange to me. 
[page-NEG], [CLA-NEG]",page,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8397,"The draft is well-written, and the method is clearly explained.[draft-POS, method-POS], [CLA-POS, EMP-POS]",draft,method,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 8398,"However, I have the following concerns for this draft: 1. The technical contribution is not enough.[technical contribution-NEG], [SUB-NEG]",technical contribution,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8399,"First, the use of reinforcement learning is quite straightforward.[null], [SUB-NEG, EMP-NEG]",null,,,,,,SUB,EMP,,,,,,,,,,NEG,NEG,,, 8401,"u2013 their major difference seems to be the use of ""remove"" instead of ""add"" when manipulating the parameters.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8402,"It is unclear whether this difference is substantial, and whether the proposed method is better than the architecture search method.[proposed method-NEU], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8403,"2. I also have concern with the time efficiency of the proposed method.[proposed method-NEU], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8404,"Reinforcement learning involves multiple rounds of knowledge distillation, and each knowledge distillation is an independent training process that requires many rounds of forward and backward propagations.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8405,"Therefore, the whole reinforcement learning process seems very time-consuming and difficult to be generalized to big models and large datasets (such as ImageNet).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8406,"It would be necessary for the authors to make direct discussions on this issue, in order to convince others that their proposed method has practical value.[discussions-NEU], [SUB-NEU]",discussions,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8411,"On the positive side the paper is well written and the problem is interesting.[paper-POS, problem-POS], [CLA-POS]",paper,problem,,,,,CLA,,,,,POS,POS,,,,,POS,,,, 8412,"On the negative side there is very limited innovation in the techniques proposed, that are indeed small variations of existing methods.[techniques proposed-NEG, existing methods-NEG], [SUB-NEG, CMP-NEG]]",techniques proposed,existing methods,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 8413,"This is a high-quality and clear paper looking at biologically-plausible learning algorithms for deep neural networks.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 8414,"The contributions here are: 1) experiments testing the DTP algorithm on more difficult datasets,[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8415,"2) proposing a minor modification of the DTP algorithm at the output layer,[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8416,"and 3) testing the DTP algorithm on locally-connected architectures.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8417,"These are all novel contributions, but each one seems incremental in the context of previous work on this and similar algorithms (E.G. Nokland, Direct Feedback Alignment Provides Learning in Deep Neural Networks, 2016; Baldi et al, Learning in the Machine: The Symmetries of the Deep Learning Channel, 2017). 
[contributions-NEU], [NOV-NEU]",contributions,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 8420,"Using the generated images, the paper reports an improvement in classification accuracy on various tasks.[accuracy-POS], [EMP-POS]",accuracy,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8423,"Despite the rich literature on this recent topic, the related work section is rather convincing.[related work-POS], [CMP-POS]",related work,,,,,,CMP,,,,,POS,,,,,,POS,,,, 8425,"1 Major: - The size of the generated images is up to 26x31x22, which is limited (about half the size of the actual resolution of fMRI data).[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8426,"As a consequence, results on the decoding learning task using low-resolution images can end up worse than with the actual data (as pointed out).[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8427,"What it means is that the actual impact of the work is probably limited.[work-NEG], [IMP-NEG]",work,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8428,"- Generating high-resolution images with GANs, even on faces for which there is almost infinite data, is still a challenge.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8429,"Here, a few thousand data points are used.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 8430,"So it raises two concerns: First, is it enough?[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 8431,"Using so-called learning curves is a good way to answer this.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8432,"Second, what are the contributions of the 2 methods introduced to the state-of-the-art?[contributions-NEU], [EMP-NEU, IMP-NEU]",contributions,,,,,,EMP,IMP,,,,NEU,,,,,,NEU,NEU,,, 8433,"Said differently, as there are no classification results using images produced by another GAN architecture, it is hard to say that the extra complexity proposed here (which is a big contribution of the work) is actually necessary.[classification results-NEG], [EMP-NEG]",classification results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8434,"Minor: - Fonts in figure 4 are too small. 
[Fonts-NEG, figure-NEG], [PNF-NEG]",Fonts,figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 8437,"The open-world related tasks have been defined in many previous works.[previous works-NEG], [NOV-NEG]",previous works,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8438,"This paper has made a good survey.[survey-POS], [IMP-POS]",survey,,,,,,IMP,,,,,POS,,,,,,POS,,,, 8439,"The only special point of the open-world classification task defined in this paper is to employ the constraints from the similarity/difference expected for examples from the same class or from different classes.[special point-NEU, paper-NEU], [EMP-NEU]",special point,paper,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8440,"Unfortunately, this paper lacks novelty.[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8441,"Firstly, the problem context and setting are somewhat synthetic.[problem context-NEG], [EMP-NEG]",problem context,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8442,"I cannot quite imagine in what kind of applications we can get ""a set of pairs of intra-class (same class) examples, and the negative training data consists of a set of pairs of inter-class"".[applications-NEG], [IMP-NEG]",applications,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 8443,"Secondly, this model is just a direct combination of recent powerful algorithms such as DOC and other simple traditional models.[model-NEG], [NOV-NEG]",model,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8444,"I do not really see enough novelty here.[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 8445,"Thirdly, the experiments are only on MNIST and EMNIST; I am still not quite sure whether any real-world problems/datasets can be used to validate this approach.[experiments-NEG, problems/datasets-NEG, approach-NEG], [EMP-NEG]",experiments,problems/datasets,approach,,,,EMP,,,,,NEG,NEG,NEG,,,,NEG,,,, 8446,"I also cannot see promising performance.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8447,"The clustering results of rejected examples are still far from the ground truth, and comparing the result with a totally unsupervised K-means is kind of unreasonable.[results-NEG, result-NEG], [EMP-NEG]",results,result,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8453,"Strengths - Use of graph neural nets for few-shot learning is novel.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 8454,"- Introduces novel semi-supervised and active learning variants of few-shot classification.[null], [NOV-POS]",null,,,,,,NOV,,,,,,,,,,,POS,,,, 8455,"Weaknesses - Improvement in accuracy is small relative to previous work.[accuracy-NEG], [EMP-NEG]",accuracy,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8456,"- Writing seems to be rushed.[Writing-NEG], [CLA-NEG]",Writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8457,"The originality of applying graph neural networks to the problem of few-shot learning and the proposal of semi-supervised and active learning variants of the task are the primary strengths of this paper.[originality-POS], [NOV-POS]",originality,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8458,"Graph neural nets seem to be a more natural way of representing sets of items, as opposed to previous approaches that rely on a random ordering of the labeled set, such as the FCE variant of Matching Networks or TCML.[null], [NOV-POS, EMP-POS]",null,,,,,,NOV,EMP,,,,,,,,,,POS,POS,,, 8459,"Others will likely leverage graph neural net ideas to further tackle few-shot learning problems in the future, and this paper represents a first step in that direction.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 8460,"Regarding the graph, I am wondering if the authors can comment: in what scenarios is the 
graph structure expected to help?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8461,"In the case of 1-shot, the graph can only propagate information about other classes, which seems to not be very useful.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8463,"the motivation behind the semi-supervised and active learning setup could use some elaboration.[motivation-NEU], [SUB-NEU]",motivation,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8464,"By including unlabeled examples in an episode, it is already known that they belong to one of the K classes.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8465,"How realistic is this set-up and in what application is it expected that this will show up?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8466,"For active learning, the proposed method seems to be specific to the case of obtaining a single label.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8467,"How can the proposed method be scaled to handle multiple requested labels?[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8468,"Overall the paper is well-structured and related work covers the relevant papers,[paper-POS, related work-POS], [PNF-POS, CMP-POS]",paper,related work,,,,,PNF,CMP,,,,POS,POS,,,,,POS,POS,,, 8469,"but the details of the paper seem hastily written.[details-NEG], [CLA-NEG]",details,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8470,"In the problem set-up section, it is not immediately clear what the distinction between s, r, and t is.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8471,"Stating more explicitly that s is for the labeled data, etc. would make this section easier to follow.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8472,"In addition, I would suggest stating the reason why t 1 is a necessary assumption for the proposed model in the few-shot and semi-supervised cases.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8474,"Was the same procedure done for the experiments in the paper?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8475,"If yes, please update 6.1.1 to make this distinction more clear.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8476,"If not, please update the experiments to be consistent with the baselines.[experiments-NEU, baselines-NEU], [CMP-NEU, EMP-NEU]",experiments,baselines,,,,,CMP,EMP,,,,NEU,NEU,,,,,NEU,NEU,,, 8477,"In the experiments, does the varphi MLP explicitly enforce symmetry and identity or is it learned?[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8480,"The results for Prototypical Networks appear to be incorrect in the Omniglot and Mini-Imagenet tables.[results-NEG], [EMP-NEG]",results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8481,"According to Snell et al. (2017) they should be 49.4% and 68.2% for miniImagenet.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8482,"Moreover, Snell et al. 
(2017) only used 64 classes for training instead of 80 as utilized in the proposed approach.[proposed approach-NEU], [EMP-NEG]",proposed approach,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8483,"Given this, I am wondering if the authors can comment on the performance difference in the 5-shot case, even though Prototypical Networks is a special case of GNNs?[performance-NEU], [CMP-NEU]",performance,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8484,"For semi-supervised and active-learning results, please include error bars for the miniImagenet results.[results-NEU], [EMP-NEU]",results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8485,"Also, it would be interesting to see 20-way results for Omniglot as the gap between the proposed method and the baseline would potentially be wider.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8486,"Other Comments: - In Section 4.2, Gc(.) is defined in Equation 2 but not mentioned in the text.[text-NEG], [PNF-NEG]",text,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8487,"- In Section 4.3, adding an equation to clarify the relationship with Matching Networks would be helpful.[Section-NEU, equation-NEU], [EMP-NEU]",Section,equation,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8489,"This is an interesting idea, and written clearly.[idea-POS], [CLA-POS]",idea,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8490,"The experiments with Baird's and CartPole were both convincing as preliminary evidence that this could be effective. [experiments-POS], [EMP-POS]",experiments,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8491,"However, it is very hard to generalize from these toy problems.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8492,"First, we really need a more thorough analysis of what this does to the learning dynamics itself.[analysis-NEU], [EMP-NEU, SUB-NEU]",analysis,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 8493,"Baring theoretical results, you could analyze the changes to the value function at the current and next state with and without the constraint to illustrate the effects more directly.[theoretical results-NEU], [EMP-NEU]",theoretical results,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8494,"I think ideally, I would want to see this on Atari or some of the continuous control domains often used.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8495,"If this allows the removing of the target network for instance, in those more difficult tasks, then this would be a huge deal.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8496,"Additionally, I do not think the current gridworld task adds anything to the experiments, I would rather actually see this on a more interesting linear function approximation on some other simple task like Mountain Car than a neural network on gridworld.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8497,"The reason this might be interesting is that when the parameter space is lower dimensional (not an issue for neural nets, but could be problematic for linear FA) the constraint might be too much leading to significantly poorer performance. 
[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8498,"I suspect this is the actual cause for it not converging to zero for Baird's, although please correct me if I'm wrong on that.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8499,"As is, I cannot recommend acceptance given the current experiments and lack of theoretical results.[acceptance-NEG, experiments-NEG, theoretical results-NEG], [REC-NEG, SUB-NEG]",acceptance,experiments,theoretical results,,,,REC,SUB,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 8500,"But I do think this is a very interesting direction and hope to see more thorough experiments or analysis to support it.[experiments-NEU, analysis-NEU], [SUB-NEU]",experiments,analysis,,,,,SUB,,,,,NEU,NEU,,,,,NEU,,,, 8501,"Pros: Simple, interesting idea Works well on toy problems, and able to prevent divergence in Baird's counter-example[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8502,"Cons: Lacking in theoretical analysis or significant experimental results.[theoretical analysis-NEG, experimental results-NEG], [IMP-NEG, SUB-NEG]",theoretical analysis,experimental results,,,,,IMP,SUB,,,,NEG,NEG,,,,,NEG,NEG,,, 8508,"This paper targets at a potentially very useful application of neural networks that can have real world impacts.[paper-POS], [IMP-POS]",paper,,,,,,IMP,,,,,POS,,,,,,POS,,,, 8509,"However, I have three main concerns: 1) Presentation. The organization of the paper could be improved. [Presentation-NEU], [PNF-NEU]",Presentation,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8510,"It mixes the method, the heat sink example and the airfoil example throughout the entire paper.[method-NEG], [PNF-NEG]",method,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8511,"Sometimes I am very confused about what is being described.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8512,"My suggestion would be to completely separate these three parts: present a general method first, then use heat sink as the first experiment and airfoil as the second experiment.[parts-NEU], [EMP-NEU]",parts,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8513,"This organization would make the writing much clearer.[writing-NEU], [CLA-NEU]",writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8514,"2) In the paragraph above Section 4.1, the paper made two arguments. I might be wrong, but I do not agree with either of them in general. [Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8515,"First of all, eural networks are good at generalizing to examples outside their train set. This depends entirely on whether the sample distribution of training and testing are similar and whether you have enough training examples that cover important sample space.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8516,"This is especially critical if a deep neural network is used since overfitting is a real issue.[issue-NEU], [EMP-NEU]",issue,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8517,"Second, it is easy to imagine a hybrid system where a network is trained on a simulation and fine tuned .... Implementing such a hybrid system is nontrivial due to the reality gap.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8519,"So I am not convinced by these two arguments made by this paper.[arguments-NEG], [EMP-NEG]",arguments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8520,"They might be true for a narrow field of application. 
But in general, I think they are not quite correct.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8521,"3) The key of this paper is to approximate the dynamics using neural network (which is a continuous mapping) and take advantage of its gradient computation.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8522,"However, many of dynamic systems are inherently discontinuous (collision/contact dynamics) or chaotic (turbulent flow).[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8523,"In those scenarios, the proposed method might not work well and we may have to resort to the gradient free methods. [proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8524,"It seems that the proposed method works well for heat sink problem and the steady flow around airfoil, both of which do not fall into the more complex physics regime.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8525,"It would be great that the paper could be more explicit about its limitations.[limitations-NEU], [EMP-NEU]",limitations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8527,"The writing could be improved.[writing-NEU], [CLA-NEU]",writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8528,"But more importantly, I think that the proposed method has its limitation about what kind of physical systems it can model.[proposed method-NEU], [EMP-NEU]",proposed method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8529,"These limitation should be discussed more explicitly and more thoroughly.[limitation-NEU], [EMP-NEU, SUB-NEU]",limitation,,,,,,EMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 8532,"The idea is interesting and novel that PACT has not been applied to compressing networks in the past.[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8534,"The experiments in this paper is also solid and has done extensive experiments on state of the art datasets and networks.[experiments-POS], [SUB-POS]",experiments,,,,,,SUB,,,,,POS,,,,,,POS,,,, 8536,"Overall the paper is a descent one, but with limited novelty.[paper-NEU], [NOV-NEU]",paper,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 8537,"I am a weak reject[null], [REC-NEG]",null,,,,,,REC,,,,,,,,,,,NEG,,,, 8542,"** REVIEW SUMMARY ** The paper is readable but it could be more fluent.[paper-NEU], [CLA-NEU]",paper,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8543,"It lacks a few references and important technical aspects are not discussed.[references-NEG, technical aspects-NEG], [SUB-NEG]",references,technical aspects,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 8545,"Empirical contribution seems inflated on omniglot as the authors omit other papers reporting better results.[empirical contribution-NEG, results-NEG], [EMP-NEG]",empirical contribution,results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8546,"Overall, the contribution is modest at best.v[contribution-NEG], [EMP-NEG]",contribution,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8547,"** DETAILED REVIEW ** On mistakes, it is wrong to say that an SVM is a parameterless classifier.[mistakes-NEG], [EMP-NEG]",mistakes,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8548,"It is wrong to cite (Boser et al 92) for the soft-margin SVM.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8549,"I think slack variables come from (Cortes et al 95).[variables-NEU], [EMP-NEU]",variables,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8550,"consistent has a specific definition in machine learning https://en.wikipedia.org/wiki/Consistent_estimator , you must use a different word in 3.2.[word-NEG, meaning-NEG], [CLA-NEG]",word,meaning,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 8551,"You mention that a non linear SVM need a similarity measure, it 
actually need a positive definite kernel which has a specific definition, https://en.wikipedia.org/wiki/Positive-definite_kernel .[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8552,"On incompleteness, it is not obvious how the classifier is used at test time.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8553,"Could you explain how classes are predicted given a test problem?[test-NEU], [EMP-NEU]",test,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8554,"The setup of the experiments on TIMIT is extremely unclear.[experiments-NEG], [CLA-NEG]",experiments,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8555,"What are the class you are interested in?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8556,"How many classes and examples does the testing problems have?[testing problems-NEU], [EMP-NEU]",testing problems,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8557,"On clarity, I do not understand why you talk again about non-linear SVM in the last paragraph of 3.2. since you mention at the end of page 4 that you will only rely on linear SVMs for computational reasons.[paragraph-NEG, page-NEG], [CLA-NEG]",paragraph,page,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 8558,"You need to mention explicitely somewhere that (w,theta) are optimized jointly.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8559,"The sentence this paper investigates only the one versus rest approach is confusing, as you have only two classes from the SVM perspective i.e. pairs (x1,x2) where both examples come from the same class and pairs (x1,x2) where they come from different class.[sentence-NEG, approach-NEG], [EMP-NEG]",sentence,approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8561,"You need to find a better justification for using L2-SVM than L2-SVM loss variant is considered to be the best by the author of the paper, did you try classical SVM and found them performing worse?[justification-NEG, paper-NEU], [EMP-NEG]",justification,paper,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 8562,"Also could you motivate your choice for L1 norm as opposed to L2 in Eq 3?[Eq-NEU], [CMP-NEU]",Eq,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8563,"On empirical evaluation, I already mentioned that it impossible to understand what the classification problem on TIMIT is.[empirical evaluation-NEG], [EMP-NEG]",empirical evaluation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8567,"and the reference therein give a few more recent baselines than your table.[baselines-NEG, table-NEG], [SUB-NEG, CMP-NEG]",baselines,table,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,, 8568,"Some of the results are better than your approach.[results-NEG, approach-NEG], [EMP-NEG]",results,approach,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8569,"I am not sure why you do not evaluate on mini-imagenet as well as most work on few shot learning generally do.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8570,"This dataset offers a clearer experimental setup than your TIMIT setting and has abundant published baseline results.[experimental setup-NEG, baseline results-NEG], [EMP-NEG]",experimental setup,baseline results,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8571,"Also, most work typically use omniglot as a proof of concept and consider mini-imagenet as a more challenging set.[work-NEU], [EMP-NEU]",work,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8576,"If that claim to originality is not contested, and the authors provide additional assurances to confirm the correctness of the implementations used for baseline models, this article fills an important gap in open-domain dialogue research and suggests a fruitful future for structured prediction in deep learning-based dialogue systems.[claim-NEU, 
implementations-NEU], [NOV-NEU, SUB-NEU, IMP-NEU]",claim,implementations,,,,,NOV,SUB,IMP,,,NEU,NEU,,,,,NEU,NEU,NEU,, 8577,"Some points: 1. The introduction uses scalability throughout to mean something closer to ability to generalize. Consider revising the wording here.[introduction-NEU], [CLA-NEU]",introduction,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8578,"2. The dialogue act tag set used in the paper is not original to Ivanovic (2005) but derives, with modifications, from the tag set constructed for the DAMSL project (Jurafsky et al., 1997; Stolcke et al., 2000).[paper-NEU], [NOV-NEG]",paper,,,,,,NOV,,,,,NEU,,,,,,NEG,,,, 8579,"It's probably worth citing some of this early work that pioneered the use of dialogue acts in NLP, since they discuss motivations for building DA corpora.[early work-NEU], [CMP-NEU, SUB-NEU]",early work,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 8580,"3. In Section 2.1, the authors don't explicitly mention existing DA-annotated corpora or discuss specifically why they are not sufficient (is there e.g. a dataset that would be ideal for the purposes of this paper except that it isn't large enough?)[Section-NEG], [SUB-NEG, EMP-NEG]",Section,,,,,,SUB,EMP,,,,NEG,,,,,,NEG,NEG,,, 8581,"3. The authors appear to consider only one option (selecting the top predicted dialogue act, then conditioning the response generator on this DA) among many for inference-time search over the joint DA-response space.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8582,"A more comprehensive search strategy (e.g. selecting the top K dialogue acts, then evaluating several responses for each DA) might lead to higher response diversity.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8583,"4. The description of the RL approach in Section 3.2 was fairly terse and included a number of ad-hoc choices.[description-NEG], [EMP-NEG]",description,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8584,"If these choices (like the dialogue termination conditions) are motivated by previous work, they should be cited.[previous work-NEU], [CMP-NEU]",previous work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8585,"Examples (perhaps in the appendix) might also be helpful for the reader to understand that the chosen termination conditions or relevance metrics are reasonable.[Examples-NEU], [SUB-NEU]",Examples,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8586,"5. The comparison against previous work is missing some assurances I'd like to see.[comparison-NEG], [CMP-NEG, SUB-NEG]",comparison,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 8587,"While directly citing the codebases you used or built off of is fantastic, it's also important to give the reader confidence that the implementations you're comparing to are the same as those used in the original papers, such as by mentioning that you can replicate or confirm quantitative results from the papers you're comparing to.[null], [IMP-NEU]",null,,,,,,IMP,,,,,,,,,,,NEU,,,, 8588,"Without that there could always be the chance that something is missing from the implementation of e.g. RL-S2S that you're using for comparison.[implementation-NEG], [SUB-NEG]",implementation,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8589,"6. Table 5 is not described in the main text, so it isn't clear what the different potential outputs of e.g. the RL-DAGM system result from (my guess: conditioning the response generation on the top 3 predicted dialogue acts?)[Table-NEG], [EMP-NEG]",Table,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8590,"7. 
A simple way to improve the paper's clarity for readers would be to break up some of the very long paragraphs, especially in later sections.[clarity-NEU], [CLA-NEU, PNF-NEU]",clarity,,,,,,CLA,PNF,,,,NEU,,,,,,NEU,NEU,,, 8591,"It's fine if that pushes the paper somewhat over the 8th page.[paper-NEU], [PNF-NEU]",paper,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8592,"8. A consistent focus on human evaluation, as found in this paper, is probably the right approach for contemporary dialogue research.[evaluation-POS], [EMP-POS]",evaluation,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8593,"9. The examples provided in the appendix are great.[appendix-POS], [SUB-POS]",appendix,,,,,,SUB,,,,,POS,,,,,,POS,,,, 8598,"I think the methodology presented in this paper is neat and the experimental results are encouraging.[methodology-POS, experimental results-POS], [CLA-POS, EMP-POS]",methodology,experimental results,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 8599,"However, I do have some comments on the presentation of the paper: 1. Using power method to approximate matrix largest singular value is a very old idea, and I think the authors should cite some more classical references in addition to (Yoshida and Miyato).[presentation-NEG, references-NEU], [NOV-NEG, PNF-NEG, CMP-NEU]",presentation,references,,,,,NOV,PNF,CMP,,,NEG,NEU,,,,,NEG,NEG,NEU,, 8600,"For example, Matrix Analysis, book by Bhatia Matrix computation, book by Golub and Van Loan. Some recent work in theory of (noisy) power method might also be helpful and should be cited, for example, https://arxiv.org/abs/1311.2495 2.[recent work-NEU], [CMP-NEU]",recent work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8601,"I think the matrix spectral norm is not really differentiable; hence the gradients the authors calculate in the paper should really be subgradients.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8603,"3. It should be noted that even with the product of gradient norm, the resulting normalizer is still only an upper bound on the actual Lipschitz constant of the discriminator.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8604,"Can the authors give some empirical evidence showing that this approximation is much better than previous approximations, such as L2 norms of gradient rows which appear to be much easier to optimize?[empirical evidence-NEU], [SUB-NEU, EMP-NEU]",empirical evidence,,,,,,SUB,EMP,,,,NEU,,,,,,NEU,NEU,,, 8606,"This paper is well written and easy to follow.[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 8610,"PixelDCL is applied sequentially, therefore it is slower than the original deconvolutional layer.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8612,"The authors justify the proposed method as a way to alleviate the checkerboard effect (while introducing more complexity to the model and making it slower).[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8613,"In the experimental section, however, they do not compare with other approaches to do so For example, the upsampling+conv approach, which has been shown to remove the checkerboard effect while being more efficient than the proposed method (as it does not require any sequential computation).[experimental section-NEG, approaches-NEU], [CMP-NEG]",experimental section,approaches,,,,,CMP,,,,,NEG,NEU,,,,,NEG,,,, 8614,"Moreover, the PixelDCL does not seem to bring substantial improvements on DeepLab (a state-of-the-art semantic segmentation algorithm). 
[improvements-NEG], [EMP-NEG]",improvements,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8616,"Why no performance boost?[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8617,"Is it because of the residual connection?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8618,"Or other component of DeepLab?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8619,"Is the proposed layer really useful once a powerful model is used?[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8620,"I also think the experiments on VAE are not conclusive.[experiments-NEG], [EMP-NEG]",experiments,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8621,"The authors simply show set of generated images.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8622,"First, it is difficult to see the different of the image generated using deconv and PixelDCL.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8623,"Second, a set of 20 qualitative images does not (and cannot) validate any research idea.[idea-NEG], [EMP-NEG]",idea,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8629,"However, I think this paper has limited contribution and novelty,[contribution-NEU, novelty-NEU], [IMP-NEU, NOV-NEU]",contribution,novelty,,,,,IMP,NOV,,,,NEU,NEU,,,,,NEU,NEU,,, 8630,"and the experiments also need to be improved[experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8631,". The detailed comments are as follows: - The main contribution of this paper is to apply word pairs instead of words to RBM models[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8632,". However, the main techniques such as RBM, parser to extract word pairs, tf-idf for filtering, and k-means for clustering, are all existing standard techniques.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8633,"It is more like an application of these methods, and has limited contribution and novelty.[novelty-NEU], [NOV-NEU]",novelty,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 8634,"- For experiments, they apply k-means clustering in the process so k is one parameter to tune. K needs to be tuned on validation set instead of testing set. [experiments-NEU], [EMP-NEU]",experiments,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8635,"This paper simply presents the results of different parameter k on testing set directly.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8636,"- The structure of Section 3 needs to be improved.[Section-NEU], [PNF-NEU]",Section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8637,"Instead of listing each step in each subsection, a general introduction picture should be introduced first. More intuition is also needed for each step.[intuition-NEU], [SUB-NEU]",intuition,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8638,"- Some figures and tables are overlapping in the experiments. 
Just keep one is enough.[figures-NEG, tables-NEG], [PNF-NEG]",figures,tables,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 8639,"- The format of reference should be fixed in this paper.[format-NEG], [PNF-NEG]",format,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8641,"Quality: The work focuses on a novel problem of generating text sample using GAN and a novel in-filling mechanism of words.[problem-POS], [NOV-POS]",problem,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8642,"Using GAN to generate samples in adversarial setup in texts has been limited due to the mode collapse and training instability issues.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8644,"But, the use of the rewards at every time step (RL mechanism) to employ the actor-critic training procedure could be challenging computationally challenging.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8645,"Clarity: The mechanism of generating the text samples using the proposed methodology has been described clearly.[proposed methodology-POS], [EMP-POS]",proposed methodology,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8646,"However the description of the reinforcement learning step could have been made a bit more clear.[description-NEU], [CLA-NEU]",description,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8647,"Originality: The work indeed use a novel mechanism of in-filling via a conditioning approach to overcome the difficulties of GAN training in text settings. [work-POS], [NOV-POS]",work,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8649,"How this current work compares with the existing such literature?[literature-NEU], [CMP-NEU]",literature,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8650,"Significance: The research problem is indeed significant since the use of GAN in generating adversarial examples in image analysis has been more prevalent compared to text settings.[problem-POS], [IMP-POS]",problem,,,,,,IMP,,,,,POS,,,,,,POS,,,, 8651,"Also, the proposed actor-critic training procedure via RL methodology is indeed significant from its application in natural language processing.[procedure-POS], [IMP-POS]",procedure,,,,,,IMP,,,,,POS,,,,,,POS,,,, 8652,"pros: (a) Human evaluations applications to several datasets show the usefulness of MaskGen over the maximum likelihood trained model in generating more realistic text samples.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8653,"(b) Using a novel in-filling procedure to overcome the complexities in GAN training.[procedure-POS], [NOV-POS]",procedure,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8654,"(c) generation of high quality samples even with higher perplexity on ground truth set.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8655,"cons: (a) Use of rewards at every time step to the actor-critic training procure could be computationally expensive.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8656,"(b) How to overcome the situation where in-filling might introduce implausible text sequences with respect to the surrounding words?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8657,"(c) Depending on the Mask quality GAN can produce low quality samples.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8665,"The contribution from the RL perspective is limited, in the sense that the authors simply applied standard models to predict a bunch of labels (in this case, emotion labels).[contribution-NEG, models-NEG], [SUB-NEG, IMP-NEG]",contribution,models,,,,,SUB,IMP,,,,NEG,NEG,,,,,NEG,NEG,,, 8666,"It is interesting the psychological analysis that the authors present in Section 6.[analysis-POS, Section-NEU], [EMP-POS]",analysis,Section,,,,,EMP,,,,,POS,NEU,,,,,POS,,,, 8668,"I think the 
author's statement on that this study leads to a more plausible psychological model of emotion is not well founded (they also mention to learn to recognize the latent emotional state).[study-NEG], [EMP-NEG]",study,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8671,"The main difference from previous approaches is that the model is that the embeddings are trained end-to-end for a specific task, rather than trying to produce generically useful embeddings.[approaches-NEU, task-NEU], [CMP-NEU]",approaches,task,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 8672,"The method leads to better performance than using no external resources,[method-POS, performance-POS], [EMP-POS]",method,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 8673,"but not as high performance as using Glove embeddings.[performance-NEG], [CMP-NEG]",performance,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8674,"The paper is clearly written, and has useful ablation experiments.[paper-POS, experiments-POS], [CLA-POS, EMP-POS]",paper,experiments,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 8675,"However, I have a couple of questions/concerns: - Most of the gains seem to come from using the spelling of the word.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8676,"As the authors note, this kind of character level modelling has been used in many previous works.[modelling-NEG, works-NEG], [CMP-NEG]",modelling,works,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 8677,"- I would be slightly surprised if no previous work has used external resources for training word representations using an end-task loss,[previous work-NEU], [CMP-NEU]",previous work,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8679,"- I'm a little skeptical about how often this method would really be useful in practice.[method-NEG], [EMP-NEG]",method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8680,"It seems to assume that you don't have much unlabelled text (or you'd use Glove), but you probably need a large labelled dataset to learn how to read dictionary definitions well.[labelled dataset-NEG, unlabelled text-NEG], [SUB-NEG]",labelled dataset,unlabelled text,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 8681,"All the experiments use large tasks - it would be helpful to have an experiment showing an improvement over character-level modelling on a smaller task.[experiment-NEG], [EMP-NEG]",experiment,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8682,"- The results on SQUAD seem pretty weak - 52-64%, compared to the SOTA of 81.[results-NEG], [CMP-NEG, EMP-NEG]",results,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 8683,"It seems like the proposed method is quite generic, so why not apply it to a stronger baseline? [method-NEG, baseline-NEG], [EMP-NEG]]",method,baseline,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 8688,"Main comments: - The idea of building 3D adversarial objects is novel so the study is interesting.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8689,"However, the paper is incomplete, with a very low number of references, only 2 conference papers if we assume the list is up to date.[references-NEG], [SUB-NEG, CMP-NEG]",references,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 8691,"- The presentation of the results is not very clear.[presentation-NEG, results-NEG], [PNF-NEG]",presentation,results,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 8692,"See specific comments below. 
- It would be nice to include insights to improve neural nets to become less sensitive to these attacks.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8693,"Minor comments: Fig1 : a bug with color seems to have been fixed Model section: be consistent with the notations.[Fig1-NEU, notations-NEG], [EMP-NEG, PNF-NEG]",Fig1,notations,,,,,EMP,PNF,,,,NEU,NEG,,,,,NEG,NEG,,, 8694,"Bold everywhere or nowhere[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 8695,"Results: The tables are difficult to read and should be clarified:[tables-NEG], [PNF-NEG]",tables,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8696,"What does the l2 metric stands for ? [null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8697,"How about min, max ?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8698,"Accuracy -> classification accuracy[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8699,"Models -> 3D models Describe each metric (Adversarial, Miss-classified, Correct) [null], [PNF-NEU, SUB-NEU]",null,,,,,,PNF,SUB,,,,,,,,,,NEU,NEU,,, 8701,"The paper falls far short of the standard expected of an ICLR submission.[paper-NEG], [APR-NEG]",paper,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 8702,"The paper has little to no content.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8703,"There are large sections of blank page throughout.[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 8704,"The algorithm, iterative temporal differencing, is introduced in a figure -- there is no formal description.[description-NEG, figure-NEU], [CLA-NEG, SUB-NEG]",description,figure,,,,,CLA,SUB,,,,NEG,NEU,,,,,NEG,NEG,,, 8707,"The paper over-uses acronyms; sentences like ""In this figure, VBP, VBP with FBA, and ITD using FBA for VBP..."" are painful to read.[paper-NEG], [PNF-NEG]",paper,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8712,"The experimental results show that the propped model outperforms tree-lstm using external parsers.[experimental results-POS, propped model-POS], [EMP-POS]",experimental results,propped model,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 8713,"Comment: I kinda like the idea of using chart, and the attention over chart cells.[chart-POS], [PNF-POS]",chart,,,,,,PNF,,,,,POS,,,,,,POS,,,, 8714,"The paper is very well written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8715,"- My only concern about the novelty of the paper is that the idea of using CYK chart-based mechanism is already explored in Le and Zuidema (2015).[paper-NEG], [NOV-NEG]",paper,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 8716,"- Le and Zudema use pooling and this paper uses weighted sum.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8717,"Any differences in terms of theory and experiment?[theory-NEU, experiment-NEU], [EMP-NEU]",theory,experiment,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8718,"- I like the new attention over chart cells.[chart-POS], [EMP-POS]",chart,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8719,"But I was surprised that the authors didn't use it in the second experiment (reverse dictionary).[experiment-NEG], [EMP-NEG]",experiment,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8720,"- In table 2, it is difficult for me to see if the difference between unsupervised tree-lstm and right-branching tree-lstm (0.3%) is ""good enough"".[table-NEG], [PNF-NEG]",table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8721,"In which cases the former did correctly but the latter didn't?[cases-NEG], [EMP-NEG]",cases,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8722,"- In table 3, what if we use the right-branching tree-lstm with attention?[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8723,"- In table 4, why do Hill et al lstm and bow 
perform much better than the others?[table-NEU], [EMP-NEU]]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8726,"In some domains this can be a much better approach and this is supported by experimentation.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8728,"- Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) and there are clearly some examples where this approach does much better.[approach-POS], [EMP-POS]",approach,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8729,"- The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper.[approach-POS], [NOV-POS]",approach,,,,,,NOV,,,,,POS,,,,,,POS,,,, 8730,"- This is clearly a very practical and extensible idea... the authors present good results on a whole suite of tasks.[idea-POS, results-POS], [EMP-POS]",idea,results,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 8731,"- The paper is clear and well written, it has a narrative and the plots/experiments tend to back this up.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 8732,"- I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances).[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8734,"- At many points in the paper the claims are quite overstated.[claims-NEG], [EMP-NEG]",claims,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8735,"Parameter noise on the policy won't necessarily get you efficient exploration... and in some cases it can even be *worse* than epsilon-greedy... if you just read this paper you might think that this was a truly general statistically efficient method for exploration (in the style of UCRL or even E^3/Rmax etc).[null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 8736,"- For instance, the example in 4.2 only works because the optimal solution is to go right in every timestep... if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work...[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8738,"I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of deep exploration and you should be clear that your parameter noise does *not* address this issue.[claim-NEU], [CLA-NEU]",claim,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 8739,"- That said I think that the example in 4.2 is *great* to include... you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration.[example-POS], [EMP-NEU]",example,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 8740,"Essentially you perform a local exploration rule in parameter space... and sometimes this is great -[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8741,"but you should be careful to distinguish this type of method from other approaches.[method-NEU], [EMP-NEU]",method,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8742,"This must be mentioned in section 4.2 does parameter space noise explore efficiently because the answer you seem to imply is yes ... when the answer is clearly NOT IN GENERAL... 
but it can still be good sometimes ;D[section-NEU], [PNF-NEU]",section,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8744,"I can't really support the conclusion RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually.[conclusion-NEG], [EMP-NEG]",conclusion,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8745,"This sort of sentence is clearly wrong and for many separate reasons: - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as RL and that's just really not a good way to think about it.[sentence-NEG], [CMP-NEG, EMP-NEG]",sentence,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEG,,, 8746,"- Parameter noise exploration can be *extremely* bad relative to efficient exploration methods (see section 2.4.3 https://searchworks.stanford.edu/view/11891201)[section-NEG], [CMP-NEG]",section,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8747,"Overall, I like the paper, I like the algorithm and I think it is a valuable contribution.[contribution-POS], [EMP-POS]",contribution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8749,"In some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy.[approach-POS], [CMP-POS, EMP-POS]",approach,,,,,,CMP,EMP,,,,POS,,,,,,POS,POS,,, 8751,"You shouldn't claim such a universal revolution to exploration / RL / evolution because I don't think that it's correct.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8752,"Further, I don't think that clarifying that this method is *not* universal/general really hurts the paper... you could just add a section in 4.2 pointing out that the chain example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform deep exploration).[method-NEU, section-NEU], [EMP-NEU]",method,section,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8756,"Review: The paper is clearly written.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8757,"It is sometimes difficult to communicate ideas in this area, so I appreciate the author's effort in choosing good notation.[notation-POS], [PNF-POS]",notation,,,,,,PNF,,,,,POS,,,,,,POS,,,, 8758,"Using an architecture to learn how to split the input, find solutions, then merge these is novel.[architecture-POS, solutions-POS, novel-POS], [NOV-POS]",architecture,solutions,novel,,,,NOV,,,,,POS,POS,POS,,,,POS,,,, 8760,"The ideas and formalism of the merge and partition operations are valuable contributions.[ideas-POS, contributions-POS], [EMP-POS, IMP-POS]",ideas,contributions,,,,,EMP,IMP,,,,POS,POS,,,,,POS,POS,,, 8761,"The experimental side of the paper is less strong.[experimental side-NEG], [EMP-NEG]",experimental side,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8762,"There are good results on the convex hull problem, which is promising.[results-POS], [EMP-POS]",results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8763,"There should also be a comparison to a k-means solver in the k-means section as an additional baseline.[comparison-NEU], [SUB-NEG, CMP-NEG]",comparison,,,,,,SUB,CMP,,,,NEU,,,,,,NEG,NEG,,, 8764,"I'm also not sure TSP is an appropriate problem to demonstrate the method's effectiveness.[problem-NEU], [EMP-POS]",problem,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 8765,"Perhaps another problem that has an explicit divide and conquer strategy could be used instead.[problem-NEU], [SUB-NEU]",problem,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8766,"It would also be nice to observe failure cases of the 
model.[model-NEU], [SUB-NEU]",model,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8767,"This could be done by visually showing the partition constructed or seeing how the model learned to merge solutions..[model-NEU, solutions-NEU], [EMP-NEU]",model,solutions,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8768,"This is a relatively new area to tackle, so while the experiments section could be strengthened, I think the ideas present in the paper are important and worth publishing.[experiments section-NEU, ideas-POS, paper-POS], [EMP-POS]",experiments section,ideas,paper,,,,EMP,,,,,NEU,POS,POS,,,,POS,,,, 8773,"Typos: 1. Author's names should be enclosed in parentheses unless part of the sentence.[Typos-NEG], [CLA-NEG]",Typos,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8774,"2. I believe then should be removed in the sentence ...scale invariance, then exploiting... on page 2.[page-NEG], [CLA-NEG]",page,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8776,"The topic is interesting however the description in the paper is lacking clarity.[topic-POS, description-NEG], [CLA-NEG]",topic,description,,,,,CLA,,,,,POS,NEG,,,,,NEG,,,, 8777,"The paper is written in a procedural fashion - I first did that, then I did that and after that I did third.[paper-NEU], [PNF-NEU]",paper,,,,,,PNF,,,,,NEU,,,,,,NEU,,,, 8778,"Having proper mathematical description and good diagrams of what you doing would have immensely helped.[description-NEU], [EMP-NEU]",description,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8779,"Another big issue is the lack of proper validation in Section 3.4.[issue-NEG, validation-NEG, Section-NEU], [EMP-NEG]",issue,validation,Section,,,,EMP,,,,,NEG,NEG,NEU,,,,NEG,,,, 8780,"Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis.[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8781,"Given that I see only one example I can not objectively know if your model produces examples like that 'each' time so having just one example is as good as having none. [example-NEG], [SUB-NEG]",example,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8785,"Such simple trick alleviates the effort in tuning stepsize, and can be incorporated with popular stochastic first-order optimization algorithms, including SGD, SGD with Nestrov momentum, and Adam. Surprisingly, it works well in practice.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8786,"Although the theoretical analysis is weak that theorem 1 does not reveal the main reason for the benefits of such trick, considering their performance, I vote for acceptance.[theoretical analysis-NEG, acceptance-POS], [REC-POS, EMP-NEG]",theoretical analysis,acceptance,,,,,REC,EMP,,,,NEG,POS,,,,,POS,NEG,,, 8788,"1, the derivation of the update of alpha relies on the expectation formulation.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8789,"I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick.[investigation-NEU], [EMP-NEU]",investigation,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8790,"2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing.[reference-NEU], [SUB-NEU]",reference,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8791,"3, As the authors claimed, the Maclaurin et.al. 
2015 is the most related work, however, they are not compared in the experiments.[related work-NEU, experiments-NEG], [CMP-NEG]",related work,experiments,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 8792,"Moreover, the empirical comparisons are only conducted on MNIST.[empirical comparisons-NEG], [CMP-NEG, EMP-NEU]",empirical comparisons,,,,,,CMP,EMP,,,,NEG,,,,,,NEG,NEU,,, 8793,"To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8794,"Minors: In the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes.[experiments results-POS], [EMP-POS]",experiments results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8795,"Could you please explain why such phenomenon happens?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8801,"The main issue I am having is what are the applicable insight from the analysis:[analysis-NEU], [IMP-NEU]",analysis,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 8803,"2. Does the result implies that we should make the decision boundary more flat, or curved but on different directions? And how to achieve that?[result-NEU], [EMP-NEU]",result,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8804,"It might be my mis-understanding but from my reading a prescriptive procedure for universal perturbation seems not attained from the results presented.[results-NEU], [EMP-NEG]",results,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8808,"However the corpus the authors choose are quite small,[corpus-NEG], [SUB-NEG]",corpus,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8809,"the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8810,". It would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens.[experiments-NEU], [SUB-NEU]",experiments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8811,"This will use significant resources and is much more difficult,[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8812,"but it's also really valuable, because it's much more close to real world usage of language models.[null], [IMP-POS]",null,,,,,,IMP,,,,,,,,,,,POS,,,, 8813,"And less tuning is needed for these larger datasets.[datasets-NEU], [EMP-NEU]",datasets,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8814,"Finally it's better to do some experiments on machine translation or speech recognition and see how the improvement on BLEU or WER could get. [experiments-NEU], [IMP-NEU]",experiments,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 8817,"Regarding the latter methods: what is described in the paper sounds like competent engineering details that those performing such a task for launch in a real service would figure out how to accomplish, and the specific reported details may or may not represent the 'right' way to go about this versus other choices that might be made.[details-NEU], [EMP-NEU]",details,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8818,"The final threshold for 'successful' speedups feels somewhat arbitrary -- why 16ms in particular? 
[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8819,"In any case, these methods are useful to document, but derive their value mainly from the fact that they allow the use of the completion/correction methods that are the primary contribution of the paper.[contribution-NEU], [EMP-NEU]",contribution,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8820,"While the idea of integrating the spelling error probability into the search for completions is a sound one, the specific details of the model being pursued feel very ad hoc, which diminishes the ultimate impact of these results.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8821,"Specifically, estimating the log probability to be proportional to the number of edits in the Levenshtein distance is really not the right thing to do at all.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8822,"Under such an approach, the unedited string receives probability one, which doesn't leave much additional probability mass for the other candidates -- not to mention that the number of possible misspellings would require some aggressive normalization. [approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8823,"Even under the assumption that a normalized edit probability is not particularly critical (an issue that was not raised at all in the paper, let alone assessed), the fact is that the assumptions of independent errors and a single substitution cost are grossly invalid in natural language.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8824,"For example, the probability p_1 of 'pkoe' versus p_2 of 'zoze' as likely versions of 'poke' (as, say, the prefix of pokemon, as in your example) should be such that p_1 >>> p_2, not equal as they are in your model.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8825,"Probabilistic models of string distance have been common since Ristad and Yianlios in the late 90s, and there are proper probabilistic models that would work with your same dynamic programming algorithm, as well as improved models with some modest state splitting.[models-NEU], [NOV-NEU]",models,,,,,,NOV,,,,,NEU,,,,,,NEU,,,, 8826,"And even with very simple assumptions some unsupervised training could be used to yield at least a properly normalized model.[model-NEU], [EMP-NEU]",model,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8827,"It may very well end up that your very simple model does as well as a well estimated model, but that is something to establish in your paper, not assume.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8828,"That such shortcomings are not noted in the paper is troublesome, particularly for a conference like ICLR that is focused on learned models, which this is not. 
[shortcomings-NEG], [APR-NEG]",shortcomings,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 8829,"As the primary contribution of the paper is this method for combining correction with completion, this shortcoming in the paper is pretty serious.[contribution-NEU, shortcoming-NEG], [EMP-NEG]",contribution,shortcoming,,,,,EMP,,,,,NEU,NEG,,,,,NEG,,,, 8830,"Some other comments: Your presentation of completion cost versus edit cost separation in section 3.3 is not particularly clear, partly since the methods are discussed prior to this point as extension of (possibly corrected) prefixes.[presentation-NEG, section-NEG], [PNF-NEG, EMP-NEG]",presentation,section,,,,,PNF,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 8831,"In fact, it seems that your completion model also includes extension of words with end point prior to the end of the prefix -- which doesn't match your prior notation, or, frankly, the way in which the experimental results are described.[experimental results-NEG], [EMP-NEG]",experimental results,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8832,"The notation that you use is a bit sloppy and not everything is introduced in a clear way.[notation-NEG], [PNF-NEG, CLA-NEG]",notation,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 8833,"For example, the s_0:m notation is introduced before indicating that s_i would be the symbol in the i_th position (which you use in section 3.3).[notation-NEG], [CLA-NEG]",notation,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8834,"Also, you claim that s_0 is the empty string, but isn't it more correct to model this symbol as the beginning of string symbol?[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8835,"If not, what is the difference between s_0:m and s_1:m?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8836,"If s_0 is start of string, the s_0:m is of length m+1 not length m.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8838,"(you don't need them, but also why number if you never refer to them later?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 8839,") Also the dynamic programming for Levenshtein is foundational, not required to present that algorithm in detail, unless there is something specific that you need to point out there (which your section 3.3 modification really doesn't require to make that point).[algorithm-NEG], [SUB-NEG]",algorithm,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8840,"Is there a specific use scenario for the prefix splitting, other than for the evaluation of unseen prefixes?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8841,"This doesn't strike me as the most effective way to try to assess the seen/unseen distinction, since, as I understand the procedure, you will end up with very common prefixes alongside less common prefixes in your validation set, which doesn't really correspond to true 'unseen' scenarios.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8843,"You never explicitly mention what your training loss is in section 5.1.[section-NEG], [CLA-NEG]",section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8844,"Overall, while this is an interesting and important problem, and the engineering details are interesting and reasonably well-motivated, the main contribution of the paper is based on a pretty flawed approach to modeling correction probability, which would limit the ultimate applicability of the methods.[problem-POS, main contribution-NEG], [EMP-NEG]",problem,main contribution,,,,,EMP,,,,,POS,NEG,,,,,NEG,,,, 8850,"The paper is well explained, and it's also nice that the runtime is shown for each of the algorithm blocks.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 
8851,"Could imagine this work giving nice guidelines for others who also want to run query completion using neural networks.[work-POS], [IMP-POS]",work,,,,,,IMP,,,,,POS,,,,,,POS,,,, 8852,"The final dataset is also a good size (36M search queries).[dataset-POS], [SUB-POS]",dataset,,,,,,SUB,,,,,POS,,,,,,POS,,,, 8853,"My major concerns are perhaps the fit of the paper for ICLR as well as the thoroughness of the final experiments.[experiments-NEU], [APR-NEU]",experiments,,,,,,APR,,,,,NEU,,,,,,NEU,,,, 8854,"Much of the paper provides background on LSTMs and edit distance, which granted, are helpful for explaining the ideas.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8855,"But much of the realtime completion section is also standard practice, e.g. maintaining previous hidden states and grouping together the different gates.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8856,"So the paper feels directed to an audience with less background in neural net LMs.[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 8857,"Secondly, the experiments could have more thorough/stronger baselines.[experiments-NEU, baselines-NEU], [EMP-NEU, CMP-NEU]",experiments,baselines,,,,,EMP,CMP,,,,NEU,NEU,,,,,NEU,NEU,,, 8858,"I don't really see why we would try stochastic search. And expected to see more analysis of how performance was impacted as the number of errors increased, even if errors were introduced artificially, and expected analysis of how different systems scale with varying amounts of data.[analysis-NEU], [EMP-NEG]",analysis,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8859,"The fact that 256 hidden dimension worked best while 512 overfit was also surprising, as character language models on datasets such as Penn Treebank with only 1 million words use hidden states far larger than that for 2 layers.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8864,"The experiments show robustness to these types of noise.[experiments-POS], [EMP-NEU]",experiments,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 8865,"Review: The claim made by the paper is overly general, and in my own experience incorrect when considering real-world-noise.[claim-NEG], [EMP-NEG]",claim,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8866,"This is supported by the literature on data cleaning (partially by the authors), a procedure which is widely acknowledged as critical for good object recognition.[literature-NEU, procedure-NEU], [EMP-NEU]",literature,procedure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 8867,"While it is true that some image-independent label noise can be alleviated in some datasets, incorrect labels in real world datasets can substantially harm classification accuracy.[datasets-NEU, accuracy-NEU], [EMP-NEG]",datasets,accuracy,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 8868,"It would be interesting to understand the source of the difference between the results in this paper and the more common results (where label noise damages recognition quality).[results-NEU], [EMP-NEU, CMP-NEU]",results,,,,,,EMP,CMP,,,,NEU,,,,,,NEU,NEU,,, 8869,"The paper did not get a chance to test these differences, and I can only raise a few hypotheses.[paper-NEG], [CMP-NEG]",paper,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8870,"First, real-world noise depends on the image and classes in a more structured way. 
For instance, raters may confuse one bird species from a similar one, when the bird is photographed from a particular angle.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8872,"Another possible reason is that classes in MNIST and CIFAR10 are already very distinctive, so are more robust to noise.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 8873,"Once again, it would be interesting for the paper to study why they achieve robustness to noise while the effect does not hold in general.[paper-NEU], [SUB-NEU]",paper,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 8874,"Without such an analysis, I feel the paper should not be accepted to ICLR because the way it states its claim may mislead readers.[analysis-NEG, paper-NEG], [SUB-NEG, APR-NEG]",analysis,paper,,,,,SUB,APR,,,,NEG,NEG,,,,,NEG,NEG,,, 8875,"Other specific comments: -- Section 3.4 the experimental setup, should clearly state details of the optimization, architecture and hyper parameter search.[Section-NEG, architecture-NEU], [EMP-NEU, CLA-NEG]",Section,architecture,,,,,EMP,CLA,,,,NEG,NEU,,,,,NEU,NEG,,, 8876,"For example, for Conv4, how many channels at each layer?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8877,"how was the net initialized? [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8878,"which hyper parameters were tuned and with which values?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8879,"were hyper parameters tuned on a separate validation set?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8880,"How was the train/val/test split done, etc.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8882,"-- Section 4, importance of large datasets.[Section-NEU], [EMP-POS]",Section,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 8883,"The recent paper by Chen et al (2017) would be relevant here.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 8884,"-- Figure 8 failed to show for me.[Figure-NEG], [PNF-NEG]",Figure,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 8885,"-- Figure 9,10, need to specify which noise model was used. [Figure-NEG, model-NEU], [EMP-NEG]",Figure,model,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 8890,"Naive multitask learning with deep neural networks fails in many practical cases, as covered in the paper. [paper-NEU], [EMP-NEG]",paper,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8891,"The one concern I have is perhaps the choice of distinct of Atari games to multitask learn may be almost adversarial, since naive multitask learning struggles in this case; but in practice, the observed interference can appear even with less visually diverse inputs.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8892,"Although performance is still reduced compared to single task learning in some cases,[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8893,"this paper delivers an important reference point for future work towards achieving generalist agents, which master diverse tasks and represent complementary behaviours compactly at scale.[reference-POS, future work-POS], [IMP-POS]",reference,future work,,,,,IMP,,,,,POS,POS,,,,,POS,,,, 8894,"I wonder how efficient the approach would be on DM lab tasks, which have much more similar visual inputs, but optimal behaviours are still distinct. 
[approach-NEU], [IMP-NEU]",approach,,,,,,IMP,,,,,NEU,,,,,,NEU,,,, 8900,"** REVIEW SUMMARY ** The paper reads well, has sufficient reference.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 8901,"The idea is simple and well explained.[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8902,"Positive empirial results support the proposed regularizer.[empirial results-POS], [EMP-POS]",empirial results,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8905,"In related work, I would cite co-training approaches.[related work-NEU], [CMP-NEU, SUB-NEU]",related work,,,,,,CMP,SUB,,,,NEU,,,,,,NEU,NEU,,, 8906,"In effect, you have two view of a point in time, its past and its future and you force these two views to agree, see (Blum and Mitchell, 1998) or Xu, Chang, Dacheng Tao, and Chao Xu.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8907,"A survey on multi-view learning. arXiv preprint arXiv:1304.5634 (2013).[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 8908,"I would also relate your work to distillation/model compression which tries to get one network to behave like another. On that point, is it important to train the forward and backward network jointly or could the backward network be pre-trained?[work-NEU], [CMP-NEU, EMP-NEU]",work,,,,,,CMP,EMP,,,,NEU,,,,,,NEU,NEU,,, 8909,"In section 2, it is not obvious to me that the regularizer (4) would not be ignored in absence of regularization on the output matrix.[section-NEU], [EMP-NEG]",section,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 8910,"I mean, the regularizer could push h^b to small norm, compensating with higher norm for the output word embeddings.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8911,"Could you comment why this would not happen?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8912,"In Section 4.2, you need to refer to Table 2 in the text.[Section-NEU, Table-NEU, text-NEU], [PNF-NEU]",Section,Table,text,,,,PNF,,,,,NEU,NEU,NEU,,,,NEU,,,, 8913,"You also need to define the evaluation metrics used.[evaluation metrics-NEU], [EMP-NEU]",evaluation metrics,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8914,"In this section, why are you not reporting the results from the original Show&Tell paper?[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 8915,"How does your implementation compare to the original work?[implementation-NEU], [CMP-NEU]",implementation,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 8916,"On unconditional generation, your hypothesis on uncertainty is interesting and could be tested.[hypothesis-POS], [EMP-POS]",hypothesis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 8917,"You could inject uncertainty in the captioning task for instance, e.g. consider that multiple version of each word e.g. 
dogA, dogB, docC which are alternatively used instead of dog with predefined substitution rates.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8918,"Would your regularizer still be helpful there?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8919,"At which point would it break?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8923,"I think the fact that the authors demonstrate the viability of training VDFFNWSCs that could have, in principle, arbitrary nonlinearities and normalization layers, is somewhat valuable and as such I would generally be inclined towards acceptance,[acceptance-POS], [REC-POS]",acceptance,,,,,,REC,,,,,POS,,,,,,POS,,,, 8924,"even though the potential impact of this paper is limited because the training strategy proposed is (by deep learning standards) relatively complicated, requires tuning two additional hyperparameters in the initial value of lambda as well as the step size for updating lambda, and seems to have no significant advantage over just using skip connections throughout training.[potential impact-NEG, strategy-NEG], [IMP-NEG]",potential impact,strategy,,,,,IMP,,,,,NEG,NEG,,,,,NEG,,,, 8925,"So my rating based on the message of the paper would be 6/10. [rating-NEU], [REC-NEU]",rating,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 8927,"As long as those issues remain unresolved, my rating is at is but if those issues were resolved it could go up to a 6.[rating-NEU], [REC-NEU]",rating,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 8928,"+++ Section 3.1 problems +++ - I think the toy example presented in section 3.1 is more confusing than it is helpful because the skip connection you introduce in the toy example is different from the skip connection you introduce in VANs.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8929,"In the toy example, you add (1 - alpha)wx whereas in the VANs you add (1 - alpha)x.[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8930,"Therefore, the type of vanishing gradient that is observed when tanh saturates, which you combat in the toy model, is not actually combated at all in the VAN model.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8931,"While it is true that skip connections combat vanishing gradients in certain situations, your example does not capture how this is achieved in VANs.[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8932,"- The toy example seems to be an example where Lagrangian relaxation fails, not where it succeeds.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8933,"Looking at figure 1, it appears that you start out with some alpha < 1 but then immediately alpha converges to 1, i.e. 
the skip connection is eliminated early in training, because wx is further away from y than tanh(wx).[figure-NEG], [EMP-NEG]",figure,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8934,"Most of the training takes place without the skip connection.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8935,"In fact, after 10^4 iterations, training with and without skip connection seem to achieve the same error.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8936,"It appears that introducing the skip connection was next to useless and the model failed to recognize the usefulness of the skip connection early in training.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8937,"- Regarding the optimization algorithm involving alpha^* at the end of section 3: It looks to me like a hacky, unprincipled method with no guarantees that just happened to work in the particular example you studied.[section-NEG], [EMP-NEG]",section,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8938,"You motivate the choice of alpha^* by wanting to maximize the reduction in the local linear approximation to mathcal{C} induced by the update on w.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8939,"However, this reduction grows to infinity the larger the update is.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8940,"Does that mean that larger updates are always better?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8942,"If we wanted to reduce the size of the objective according to the local linear approximation, why wouldn't we choose infinitely large step sizes?[approximation-NEG], [EMP-NEG]",approximation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8943,"Hence, the motivation for the algorithm you present is invalid.[motivation-NEG], [EMP-NEG]",motivation,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8944,"Here is an example where this algorithm fails: consider the point (x,y,w,alpha,lambda) (100, sigma(100), 1.0001, 1, 1).[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8945,"Here, w has almost converged to its optimum w* 1.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8946,"Correspondingly, the derivative of C is a small negative value.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8947,"However, alpha* is actually 0, and this choice would catapult w far away from w*.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8948,"If I haven't made a mistake in my criticisms above, I strongly suggest removing section 3.1 entirely or replacing it with a completely new example that does not suffer from the above issues.[section-NEG, example-NEG, issues-NEG], [EMP-NEG, PNF-NEG]",section,example,issues,,,,EMP,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 8950,"In the VAN initial state (alpha 0.5), both the residual path and the skip path are multiplied by 0.5 whereas for ResNet, neither is multiplied by 0.5.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 8951,"Because of this, the experimental results between the two architectures are incomparable.[results-NEG], [CMP-NEG]",results,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 8953,"I disagree. Let's look at an example. Consider ResNet first.[example-NEG], [EMP-NEG]",example,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8954,"It can be written as x + r_1 + r_2 + .. 
+ r_B, where r_b is the value computed by residual block b.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8960,"Therefore, there is an open question: are the differences in results between VAN and ResNet in your experiments caused by the removal of skip connections during training or by this scaling?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8961,"Without this information, the experiments have limited value.[information-NEG], [EMP-NEG, SUB-NEG]",information,,,,,,EMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 8963,"If my assessment of the situation is correct, I would like to ask you to repeat your experiments with the following two settings: - ResNet where after each block you multiply the result of the addition by 0.5, i.e. x_{l+1} 0.5mathcal{F}(x_l) + 0.5x_l[experiments-NEG], [SUB-NEG]",experiments,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 8967,"+++ writing issues +++ Title: - VARIABLE ACTIVATION NETWORKS: A SIMPLE METHOD TO TRAIN DEEP FEED-FORWARD NETWORKS WITHOUT SKIP-CONNECTIONS This title can be read in two different ways.[title-NEG], [CLA-NEG]",title,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8973,"In (B), the `without skip-connections' modifies `deep feed-forward networks' and suggests that the network trained has no skip connections. You must mean (B), because (A) is false.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8974,"Since it is not clear from reading the title whether (A) or (B) is true, please reword it.[title-NEG], [CLA-NEG]",title,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8975,"Abstract: - Part of the success of ResNets has been attributed to improvements in the conditioning of the optimization problem (e.g., avoiding vanishing and shattered gradients).[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8979,"However, nowhere in your paper do you show that trained VANs have less exploding / vanishing gradients than fully-connected networks trained the old-fashioned way. Again, please reword or include evidence. 
- where the proposed method is shown to outperform many architectures without skip-connections Again, this sentence makes no sense to me.[proposed method-NEG], [EMP-NEG]",proposed method,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8980,"It seems to imply that VAN has skip connections.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8981,"But in the abstract you defined VAN as an architecture without skip connections.[abstract-NEG], [EMP-NEG]",abstract,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8982,"Please make this more clear.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 8986,"section 3.1: - replace to to by to in the second line[section-NEG], [CLA-NEG]",section,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8987,"section 4: - This may be a result of the ensemble nature of ResNets (Veit et al., 2016), which does not play a significant role until the depth of the network increases.[section-NEG, result-NEG], [CLA-NEG]",section,result,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 8988,"The ensemble nature of ResNet is a drawback, not an advantage, because it causes a lack of high-order co-adaptataion of layers.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 8989,"Therefore, it cannot contribute positively to the performance or ResNet.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8990,"As mentioned in earlier comments, please reword / clarify your use of activation function.[comments-NEG], [CLA-NEG]",comments,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 8992,"Change your claim that VAN is equivalent to PReLU.[claim-NEG], [EMP-NEG]",claim,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 8993,"Please include your description of how your method can be extended to networks which do allow for skip connections.[description-NEG], [SUB-NEU]",description,,,,,,SUB,,,,,NEG,,,,,,NEU,,,, 8994,"+++ Hyperparameters +++ Since the initial values of lambda and eta' are new hyperparameters, include the values you chose for them, explain how you arrived at those values and plot the curve of how lambda evolves for at least some of the experiments.[hyperparameters-NEG, values-NEG], [CLA-NEG]]",hyperparameters,values,,,,,CLA,,,,,NEG,NEG,,,,,NEG,,,, 8999,"This is a strong contribution[contribution-POS], [EMP-POS]",contribution,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9000,"In Table 2 the difference between inception scores for DCGAN and this approach seems significant to ignore.[Table-NEG, approach-NEG], [SUB-NEG]",Table,approach,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 9001,"The authors should explain more possibly.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 9002,"There is a typo in Page 2 u2013 For all these varaints, -variants.[typo-NEG, Page-NEG], [PNF-NEG]]",typo,Page,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9006,"While basically the approach seems plausible, the issue is that the result is not compared to ordinary LSTM-based baselines.[result-NEG], [CMP-NEG]",result,,,,,,CMP,,,,,NEG,,,,,,NEG,,,, 9007,"While it is better than a conterpart of MLE (MaskedMLE), whether the result is qualitatively better than ordinary LSTM is still in question.[result-NEU], [CMP-NEU]",result,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 9008,"In fact, this is already appearent both from the model architectures and the generated examples: because the model aims to fill-in blanks from the text around (up to that time), generated texts are generally locally valid but not always valid globally. 
This issue is also pointed out by authors in Appendix A.2.[issue-NEU], [EMP-NEG]",issue,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9009,"While the idea of using mask is interesting and important, I think if this idea could be implemented in another way, because it resembles Gibbs sampling where each token is sampled from its sorrounding context, while its objective is still global, sentence-wise.[idea-NEU], [EMP-NEU]",idea,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9010,"As argued in Section 1, the ability of obtaining signals token-wise looks beneficial at first, but it will actually break a global validity of syntax and other sentence-wise phenoma.[Section-NEU], [EMP-NEG]",Section,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9011,"Based on the arguments above, I think this paper is valuable at least conceptually, but doubt if it is actually usable in place of ordinary LSTM (or RNN)-based generation.[paper-NEU], [EMP-NEU]",paper,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9012,"More arguments are desirable for the advantage of this paper, i.e. quantitative evaluation of diversity of generated text as opposed to LSTM-based methods.[arguments-NEU], [SUB-NEU]",arguments,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 9013,"*Based on the rebuttals and thorough experimental results, I modified the global rating.[rating-NEU], [REC-NEU]",rating,,,,,,REC,,,,,NEU,,,,,,NEU,,,, 9017,"The results seem to show that a delayed application of the regularization parameter leads to improved classification performance.[results-POS, performance-POS], [EMP-POS]",results,performance,,,,,EMP,,,,,POS,POS,,,,,POS,,,, 9018,"The proposed scheme, which delays the application of regularization parameter, seems to be in contrast of the continuation approach used in sparse learning.[proposed scheme-NEU], [EMP-NEU]",proposed scheme,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9020,"One may argue that the continuation approach is applied in the convex optimization case, while the one proposed in this paper is for non-convex optimization. [approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9021,"It would be interesting to see whether deep networks can benefit from the continuation approach, and the strong regularization parameter may not be an issue because the regularization parameter decreases as the optimization progress goes on.[approach-NEU], [EMP-NEU]",approach,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9022,"One limitation of the work, as pointed by the authors, is that experimental results on big data sets such as ImageNet is not reported.[limitation-NEG, experimental results-NEG], [SUB-NEG, IMP-NEU]",limitation,experimental results,,,,,SUB,IMP,,,,NEG,NEG,,,,,NEG,NEU,,, 9025,"The main positive point is that the performance does not degrade too much.[performance-NEU], [EMP-POS]",performance,,,,,,EMP,,,,,NEU,,,,,,POS,,,, 9026,"However, there are several important negative points which should prevent this work, as it is, from being published.[work-NEG], [REC-NEU]",work,,,,,,REC,,,,,NEG,,,,,,NEU,,,, 9027,"1. Why is this type of color channel modification relevant for real life vision?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9028,"The invariance introduced here does not seem to be related to any real world phenomenon. [null], [CMP-NEG]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 9029,"The nets, in principle, could learn to recognize objects based on shape only, and the shape remains stable when the color channels are changed.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9030,"2. 
Why is the crash car dataset used in this scenario?[null], [IMP-NEG]",null,,,,,,IMP,,,,,,,,,,,NEG,,,, 9031,"It is not clear to me why this types of theoretical invariance is tested on such as specific dataset.[dataset-NEU], [EMP-NEG]",dataset,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9032,"Is there a real reason for that?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9033,"3. The writing could be significantly improved, both at the grammatical level and the level of high level organization and presentation.[writing-NEG], [CLA-NEG, PNF-NEG]",writing,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 9034,"I think the authors should spend time on better motivating the choice of invariance used, as well as on testing with different (potentially new) architectures, color change cases, and datasets.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9035,"4. There is no theoretical novelty and the empirical one seems to be very limited, with less convincing results.[novelty-NEG, results-NEG], [NOV-NEG, EMP-NEG]",novelty,results,,,,,NOV,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 9039,"The paper does not really introduce new methods, and as such, this paper should be seen more as an application paper.[methods-NEG, paper-NEG], [APR-NEG, NOV-NEG]",methods,paper,,,,,APR,NOV,,,,NEG,NEG,,,,,NEG,NEG,,, 9040,"I think that such a paper could have merits if it would really push the boundary of the feasible, but I do not think that is really the case with this paper: the task still seems quite simplistic, and the empirical evaluation is not convincing (limited analysis, weak baselines).[paper-NEU, task-NEG, empirical evaluation-NEG], [EMP-NEG]",paper,task,empirical evaluation,,,,EMP,,,,,NEU,NEG,NEG,,,,NEG,,,, 9041,"As such, I do not really see any real grounds for acceptance.[acceptance-NEG], [REC-NEG]",acceptance,,,,,,REC,,,,,NEG,,,,,,NEG,,,, 9042,"Finally, there are also many other weaknesses. 
The paper is quite poorly written in places, has poor formatting (citations are incorrect and half a bibtex entry is inlined), and is highly inadequate in its treatment of related work.[paper-NEG, formatting-NEG, related work-NEG], [CLA-NEG, PNF-NEG]",paper,formatting,related work,,,,CLA,PNF,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 9046,"Overall, I see this as a paper which with improvements could make a nice workshop contribution, but not as a paper to be published at a top-tier venue.[paper-NEU, improvements-NEU], [APR-NEG]]",paper,improvements,,,,,APR,,,,,NEU,NEU,,,,,NEG,,,, 9047,"This work fits well into a growing body of research concerning the encoding of network topologies and training of topology via evolution or RL.[work-POS], [IMP-POS]",work,,,,,,IMP,,,,,POS,,,,,,POS,,,, 9049,"The biggest two nitpicks: > In our work we pursue an alternative approach: instead of restricting the search space directly, we allow the architectures to have flexible network topologies (arbitrary directed acyclic graphs) This is a gross overstatement.[work-NEU, alternative approach-NEU], [EMP-NEG]",work,alternative approach,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 9050,"The architectures considered in this paper are heavily restricted to be a stack of cells of uniform content interspersed with specifically and manually designed convolution, separable convolution, and pooling layers.[architectures-NEG], [EMP-NEG]",architectures,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9052,"The work is still great, but this misleading statement in the beginning of the paper left the rest of the paper with a dishonest aftertaste.[work-POS, statement-NEG, paper-NEG], [EMP-NEG]",work,statement,paper,,,,EMP,,,,,POS,NEG,NEG,,,,NEG,,,, 9053,"As an exercise to the authors, count the hyperparameters used just to set up the learning problem in this paper and compare them to those used in describing the entire VGG-16 network.[null], [CMP-NEU]",null,,,,,,CMP,,,,,,,,,,,NEU,,,, 9055,"to restrict the search space to reduce complexity and increase efficiency of architecture search.[paper-NEG], [EMP-NEG]",paper,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9056,"> Table 1 Why is the second best method on CIFAR (""Hier. repr-n, random search (7000 samples)"") never tested on ImageNet?[method-NEG], [SUB-NEG]",method,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9057,"The omission is conspicuous.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9058,"Just test it and report.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 9060,""" ""Evolutionary Strategies"", at least as used in Salimans 2017, has a specific connotation of estimating and then following a gradient using random perturbations which this paper does not do.[paper-NEG], [SUB-NEG, CMP-NEG]",paper,,,,,,SUB,CMP,,,,NEG,,,,,,NEG,NEG,,, 9061,"It may be more clear to change this phrase to ""evolutionary methods"" or similar.[null], [CLA-NEG]",null,,,,,,CLA,,,,,,,,,,,NEG,,,, 9063,"A K 5% tournament does not seem more generic than a binary K 2 tournament. They're just different.[null], [CMP-NEG]]",null,,,,,,CMP,,,,,,,,,,,NEG,,,, 9070,"Intuitively, one can see why this may be advantageous as one gets some information from the past.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9071,"(As an aside, the authors of course acknowledge that recurrent neural networks have been used for this purpose with varying degrees of success.)[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9072,"The first question, had a quite an interesting and cute answer. 
There is a (non-negative) importance weight associated with each state and a collection of states has weight that is simply the product of the weights.[answer-POS], [EMP-POS]",answer,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9073,"The authors claim (with some degree of mathematical backing) that sampling a memory of n states where the distribution over the subsets of past states of size n is proportional to the product of the weights is desired. And they give a cute online algorithm for this purpose.[algorithm-NEU], [EMP-NEU]",algorithm,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9075,". There is no easy way to fix this and for the purpose of sampling the paper simply treats the weights as immutable.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9076,"There is also a toy example created to show that this approach works well compared to the RNN based approaches.[approach-NEU], [CMP-NEU]",approach,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 9077,"Positives: - An interesting new idea that has potential to be useful in RL[idea-POS], [NOV-POS]",idea,,,,,,NOV,,,,,POS,,,,,,POS,,,, 9078,"- An elegant algorithm to solve at least part of the problem properly (the rest of course relies on standard SGD methods to train the various networks)[algorithm-POS], [EMP-POS]",algorithm,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9079,"Negatives: - The math is fudged around quite a bit with approximations that are not always justified[math-NEG], [EMP-NEG]",math,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9080,"- While overall the writing is clear, in some places I feel it could be improved[writing-NEU], [CLA-NEU]",writing,,,,,,CLA,,,,,NEU,,,,,,NEU,,,, 9081,". I had a very hard time understanding the set-up of the problem in Figure 2.[setup-NEG, Figure-NEG], [PNF-NEG]",setup,Figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9084,"- The experiments only demonstrate the superiority of this method on an example chosen artificially to work well with this approach.[experiments-NEG, approach-NEU], [EMP-NEG]",experiments,approach,,,,,EMP,,,,,NEG,NEU,,,,,NEG,,,, 9087,"My main concerns are on the usage of the given observations.[observations-NEG], [IMP-NEG]",observations,,,,,,IMP,,,,,NEG,,,,,,NEG,,,, 9088,"1. Can the observations be used to explain more recent works?[observations-NEG, recent works-NEU], [CMP-NEU]",observations,recent works,,,,,CMP,,,,,NEG,NEU,,,,,NEU,,,, 9090,"However, as the authors mentioned, there are more recent works which give better performance than this one.[recent works-NEG, performance-NEG], [CMP-NEG]",recent works,performance,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 9091,"For example, we can use +1, 0, -1 to approximate the weights.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9093,"has also shown a carefully designed post-processing binary network can already give very good performance.[performance-NEG], [EMP-NEG]",performance,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9094,"So, how can the given observations be used to explain more recent works?[observations-NEG, recent works-NEU], [CMP-NEU]",observations,recent works,,,,,CMP,,,,,NEG,NEU,,,,,NEU,,,, 9095,"2. How can the given observations be used to improve Courbariaux, Hubara et al. 
(2016)?[observations-NEU], [EMP-NEU]",observations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9097,"From this perspective, I wish to see more mathematical analysis rather than just doing experiments and showing some interesting observations.[analysis-NEG, experiments-NEG, observations-NEU], [SUB-NEG]",analysis,experiments,observations,,,,SUB,,,,,NEG,NEG,NEU,,,,NEG,,,, 9098,"Besides, giving interesting observations is not good enough.[observations-NEG], [SUB-NEG]",observations,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9099,"I wish to see how they can be used to improve binary networks.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 9104,"Considered paper is one of the first approaches to learn GAN-type generative models.[paper-POS], [NOV-POS]",paper,,,,,,NOV,,,,,POS,,,,,,POS,,,, 9105,"Using PointNet architecture and latent-space GAN, the authors obtained rather accurate generative model.[model-POS], [EMP-POS]",model,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9106,"The paper is well written, results of experiments are convincing, the authors provided the code on the github, realizing their architectures.[paper-POS, results-POS, experiments-POS, architectures-NEU], [CLA-POS, EMP-POS]",paper,results,experiments,architectures,,,CLA,EMP,,,,POS,POS,POS,NEU,,,POS,POS,,, 9107,"Thus I think that the paper should be published.[paper-POS], [REC-POS]",paper,,,,,,REC,,,,,POS,,,,,,POS,,,, 9113,"There have existed several works which also provide surveys of attribute-aware collaborative filtering.[works-NEU], [CMP-NEU]",works,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 9114,"Hence, the contribution of this paper is limited, although the authors claim two differences between their work and the existing ones.[contribution-NEG, paper-NEG, work-NEG], [SUB-NEG, CMP-NEG]",contribution,paper,work,,,,SUB,CMP,,,,NEG,NEG,NEG,,,,NEG,NEG,,, 9115,"In particular, the advantages and disadvantages of different categories are not systematically compared, and hence the readers cannot get insightful comments and suggestions from this survey.[advantages and disadvantages-NEG], [EMP-NEG]",advantages and disadvantages,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9116,"n In general, survey papers are not very suitable for publication at conferences.[survey papers-NEG, conferences-NEG], [APR-NEG]]",survey papers,conferences,,,,,APR,,,,,NEG,NEG,,,,,NEG,,,, 9118,"It makes several important contributions, including extending the previously published bounds by Telgarsky et al. to tighter bounds for the special case of ReLU DNNs, giving a construction for a family of hard functions whose affine pieces scale exponentially with the dimensionality of the inputs, and giving a procedure for searching for globally optimal solution of a 1-hidden layer ReLU DNN with linear output layer and convex loss.[contributions-POS], [EMP-POS]",contributions,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9119,"I think these contributions warrant publishing the paper at ICLR 2018.[contributions-POS, paper-POS], [APR-POS, REC-POS]",contributions,paper,,,,,APR,REC,,,,POS,POS,,,,,POS,POS,,, 9120,"The paper is also well written, a bit dense in places, but overall well organized and easy to follow.[paper-POS], [CLA-POS, PNF-POS]",paper,,,,,,CLA,PNF,,,,POS,,,,,,POS,POS,,, 9121,"A key limitation of the paper in my opinion is that typically DNNs do not contain a linear final layer.[limitation-NEG, paper-NEG], [EMP-NEG]",limitation,paper,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 9122,"It will be valuable to note what, if any, of the representation analysis and global convergence results carry over to networks with non-linear (Softmax, e.g.) 
final layer.[representation analysis-NEU, results-NEU], [EMP-NEU]",representation analysis,results,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 9123,"I also think that the global convergence algorithm is practically unfeasible for all but trivial use cases due to terms like D^nw, would like hearing authors' comments in case I'm missing some simplification.[algorithm-NEG], [EMP-NEG]",algorithm,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9124,"One minor suggestion for improving readability is to explicitly state, whenever applicable, that functions under consideration are PWL.[null], [SUB-NEU, EMP-NEU]",null,,,,,,SUB,EMP,,,,,,,,,,NEU,NEU,,, 9125,"For example, adding PWL to Theorems and Corollaries in Section 3.1 will help. [Theorems-NEU, Section-NEU], [EMP-NEU]",Theorems,Section,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 9126,"Similarly would be good to state, wherever applicable, the DNN being discussed is a ReLU DNN.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 9130,"I have two problems with these claims: 1) Modern ConvNet architectures (Inception, ResNeXt, SqueezeNet, BottleNeck-DenseNets and ShuffleNets) don't have large fully connected layers.[claims-NEG], [EMP-NEG]",claims,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9131,"2) The authors reject the technique of 'Deep compression' as being impractical.[technique-NEU], [EMP-NEU]",technique,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9132,"I suspect it is actually much easier to use in practice as you don't have to a-priori know the correct level of sparsity for every level of the network.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9133,"p3. What does 'normalized' mean?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9135,"p3. Are you using an L2 weight penalty?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9136,"If not, your fully-connected baseline may be unnecessarily overfitting the training data.[baseline-NEG], [EMP-NEG]",baseline,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9137,"p3. Table 1. Where do the choice of CL Junction densities come from?[Table-NEU], [EMP-NEU]",Table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9138,"Did you do a grid search to find the optimal level of sparsity at each level?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9139,"p7-8. I had trouble following the left/right & front/back notation.[p-NEU], [PNF-NEG]",p,,,,,,PNF,,,,,NEU,,,,,,NEG,,,, 9140,"p8. Figure 7. 
How did you decide which data points to include in the plots?[p-NEU, Figure-NEU], [EMP-NEU]",p,Figure,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 9142,"Congratulations on a very interesting and clear paper.[paper-POS], [CLA-POS]",paper,,,,,,CLA,,,,,POS,,,,,,POS,,,, 9143,"While ICLR is not focused on neuroscientific studies, this paper clearly belongs here as it shows what representations develop in recurrent networks that are trained on spatial navigation.[paper-POS], [APR-POS]",paper,,,,,,APR,,,,,POS,,,,,,POS,,,, 9145,"I found it is very interesting that the emergence of these representations was contingent on some regularization constraint.[representations-POS], [EMP-POS]",representations,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9146,"This seems similar to the visual domain where edge detectors emerge easily when trained on natural images with sparseness constraints as in Olshausen&Field and later reproduced with many other models that incorporate sparseness constraints.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 9147,"I do have some questions about the training itself.[training-NEU], [EMP-NEU]",training,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9148,"The paper mentions a metabolic cost that is not specified in the paper.[paper-NEG], [SUB-NEG]",paper,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9149,"This should be added.[null], [SUB-NEG]",null,,,,,,SUB,,,,,,,,,,,NEG,,,, 9151,"I am puzzled why is the error is coming down before the boundary interaction?[error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9152,"Even more puzzling, why does this error go up again for the blue curve (no interaction)? Shouldn't at least this curve be smooth? [error-NEU], [EMP-NEU]",error,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9155,"On the positive side, the paper is mostly well-written, seems technically correct, and there are some results that indicate that the MSA is working quite well on relatively complex tasks.[paper-POS, results-POS], [CLA-POS, EMP-POS]",paper,results,,,,,CLA,EMP,,,,POS,POS,,,,,POS,POS,,, 9156,"On the negative side, there seems to be relatively limited novelty: we can think of MSA as one particular communication (i.e, star) configuration one could use is a multiagent system.[novelty-NEG], [NOV-NEG]",novelty,,,,,,NOV,,,,,NEG,,,,,,NEG,,,, 9157,"One aspect does does strike me as novel is the gated composition module, which allows differentiation of messages to other agents based on the receivers internal state.[gated composition module-POS], [NOV-POS]",gated composition module,,,,,,NOV,,,,,POS,,,,,,POS,,,, 9158,"(So, the *interpretation* of the message is learned). 
I like this idea,[idea-POS], [EMP-POS]",idea,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9159,"however, the results are mixed, and the explanation given is plausible, but far from a clearly demonstrated answer.[results-NEU, explanation-NEU], [EMP-NEG]",results,explanation,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 9161,"however the summed global signal is hand crafted information and does not facilitate an independently reasoning master agent.[issues-NEU], [SUB-NEU]",issues,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 9162,"-Please explain what is meant here by 'hand crafted information', my understanding is that the f^i in figure 1 of that paper are learned modules?[figure-NEU, modules-NEU], [PNF-NEU]",figure,modules,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 9163,"-Please explain what would be the differences with CommNet with 1 extra agent that takes in the same information as your 'master'.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9164,"*This relates also to this: Later we empirically verify that, even when the overall in- formation revealed does not increase per se, an independent master agent tend to absorb the same information within a big picture and effectively helps to make decision in a global manner.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9167,"Specifically, we compare the performance among the CommNet model, our MS-MARL model without explicit master state (e.g. the occupancy map of controlled agents in this case), and our full model with an explicit occupancy map as a state to the master agent.[performance-NEU], [EMP-NEU]",performance,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9168,"As shown in Figure 7 (a)(b), by only allowed an independently thinking master agent and communication among agents, our model already outperforms the plain CommNet model which only supports broadcast- ing communication of the sum of the signals.[model-POS], [EMP-NEU]",model,,,,,,EMP,,,,,POS,,,,,,NEU,,,, 9169,"-Minor: I think that the statement which only supports broadcast-ing communication of the sum of the signals is not quite fair: surely they have used a 1-channel communication structure, but it would be easy to generalize that.[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9170,"-Major: When I look at figure 4D, I see that the proposed approach *also* only provides the master with the sum (or really mean) with of the individual messages...? So it is not quite clear to me what explains the difference. 
*In 4.4, it is not quite clear exactly how the figure of master and slave actions is created.[proposed approach-NEG, figure-NEU], [EMP-NEU]",proposed approach,figure,,,,,EMP,,,,,NEG,NEU,,,,,NEU,,,, 9171,"This seems to suggest that the only thing that the master can communicate is action information?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9173,"* In table 2, it is not clear how significant these differences are.[table-NEG], [PNF-NEG]",table,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9174,"What are the standard errors?[standard errors-NEU], [EMP-NEG]",standard errors,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9175,"* The section 3.2 explains standard things (policy gradient), but the details are a bit unclear.[section-NEG], [SUB-NEG]",section,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9176,"In particular, I do not see how the Gaussian/softmax layers are integrated; they do not seem to appear in figure 4?[figure-NEG], [SUB-NEG]",figure,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9177,"* I cannot understand figure 7 without more explanation.[figure-NEG, explanation-NEG], [SUB-NEG]",figure,explanation,,,,,SUB,,,,,NEG,NEG,,,,,NEG,,,, 9178,"(The background is all black - did something go wrong with the pdf?)[background-NEG], [PNF-NEG]",background,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9179,"Details: * references are wrongly formatted throughout.[references-NEG], [PNF-NEG]",references,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9180,"* In this regard, we are among the first to combine both the centralized perspective and the decentralized perspective This is a weak statement (E.g., I suppose that in the greater scheme of things all of us will be amongst the first people that have walked this earth...)[null], [NOV-NEG]",null,,,,,,NOV,,,,,,,,,,,NEG,,,, 9183,"Can it be made crisper?[null], [PNF-NEU]",null,,,,,,PNF,,,,,,,,,,,NEU,,,, 9184,"* Note here that, although we explicitly input an occupancy map to the master agent, the actual infor- mation of the whole system remains the same.[information-NEU], [EMP-NEU]",information,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9185,"This is a somewhat peculiar statement.[statement-NEG], [PNF-NEG]",statement,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9186,"Clearly, the distribution of information over the agents is crucial.[information-NEU], [EMP-NEU]",information,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9191,"This works because each variable (of the state space) is modified in turn, so that the resulting update is invertible, with a tractable transformation inspired by Dinh et al 2016.[variable-NEU, update-NEU], [CMP-NEU]",variable,update,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 9192,"Overall, I believe this paper is of good quality, clearly and carefully written, and potentially accelerates mixing in a state-of-the-art MCMC method, HMC, in many practical cases.[paper-POS], [CLA-POS, EMP-POS]",paper,,,,,,CLA,EMP,,,,POS,,,,,,POS,POS,,, 9194,"The experimental section proves the usefulness of the method on a range of relevant test cases; in addition, an application to a latent variable model is provided sec5.2.[section-POS, method-POS, sec-POS], [EMP-POS]",section,method,sec,,,,EMP,,,,,POS,POS,POS,,,,POS,,,, 9195,"Fig 1a presents results in terms of numbers of gradient evaluations, but I couldn't find much in the way of computational cost of L2HMC in the paper. 
[Fig-NEG, results-NEG, paper-NEU], [SUB-NEG, EMP-NEG]",Fig,results,paper,,,,SUB,EMP,,,,NEG,NEG,NEU,,,,NEG,NEG,,, 9196,"I can't see where the number 124x in sec 5.1 stems from.[sec-NEG], [CLA-NEG]",sec,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 9197,"As a user, I would be interested in the typical computational cost of both MCMC sampler training and MCMC sampler usage (inference?), compared to competing methods.[competing methods-NEU], [SUB-NEU]",competing methods,,,,,,SUB,,,,,NEU,,,,,,NEU,,,, 9198,"This is admittedly hard to quantify objectively, but just an order of magnitude would be helpful for orientation.[null], [SUB-NEU]",null,,,,,,SUB,,,,,,,,,,,NEU,,,, 9199,"Would it be relevant, in sec5.1, to compare to other methods than just HMC, eg LAHMC?[sec-NEG], [CMP-NEG, SUB-NEG]",sec,,,,,,CMP,SUB,,,,NEG,,,,,,NEG,NEG,,, 9200,"I am missing an intuition for several things: eq7, the time encoding defined in Appendix C Appendix Fig5, I cannot quite see how the caption claim is supported by the figure (just hardly for VAE, but not for HMC).[eq-NEG, Appendix-NEG, Fig-NEG, figure-NEG], [PNF-NEG, CLA-NEG]",eq,Appendix,Fig,figure,,,PNF,CLA,,,,NEG,NEG,NEG,NEG,,,NEG,NEG,,, 9202,"# Minor errors - sec1: The sampler is trained to minimize a variation: should be maximize as well as on a the real-world[sec-NEG], [EMP-NEG]",sec,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9203,"- sec3.2 and 1/2 v^T v the kinetic: energy missing[sec-NEG], [SUB-NEG]",sec,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9204,"- sec4: the acronym L2HMC is not expanded anywhere in the paper[sec-NEG], [CLA-NEG, PNF-NEG]",sec,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 9205,"The sentence We will denote the complete augmented...p(d) might be moved to after from a uniform distribution in the same paragraph.[sentence-NEU, paragraph-NEU], [PNF-NEU]",sentence,paragraph,,,,,PNF,,,,,NEU,NEU,,,,,NEU,,,, 9206,"In paragraph starting We now update x: - specify for clarity: the first update, which yields x' / the second update, which yields x'' [paragraph-NEG], [CLA-NEG, PNF-NEG]",paragraph,,,,,,CLA,PNF,,,,NEG,,,,,,NEG,NEG,,, 9207,"- only affects $x_{bar{m}^t}$: should be $x'_{bar{m}^t}$ (prime missing) [null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 9208,"- the syntax using subscript m^t is confusing to read; wouldn't it be clearer to write this as a function, eg mask(x',m^t)?[syntax-NEG], [PNF-NEG, CLA-NEG]",syntax,,,,,,PNF,CLA,,,,NEG,,,,,,NEG,NEG,,, 9209,"- inside zeta_2 and zeta_3, do you not mean $m^t and $bar{m}^t$ ?[null], [PNF-NEG]",null,,,,,,PNF,,,,,,,,,,,NEG,,,, 9210,"- sec5: add reference for first mention of A NICE MC[sec-NEG, reference-NEG], [PNF-NEG]",sec,reference,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9211,"- Appendix A: - Let's -> Let [Appendix-NEG], [PNF-NEG]",Appendix,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9212,"- eq12 should be x'' ... -[eq-NEG], [PNF-NEG]",eq,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9213,"Appendix C: space missing after Section 5.1[Appendix-NEG, Section-NEG], [PNF-NEG]",Appendix,Section,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9214,"- Appendix D1: In this section is presented : sounds odd[Appendix-NEG, section-NEG], [PNF-NEG]",Appendix,section,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9215,"n- Appendix D3: presumably this should consist of the figure 5 ? 
Maybe specify.[Appendix-NEG, figure-NEG], [PNF-NEG]]",Appendix,figure,,,,,PNF,,,,,NEG,NEG,,,,,NEG,,,, 9218,"Strengths: The proposed method has achieved a better convergence rate in different tasks than all other hand-engineered algorithms.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9219,"The proposed method has better robustess in different tasks and different batch size setting.[proposed method-POS], [EMP-POS]",proposed method,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9220,"The invariant of coordinate permutation and the use of block-diagonal structure improve the efficiency of LQG.[null], [EMP-POS]",null,,,,,,EMP,,,,,,,,,,,POS,,,, 9221,"Weaknesses: 1. Since the batch size is small in each experiment, it is hard to compare convergence rate within one epoch.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9222,"More iterations should be taken and the log-scale style figure is suggested.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9223,"2. In Figure 1b, L2LBGDBGD converges to a lower objective value, while the other figures are difficult to compare, the convergence value should be reported in all experiments.[Figure-NEU, experiments-NEG], [CMP-NEG]",Figure,experiments,,,,,CMP,,,,,NEU,NEG,,,,,NEG,,,, 9224,"3. ""The average recent iterate"" described in section 3.6 uses recent 3 iterations to compute the average, the reason to choose ""3"", and the effectiveness of different choices should be discussed, as well as the ""24"" used in state features.[section-NEU], [EMP-NEU]",section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9225,"4. Since the block-diagonal structure imposed on A_t, B_t, and F_t, how to choose a proper block size?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9226,"Or how to figure out a coordinate group?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9227,"5. The caption in Figure 1,3, ""with 48 input and hidden units"" should clarify clearly.[Figure-NEG], [CLA-NEG]",Figure,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 9228,"The curves of different methods are suggested to use different lines (e.g., dashed lines) to denote different algorithms rather than colors only.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9229,"6. typo: sec 1 parg 5, ""current iterate"" -> ""current iteration"".[typo-NEG], [CLA-NEG]",typo,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 9231,"by Li & Malik, this paper tends to solve the high-dimensional problem.[paper-NEU], [CMP-NEU]",paper,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 9232,"With the new observation of invariant in coordinates permutation in neural networks, this paper imposes the block-diagonal structure in the model to reduce the complexity of LQG algorithm.[paper-NEU, model-NEU], [EMP-NEU]",paper,model,,,,,EMP,,,,,NEU,NEU,,,,,NEU,,,, 9239,"I could not find any technical contribution or something sufficiently mature and interesting for presenting in ICLR.[technical contribution-NEG], [APR-NEG]",technical contribution,,,,,,APR,,,,,NEG,,,,,,NEG,,,, 9240,"Some issues: - submission is supposed to be double blind but authors reveal their identity at the start of section 2.1.[section-NEG], [PNF-NEG]",section,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9241,"- implementation details all over the place (section 3. is called Implementation, but at that point no concrete idea has been proposed, so it seems too early for talking about tensorflow and keras).[implementation details-NEG, section-NEG], [PNF-NEG, EMP-NEG]",implementation details,section,,,,,PNF,EMP,,,,NEG,NEG,,,,,NEG,NEG,,, 9245,"2) though the non-saturating variant (see Eq. 
3) of ``standard GAN'' may converge towards a minimum of the Jensen-Shannon divergence, it does not mean that the minimization process follows gradients of the Jensen-Shannon divergence (and conversely, following gradient paths of the Jensen-Shannon divergence may not converge towards a minimum, but this was rather the point of the previous critiques about ``standard GAN''). [null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9246,"3) the penalization strategies introduced for ``non-standard GAN'' with specific motivations, may also apply successfully to the ``standard GAN'', improving robustness, thereby helping to set hyperparameters.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9248,"Overall, I believe that the paper provides enough material to substantiate these claims, even if the message could be better delivered.[claims-NEU], [SUB-POS]",claims,,,,,,SUB,,,,,NEU,,,,,,POS,,,, 9249,"In particular, the writing is sometimes ambiguous (e.g. in Section 2.3, the reader who did not follow the recent developments on the subject on arXiv will have difficulties to rebuild the cross-references between authors, acronyms and formulae).[writing-NEG], [CLA-NEG]",writing,,,,,,CLA,,,,,NEG,,,,,,NEG,,,, 9250,"The answers to the critiques referenced in the paper are convincing, though I must admit that I don't know how crucial it is to answer these critics, since it is difficult to assess wether they reached or will reach a large audience.[answers-POS], [IMP-NEU]",answers,,,,,,IMP,,,,,POS,,,,,,NEU,,,, 9251,"Details: - p. 4 please do not qualify KL as a distance metric [null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9252,"- Section 4.3: Every GAN variant was trained for 200000 iterations, and 5 discriminator updates were done for each generator update is ambiguous: what is exactly meant by iteration (and sometimes step elsewhere)?[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9253,"- Section 4.3: the performance measure is not relevant regarding distributions. The l2 distance is somewhat OK for means, but it makes little sense for covariance matrices. 
[Section-NEU], [EMP-NEU]",Section,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9260,"For the JCP-S model, the loss function is unclear to me.[model-NEG], [EMP-NEG]",model,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9261,"L is defined for 3rd order tensors only; how is the extended to n > 3?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9262,"Intuitively it seems that L is redefined, and for, say, n 4, the model is M(i,j,k,n) sum_1^R u_ir u_jr u_kr u_nr.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9263,"However, the statement since we are using at most third order tensors in this work I am further confused.[statement-NEG], [EMP-NEG]",statement,,,,,,EMP,,,,,NEG,,,,,,NEG,,,, 9264,"Is it just that JCP-S also incorporates 2nd order embeddings?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9265,"I believe this requires clarification in the manuscript itself.[manuscript-NEU], [EMP-NEG]",manuscript,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9266,"For the evaluations, there are no other tensor-based methods evaluated, although there exist several well-known tensor-based word embedding models existing: Pengfei Liu, Xipeng Qiuu2217 and Xuanjing Huang, Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model, IJCAI 2015 Jingwei Zhang and Jeremy Salwen, Michael Glass and Alfio Gliozzo.[evaluations-NEG], [CMP-NEU]",evaluations,,,,,,CMP,,,,,NEG,,,,,,NEU,,,, 9268,"Additionally, since it seems the main benefit of using a tensor-based method is that you can use 3rd order cooccurance information, multisense embedding methods should also be evaluated.[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9269,"There are many such methods, see for example Jiwei Li, Dan Jurafsky, Do Multi-Sense Embeddings Improve Natural Language Understanding?[methods-NEU], [EMP-NEU]",methods,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9270,"and citations within, plus quick googling for more recent works.[citations-NEU], [EMP-NEU]",citations,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9271,"I am not saying that these works are equivalent to what the authors are doing, or that there is no novelty, but the evaluations seem extremely unfair to only compare against matrix factorization techniques, when in fact many higher order extensions have been proposed and evaluated, and especially so on the tasks proposed (in particular the 3-way outlier detection).[novelty-NEU, evaluations-NEG], [CMP-NEG, EMP-NEG]",novelty,evaluations,,,,,CMP,EMP,,,,NEU,NEG,,,,,NEG,NEG,,, 9272,"Observe also that in table 2, NNSE gets the highest performance in both MEN and MTurk.[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9273,"Frankly this is not very surprising; matrix factorization is very powerful, and these simple word similarity tasks are well-suited for matrix factorization.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9274,"So, statements like as we can see, our embeddings very clearly outperform the random embedding at this task is an unnecessary inflation of a result that 1) is not good[statements-NEG, result-NEG], [EMP-NEG]",statements,result,,,,,EMP,,,,,NEG,NEG,,,,,NEG,,,, 9275,"and 2) is reasonable to not be good.[null], [EMP-NEG]",null,,,,,,EMP,,,,,,,,,,,NEG,,,, 9276,"Overall, I think for a more sincere evaluation, the authors need to better pick tasks that clearly exploit 3-way information and compare against other methods proposed to do the same.[evaluation-NEU], [EMP-NEG]",evaluation,,,,,,EMP,,,,,NEU,,,,,,NEG,,,, 9277,"The multiplicative relation analysis is interesting,[analysis-POS], [EMP-POS]",analysis,,,,,,EMP,,,,,POS,,,,,,POS,,,, 9278,"but 
at this point it is not clear to me why multiplicative is better than additive in either performance or in giving meaningful interpretations of the model.[performance-NEU, model-NEU], [EMP-NEG]",performance,model,,,,,EMP,,,,,NEU,NEU,,,,,NEG,,,, 9279,"In conclusion, because the novelty is also not that big (CP decomposition for word embeddings is a very natural idea) I believe the evaluation and analysis must be significantly strengthened for acceptance. [novelty-NEG], [NOV-NEG, IMP-NEG, REC-NEG]",novelty,,,,,,NOV,IMP,REC,,,NEG,,,,,,NEG,NEG,NEG,, 9281,"Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the luckiness of the license plate number.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9282,"The authors mentions other papers that use NN's to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings.[papers-NEU, proposed model-NEU], [CMP-NEU]",papers,proposed model,,,,,CMP,,,,,NEU,NEU,,,,,NEU,,,, 9288,"In section 7, the RNN is combined with a handcrafted feature model he criticized in a earlier section for being too simple to create an ensemble model that predicts the prices marginally better.[section-NEU], [CMP-NEU]",section,,,,,,CMP,,,,,NEU,,,,,,NEU,,,, 9290,"Sec 3 The author does not mention the following reference: Deep learning for stock prediction using numerical and textual information by Akita et al. that does incorporate non-numerical info to predict stock prices with deep networks.[Sec-NEG], [PNF-NEG]",Sec,,,,,,PNF,,,,,NEG,,,,,,NEG,,,, 9291,"Sec 4 What are the characters embedded with? This is important to specify.[Sec-NEU], [EMP-NEU]",Sec,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9292,"Is it Word2vec or something else?[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9293,"What does the lookup table consist of?[table-NEU], [EMP-NEU]",table,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9294,"References should be added to the relevant methods.[References-NEU], [EMP-NEU]",References,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9295,"Sec 5 I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison.[Sec-NEU], [SUB-NEU, CMP-NEU]",Sec,,,,,,SUB,CMP,,,,NEU,,,,,,NEU,NEU,,, 9296,"LSTMs as well could have been a point of comparison.[null], [EMP-NEU]",null,,,,,,EMP,,,,,,,,,,,NEU,,,, 9297,"Sec 6 Nothing too insightful is said about the RNN Model.[Sec-NEG], [SUB-NEG]",Sec,,,,,,SUB,,,,,NEG,,,,,,NEG,,,, 9298,"Sec 7 The ensembling was a strange extension especially with the Woo model given that the other MLP architecture gave way better results in their table.[Sec-NEG, results-NEG], [CMP-NEG]",Sec,results,,,,,CMP,,,,,NEG,NEG,,,,,NEG,,,, 9299,"Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN.[problem-NEU], [EMP-NEU]",problem,,,,,,EMP,,,,,NEU,,,,,,NEU,,,, 9300,"However comparisons are lacking and the paper is not presented very scientifically.[comparisons-NEG, paper-NEG], [SUB-NEG, CMP-NEG, PNF-NEG]",comparisons,paper,,,,,SUB,CMP,PNF,,,NEG,NEG,,,,,NEG,NEG,NEG,, 9301,"The lack of comparisons made it feel like the author cherry picked the RNN to outperform other approaches that obviously would not do well.[comparisons-NEG, approaches-NEG], [SUB-NEG, CMP-NEG]]",comparisons,approaches,,,,,SUB,CMP,,,,NEG,NEG,,,,,NEG,NEG,,,