doc_id: string (length 9)
text: sequence of review sentences
labels: sequence of per-sentence labels
SJ45Qm8Zz
[ "The paper makes a striking connection between two apparently unrelated problems: the problem of designing neural networks to handle a certain type of correlation and the problem of designing a structure to represent wave-function with quantum entanglement.", "In the wave-function context, the Schmidt decomposition of the wave function is an inner product of tensors.", "Thus, the mathematical glue connecting the neural networks and quantum entanglement is shown to be tensor networks,", "which can represent higher order tensors through inner product of lower-order tensors.", "The main technical contribution in the paper is to map convolutional networks with product pooling function (called ConvACs) to a tensor network.", "Given this mapping, the authors exploit results in tensor networks (in particular the quantum max-flow min-cut theorem) to calculate the rank of the matricized tensor between a pair of vertex sets using the (appropriately defined) min-cut.", "The connection has potential to yield fruitful new results,", "however, the potential is not manifested (yet) in the paper.", "The main application in deep convolutional networks proposed by the paper is to model how much correlation between certain partition of input variables can be captured by a given convolutional network design.", "However, it is unclear how to use Theorem 1 to design neural networks that capture a certain correlation.", "A simple example is given in the experiment where the wider layers can be either early in the the neural network or at the later stages; demonstrating that one does better than the other in a certain regime.", "It seems that there is an obvious intuition that explains this phenomenon: wider base networks with large filters are better suited to the global task and narrow base networks that have more parameters later down have more local early filters suited to the local task.", "The experiments do not quite reveal the power of the proposed approach,", "and it is unclear how, if at all, the proposed approach can be applied to more complicated networks.", "In summary, this paper is of high theoretical interest and has potential for future applications." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation" ]
SkPNib9ez
[ "This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.", "This paper requires a large amount of background knowledge", "as it depends on understanding program synthesis as it is done in the programming languages community.", "Moreover the work mentions a neurally-guided search,", "but little time is spent on that portion of their contribution.", "I am not even clear how their system is trained.", "The experimental results do show the programs can be faster but only if the user is willing to suffer a loss in accuracy.", "It is difficult to conclude overall if the technique helps in synthesis." ]
[ "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation" ]
HJ0Hc82gM
[ "Review for Deformation of Bregman divergence and its application", "Summary:This paper considers parameter estimation for discrete probability models.", "The authors propose an estimator that is computed by minimizing a deformed Bregman divergence.", "The authors prove that the proposed estimator is more computationally efficient and more robust than the maximal likelihood estimator (MLE), both in theory and simulation.", "Major Comments:1. After the definition 1, the likelihood $L(\\theta)$ is defined to be the sum of $\\log \\bar{q}_{\\theta}(x_i).", "$ Why the gradient of $L(\\theta)$ is a related to $\\tilde{p}$.", "2. After the equation (4), when the authors say $f=(U’)^{-1}$ the authors assume that the first order derivative of $U$ should be a strictly increasing function", "(otherwise the inverse function is not well defined, at least in classic notations).", "I would like to know whether we only need assume the convexity of $U$.", "Are there other assumptions?", "3. In Proposition 1, I think the “Fisher consistent” means that (6) holds for any reasonable $U$ and $f$ just as the authors said before Proposition 1.", "It is better to add this in the statement of Proposition 1 too.", "4. The “Proof 1” is better to be replaced with “Proof of Proposition 1”", "(same issues for “Proof 2”, “Proof 3”, etc).", "5. In the statement of Theorem 1, do the authors have any constraint for $U$?", "6. $\\xi_{U,f}$ appears in Theorem 2 without a clear definition.", "Even if it seems to be defined in (17), it is better to be defined again.", "7. Why Theorem 2 indicates that “the estimator (5) is not influenced so much by the outlier”?", "8. How to solve (5)?", "Is it trivial?", "I expect to see something like “We use … algorithm or toolbox to solve (5)“.", "Minor Comments:1. In Example 2, I suggest use some more beautiful symbol like $\\top$ to denote the transpose instead of $T$.", "2. The length of the equations should not exceed the line-width (e.g., (4) and (7)).", "3. In page 5, “We find some examples satisfying 25 in Theorem 2”.", "The “25” should be “(25)”." ]
[ "non-arg", "fact", "fact", "fact", "fact", "non-arg", "fact", "evaluation", "non-arg", "non-arg", "fact", "request", "request", "request", "request", "evaluation", "request", "evaluation", "request", "request", "request", "request", "request", "quote", "request" ]
ry9X12Fgz
[ "The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. ", "On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. ", "A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. ", "In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. ", "At least one of those papers ought to be cited. ", "It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. ", "Without that baseline it's not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. ", "The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. ", "A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. ", "A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot." ]
[ "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "request" ]
SyOiDTtef
[ "The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss. ", "The proposed method is applied to two large-scale datasets ", "and showed to perform better than other baselines such as label smoothing, and the standard ensemble. ", "The paper is clearly written and was easy to understand. ", "My major concern is the significance and originality of the proposed method. ", "As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et. al (2017), at scale. ", "But, because from Zhang's method, I don't see any significant difficulty in applying to large-scale problems, ", "I'm not sure that this can be a significant contribution. ", "Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models. ", "In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated. ", "Usually, ensemble becomes stronger as the number of model increases." ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation" ]
ByPQQOX1G
[ "Summary ======== The authors present a new regularization term, inspired from game theory, which encourages the discriminator's gradient to have a norm equal to one.", "This leads to reduce the number of local minima,", "so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum games with convex-concave functions.", "Clarity ====== Overall, the paper is clear and well-written.", "However, the authors should motivate better the regularization introduced in section 2.3.", "Originality ========= The idea is novel and interesting.", "In addition, it is easy to implement it for any GANs since it requires only an additional regularization term.", "Moreover, the numerical experiments are in favor of the proposed method.", "Comments ========= - Why should the norm of the gradient should to be equal to 1 and not another value?", "Is this possible to improve the performance if we put an additional hyper-parameter instead?", "- Are the performances greatly impacted by other value of lambda and c (the suggested parameter values are lambda = c = 10)?", "- As mentioned in the paper, the regularization affects the modeling performance.", "Maybe the authors should add a comparison between different regularization parameters to illustrate the real impact of lambda and c on the performance.", "- GANs performance is usually worse on very big dataset such as Imagenet.", "Does this regularization trick makes their performance better?" ]
[ "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "request", "request", "request", "fact", "request", "fact", "request" ]
SJbxHpnHz
[ "Summary ------- This paper proposes a generative model of symbolic (MIDI) melody in western popular music.", "The model uses an LSTM architecture to map sequences of chord symbols and structural identifiers (e.g., verse or chorus) to predicted note sequences which constitute the melody line.", "The key innovation proposed in this work is to jointly encode note symbols along with timing and duration information to form musical \"words\" from which melodies are composed.", "The proposed model compares compared favorably to prior work in listener preference and Turing-test studies, and performs.", "Quality ------- Overall, I found the paper interesting, and the provided examples generated by the model sound relatively good.", "The quantitative evaluations seem promising,", "though difficult to interpret fully due to a lack of provided detail (see below).", "Apart from clarification issues enumerated below, where the paper could be most substantially improved is in the evaluation of the various ingredients of the model.", "Many ideas are presented with some abstract motivation,", "but there is no comparative evaluation to demonstrate what happens when any one piece is removed from the system.", "Some examples:- How important is the \"song part\" contextual input?", "- What happens if the duration or timing information is not encoded with the note?", "- How important is the pitch range regularization?", "Since the authors claim these ideas as novel,", "I would expect to see more evaluation of their independent impact on the resulting system.", "Without such an evaluation, it is difficult to take any general lessons away from this paper.", "Clarity ------- While the main ideas of this paper are presented clearly,", "I found the details difficult to follow.", "Specifically, the following points need to be substantially clarified: - The note encoding described in Section 3.1: \"w_i = (p_i, t_i, l_i)\" describes the pitch, timing, and duration of the i'th chord, but it is not explained how time and duration are represented.", "Since these are derived from MIDI, I would expect either ticks or seconds -- or maybe a tempo-normalized variant --", "but Figure 2 suggests staff notation, which is not explicitly coded in MIDI.", "Please explain precisely how the data is represented.", "- Also in Section 3, several references are made to a \"previous\" model, but no citation is given.", "Was this a specific published work?", "- Equation 1 is missing a variable (j) for the range of the summation.", "It took a few passes for me to parse what was going on here.", "One could easily mistake it for summing over i to describe partial subsequences,", "but I don't think this is what is intended.", "- \"... 
our model does not have to consider intervals that do not contain notes\" --", "this contradicts the implication of Figure 2b, where a rest is explicitly notated in the generated sequence.", "Since MIDI does not explicitly encode rests", "(they must be inferred from the absence of note events),", "I'd suggest wording this more carefully, and being more explicit about what is produced by the model and what is notational embellishment for expository purposes.", "- Equation 2 describes the LSTM gate equations,", "but there is no concrete description of the model architecture used in this paper.", "How are the hidden states mapped to note predictions?", "What is the loss function and optimizer?", "These details are necessary to facilitate replication.", "- Equation 3 and the accompanying text implies that song part states (x_i) are conditionally independent given the current chord state (z_i).", "Is that correct?", "If so, it seems like a strange choice,", "since I would expect a part state to persist across multiple chord state transitions.", "Please explain this part in more detail.", "Also a typo in the second factor: p(z_N | z_{N-1}) should be p(z_n | z_{n-1}); likewise p(x_n | z_N).", "- The regularization penalty (Alg. 1) is also difficult to follow.", "Is S derived from P by viterbi decoding, or independent (point-wise argmax) decoding?", "What exactly is the \"E\" that results in the derivative at step 8, and why does the derivative for p_i depend on the total sum C?", "This all seems non-obvious, and worth describing in more detail since it seems critical to the performance of the model.", "- Table 2: what does \"# samples\" mean in this context?", "Why is it different from \"# songs\"?", "- Section 4.2: the description of the evaluation suggests that the proposed model's output was always played before the baseline.", "Is that correct?", "If so, does that bias the results?", "- Section 4.2: are the examples provided to listeners just the melody lines, or full mixes on top of the input chord sequence?", "It's unclear from the text,", "and it seems like a relevant detail to correctly assess the fairness of the comparison to the baselines.", "- Section 4.2: how many generated examples were included in this evaluation?", "Should this instead be in or out of key, since the tuning is presumably fixed by MIDI synthesis?", "Originality ----------- As far as I know, the proposed method is novel, though strongly related to (cited) prior work.", "The key idea seems to be encoding of notes and properties as analogous to \"words\".", "I find this analogy a little bit of a stretch,", "since even with timing and duration included, it's hard to argue that a single note event has semantic content in the way that a word does.", "A little more development of this idea, and some more concrete motivation for the specific choices of which properties to include, would go a long way in strengthening the paper.", "Significance ------------ The significance of this work is difficult to assess without independent evaluation of the proposed novel components." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "request", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "request", "request", "non-arg", "fact", "non-arg", "evaluation", "evaluation", "quote", "fact", "fact", "fact", "request", "fact", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "request", "request", "evaluation", "request", "request", "fact", "request", "request", "request", "evaluation", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation" ]
BJCXSFZgz
[ "This paper proposes leveraging labelled controlled data to accelerate reinforcement-based learning of a control policy. ", "It provides two main contributions: pre-training the policy network of a DDPG agent in a supervised manner so that it begins in reasonable state-action distribution and regalurizing the Q-updates of the q-network to be biased towards existing actions. ", "The authors use the TORCS enviroment to demonstrate the performance of their method both in final cumulative return of the policy and speed of learning.", "This paper is easy to understand but has a couple shortcomings and some fatal (but reparable) flaws:.", "1) When using RL please try to standardize your notation to that used by the community, ", "it makes things much easier to read. ", "I would strongly suggest avoiding your notation a(x|\\Theta) and using \\pi(x) ", "(subscripting theta or making conditional is somewhat less important). ", "Your a(.) function seems to be the policy here, ", "which is invariable denoted \\pi in the RL literature. ", "There has been recent effort to clean up RL notation which is presented here: ", "https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf. ", "You have no obligation to use this notation but it does make reading of your paper much easier on others in the community. ", "This is more of a shortcoming than a fundamental issue.", "2) More fatally, you have failed to compare your algorithm's performance against benchline implementations of similar algorithms. ", "It is almost trivial to run DDPG on Torcs using the openAI baselines package ", "[https://github.com/openai/baselines]. ", "I would have loved, for example, to see the effects of simply pre-training the DDPG actor on supervised data, vs. adding your mixture loss on the critic. ", "Using the baselines would have (maybe) made a very compelling graph showing DDPG, DDPG + actor pre-training, and then your complete method.", "3) And finally, perhaps complementary to point 2), you really need to provide examples on more than one environment. ", "Each of these simulated environments has its own pathologies linked to determenism, reward structure, and other environment particularities. ", "Almost every algorithm I've seen published will often beat baselines on one environment and then fail to improve or even be wors on others, ", "so it is important to at least run on a series of these. ", "Mujoco + AI Gym should make this really easy to do ", "(for reference, I have no relatinship with OpenAI). ", "Running at least cartpole (which is a very well understood control task), and then perhaps reacher, swimmer, half-cheetah etc. using a known contoller as your behavior policy (behavior policy is a good term for your data-generating policy.)", "4) In terms of state of the art you are very close to Todd Hester et. al's paper on imitation learning, ", "and although you cite it, you should contrast your approach more clearly with the one in that paper. ", "Please also have a look at some more recent work my Matej Vecerik, Todd Hester & Jon Scholz: 'Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards' for an approach that is pretty similar to yours.", "Overall I think your intuitions and ideas are good, ", "but the paper does not do a good enough job justifying empirically that your approach provides any advantages over existing methods. 
", "The idea of pre-training the policy net has been tried before ", "(although I can't find a published reference) ", "and in my experience will help on certain problems, and hinder on others, ", "primarily because the policy network is already 'overfit' somewhat to the expert, and may have a hard time moving to a more optimal space. ", "Because of this experience I would need more supporting evidence that your method actually generalizes to more than one RL environment." ]
[ "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "reference", "request", "evaluation", "evaluation", "evaluation", "reference", "request", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "reference", "request", "evaluation", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "evaluation", "request" ]
BJHcawFxM
[ "This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. ", "The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner product between the input and the mean of the weight distribution and a variance given by the inner product between the squared input and the variance of the weight distribution. ", "As a result, the parameters of the underlying discrete distribution can be optimized via backpropagation by sampling the neuron pre-activations with the reparametrization trick. ", "The authors further propose appropriate initialisation schemes and regularization techniques to either prevent the violation of the CLT or to prevent underfitting. ", "The method is evaluated on multiple experiments.", "This paper proposed a relatively simple idea for training networks with discrete weights that seems to work in practice. ", "My main issue is that while the authors argue about novelty, ", "the first application of CLT for sampling neuron pre-activations at neural networks with discrete r.v.s is performed at [1]. ", "While [1] was only interested in faster convergence and not on optimization of the parameters of the underlying distribution, ", "the extension was very straightforward. ", "I would thus suggest that the authors update the paper accordingly. ", "Other than that, I have some other comments: - The L2 regularization on the distribution parameters for the ternary weights is a bit ad-hoc; ", "why not penalise according to the entropy of the distribution which is exactly what you are trying to achieve? ", "- For the binary setting you mentioned that you had to reduce the entropy thus added a “beta density regulariser”. ", "Did you add R(p) or log R(p) to the objective function? ", "Also, with alpha, beta = 2 the beta density is unimodal with a peak at p=0.5; ", "essentially this will force the probabilities to be close to 0.5, i.e. exactly what you are trying to avoid. ", "To force the probability near the endpoints you have to use alpha, beta < 1 which results into a “bowl” shaped Beta distribution. ", "I thus wonder whether any gains you observed from this regulariser are just an artifact of optimization.", "- I think that a baseline (at least for the binary case) where you learn the weights with a continuous relaxation, such as the concrete distribution, and not via CLT would be helpful. ", "Maybe for the network to properly converge the entropy for some of the weights needs to become small (hence break the CLT). ", "[1] Wang & Manning, Fast Dropout Training." ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact", "non-arg", "fact", "fact", "fact", "fact", "request", "fact", "reference" ]
Hy2FsQKef
[ "This paper addresses the problem of one class classification.", "The authors suggest a few techniques to learn how to classify samples as negative (out of class) based on tweaking the GAN learning process to explore large areas of the input space which are out of the objective class.", "The suggested techniques are nice and show promising results.", "But I feel a lot can still be done to justify them, even just one of them.", "For instance, the authors manipulate the objective of G using a new parameter alpha_new and divide heuristically the range of its values.", "But, in the experimental section results are shown only for a single value, alpha_new=0.9", "The authors also suggest early stopping", "but again (as far as I understand) only a single value for the number of iterations was tested.", "The writing of the paper is also very unclear, with several repetitions and many typos e.g.:", "'we first introduce you a'", "'architexture'", "'future work remain to'", "'it self'", "I believe there is a lot of potential in the approach(es) presented in the paper.", "In my view a much stronger experimental section together with a clearer presentation and discussion could overcome the lack of theoretical discussion." ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "quote", "quote", "quote", "quote", "evaluation", "request" ]
H18ZJWAgG
[ "Summary The paper is well-written ", "but does not make deep technical contributions and does not present a comprehensive evaluation or highly insightful empirical results.", "Abstract / Intro I get the entire focus of the paper is some variant of Pac-Man which has received attention in the RL literature for Atari games, ", "but for the most part the impressive advances of previous Atari/RL papers are in the setting that the raw video is provided as input, ", "which is much different than solving the underlying clean mathematically abstracted problem (as a grid world with obstacles) as done here and evident in the videos. ", "Further it is honestly hard for me to be strongly motivated about a paper that focuses on the need to decompose Pac-man into sub-agents/advisor value functions.", "Section 2 Another historically well-cited paper for MDP decomposition:", "Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems, Ronald Parr. UAI 98. https://dslpitt.org/uai/papers/98/p422-parr.pdf", "Section 3 Is the additive reward decomposition a required part of the problem specification? ", "It seems so, i.e., there is no obvious method for automatically decomposing a monolithic reward function over advisors.", "Section 4 * Egocentric: Definition 1: Sure, the problem will have local optima (attractors) when decomposed suboptimally ", "-- I'm not sure what new insight we've gained from this analysis... ", "it is a general problem with any function approximation scheme that does not guarantee that the rank ordering of actions for a state is preserved.", "* Agnostic Other than approximating some type of myopic rollout, I really don't see why this approach would be reasonable? ", "I am surprised it works at all ", "though my guess is that this could simply be an artifact of evaluating on a single domain with a specific structure.", "* Empathic This appears to be the key contribution ", "though related work certainly infringes on its novelty. ", "Is this paper then an empirical evaluation of previous methods in a single Pac-man grid world variant?", "I wonder if the theory of DEC-MDPs would have any relevance for novel analysis here?", "Section 5 I'm disappointed that the authors only evaluate on a single domain; ", "presumably the empathic approach has applications beyond Pac-Man?", "The fact that empathic generally performs better is not at all surprising. ", "The fact that a modified discount factor for egocentric can also perform well is not surprising given that lower discount factors have often been shown to improve approximated MDP solutions, e.g.,", "Biasing Approximate Dynamic Programming with a Lower Discount Factor Marek Petrik, Bruno Scherrer (NIPS-08). 
http://marek.petrik.us/pub/Petrik2009a.pdf", "***Side note: The following part is somewhat orthogonal to the review above in that I would not expect the authors to address this on revision, *but* at the same time I think it provides a connection to the special case of concurrent action decomposition into advisors, ", "which could potentially provide a high impact direction of application for this work ", "(i.e., concurrent problems are hard and show up in numerous operations research problems covering inventory control, logistics, epidemic response).", "For the special case that each advisor is assigned to one action in a factored space of concurrent actions, the egocentric algorithm would be very close to the Hindsight approximation in Section 6 of this paper (including an additive decomposition of rewards):", "Planning in Factored Action Spaces with Symbolic Dynamic Programming Aswin Nadamuni Raghavan, Alan Fern, Prasad Tadepalli, Roni Khardon, and Saket Joshi (AAAI-12). https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5012/5336", "This simple algorithm is hard to beat ", "for the following reason that connects some details of your egocentric and empathic settings: rather than decomposing a concurrent MDP into independent problems per concurrent action, the optimization of each action (by each advisor) is done in sequence (advisors are ordered) and gets to condition on the previously selected advisor actions. ", "So it provides an alternate paradigm where advisors actually get to see and condition their policy on what other advisors are doing. ", "In my own work comparing optimal concurrent solutions to this approach, I have found this approach to be near-optimal and much more efficient to solve since it exploits decomposition.", "Why is this relevant to this work? ", "Because (a) it suggests another variant of the advisor decomposition that at least makes sense in the case of concurrent actions (and perhaps shared actions though this would require some extension) ", "and (b) it suggests there are more options than just the full egocentric and empathic settings in this important class of concurrent action problems that are necessarily solved in practice for large action spaces by some form of decomposition. ", "This could be an interesting direction for future exploration of the ideas in this work, where there might be additional technical novelty and more space for empirical contributions and observations." ]
[ "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "reference", "request", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "reference", "non-arg", "evaluation", "evaluation", "evaluation", "reference", "evaluation", "fact", "fact", "non-arg", "non-arg", "fact", "fact", "evaluation" ]
S1ufxZqlG
[ "The authors propose an objective whose Lagrangian dual admits a variety of modern objectives from variational auto-encoders and generative adversarial networks. ", "They describe tradeoffs between flexibility and computation in this objective leading to different approaches. ", "Unfortunately, I'm not sure what specific contributions come out, ", "and the paper seems to meander in derivations and remarks that I didn't understand what the point was.", "First, it's not clear what this proposed generalization offers. ", "It's a very nuanced and not insightful construction (eq. 3) and with a specific choice of a weighted sum of mutual informations subject to a combinatorial number of divergence measure constraints, each possibly held in expectation (eq. 5) to satisfy the chosen subclass of VAEs and GANs; and with or without likelihoods (eq. 7). ", "What specific insights come from this that isn't possible without the proposed generalization?", "It's also not clear with many GAN algorithms that reasoning with their divergence measure in the limit of infinite capacity discriminators is even meaningful ", "(e.g., Arora et al., 2017; ", "Fedus et al., 2017). ", "It's only true for consistent objectives such as MMD-GANs.", "Section 4 seems most pointed in explaining potential insights. ", "However, it only introduces hyperparameters and possible combinatorial choices with no particular guidance in mind. ", "For example, there are no experiments demonstrating the usefulness of this approach except for a toy mixture of Gaussians and binarized MNIST, explaining what is already known with the beta-VAE and infoGAN. ", "It would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.", "Misc + The \"feature marginal\" is also known as the aggregate posterior (Makhzani et al., 2015) and average encoding distribution (Hoffman and Johnson, 2016); also see Tomczak and Welling (2017)." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "reference", "reference", "fact", "evaluation", "fact", "fact", "request", "fact" ]
S1omqSUSM
[ "This paper proposes to use a hybrid of convolutional and recurrent networks to predict the DSL specification of a GUI given a screenshot of the GUI.", "Pros:The paper is clear ", "and the proposed problem is novel and well-defined.", "The training data is synthetic, allowing for arbitrarily large training sets to be generated. ", "The authors have made their synthetic dataset publicly available.", "The method seems to work well based on the samples and ROC curves presented.", "Cons: This is mostly an application of an existing method to a new domain -- as stated in the related work section, effectively the same convnet+RNN architecture has been in common use for image captioning and other vision applications.", "The UIs that are represented in the dataset seem quite simple; ", "it’s not clear that this will transfer to arbitrarily complex and multi-page UIs.", "The main motivation for the proposed system seems to be for non-technical designers to be able to implement UIs just by drawing a mockup screenshot. ", "However, the paper hasn’t shown that this is necessarily possible assuming the hand-designed mockups aren’t pixel-for-pixel matches with a screenshot that could be generated by the “DSL code -> screenshot” mapping that this system learns to invert.", "There exist a number of “drag and drop” style UI design products (at least for HTML) that would seem to accomplish the same basic goal as the proposed system in a more reliable way. ", "(Though the proposed system does have the advantage of only requiring a screenshot created using any software, rather than being restricted to a particular piece of software.)", "Overall, the paper is well-written but the novelty and applicability seems a bit limited." ]
[ "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation" ]
B129GzFxf
[ "This paper proposes a new method for reverse curriculum generation by gradually reseting the environment in phases and classifying states that tend to lead to success. ", "It additionally proposes a mechanism for learning from human-provided \"key states\".", "The ideas in this paper are quite nice, ", "but the paper has significant issues with regard to clarity and applicability to real-world problems:", "First, it is unclear is the proposed method requires access only high-dimensional observations (e.g. images) during training or if it additionally requires low-dimensional states (e.g. sufficient information to reset the environment). ", "In most compelling problems settings where a low-dimensional representation that sufficiently explains the current state of the world is available during training, then it is also likely that one can write down a nicely shaped reward function using that state information during training, in which case, it makes sense to use such a reward function. ", "This paper seems to require access to low-dimensional states, and specifically considers the sparse-reward setting, ", "which seems contrived.", "Second, the paper states that the assumption \"when resetting, the agent can be reset to any state\" can be satisfied in problems such as real-world robotic manipulation. ", "This is not correct. ", "If the robot could autonomously reset to any state, then we would have largely solved robotic manipulation. ", "Further, it is not always realistic to assume access to low-dimensional state information during training on a real robotic system (e.g. knowing the poses of all of the objects in the world).", "Third, the experiments section lacks crucial information needed to understand the experiments. ", "What is the state, observation, and action space for each problem setting? ", "What is the reward function for each problem setting? ", "What reinforcement learning algorithm is used in combination with the curriculum and tendency rewards? ", "Are the states and actions continuous or discrete? ", "Without this information, it is difficult to judge the merit of the experimental setting.", "Fourth, the proposed method seems to lack motivation, making the proposed scheme seem a bit ad hoc. ", "Could each of the components be motivated further through more discussion and/or ablative studies?", "Finally, the main text of the paper is substantially longer than the recommended page limit. ", "It should be shortened by making the writing more concise.", "Beyond my feedback on clarity and significance, here are further pieces of feedback with regard to the technical content, experiments, and related work:I'm wondering -- can the reward shaping in Equation 2 be made to satisfy the property of not affecting the final policy? ", "(see Ng et al. '09) ", "If so, such a reward shaping would make the method even more appealing.", "How do the experiments in section 5.4 compare to prior methods and ablations? ", "Without such a comparison, it is impossible to judge the performance of the proposed method and the level of difficulty of these tasks. ", "At the very least, the paper should compare the performance of the proposed method to the performance a random policy.", "The paper is missing some highly relevant references. ", "First, how does the proposed method compare to hindsight experience replay? ", "[1] Second, learning from keyframes (rather than demonstrations) has been explored in the past [1]. 
", "It would be preferable to use the standard terminology of \"keyframe\".", "[1] Andrychowicz et al. Hindsight Experience Replay. 2017", "[2] Akgun et al. Keyframe-based Learning from Demonstration. 2012", "In summary, I think this paper has a number of promising ideas and experimental results, ", "but given the significant issues in clarity and significance to real world problems, I don't think that the current version of this paper is suitable for publication in ICLR.", "More minor feedback on clarity and correctness:- Abstract: \"Deep RL algorithms have proven successful in a vast variety of domains\" -- This is an overstatement.", "- The introduction should be more clear with regard to the assumptions. ", "In particular, it would be helpful to see discussion of requiring human-provided keyframes. ", "As is, it is unclear what is meant by \"checkpoint scheme\", ", "which is not commonly used terminology.", "- \"This kind of spare reward, goal-oriented tasks are considered the most difficult challenges\" -- This is also an overstatement. ", "Long-horizon tasks and high-dimensional observations are also very difficult. ", "Also, the sentence is not grammatically correct.", "- \"That is, environment\" -> \"That is, the environment\"", "- In the last paragraph of the intro, it would be helpful to more clearly state what the experiments can accomplish. ", "Can they handle raw pixel inputs?", "- \"diverse domains\" -> \"diverse simulated domains\"", "- \"a robotic grasping task\" -> \"a simulated robotic grasping task\"", "- There are a number of issues and errors in citations, e.g. missing the year, including the first name, incorrect reference", "- Assumption 1: \\mathcal{P} has not yet been defined.", "- The last two paragraphs of section 3.2 are very difficult to understand without reading the method yet", "- \"conventional RL solver tend\" -> \"conventional RL tend\", ", "also should mention sparse reward in this sentence.", "- Algorithm 1 and Figure 1 are not referenced in the text anywhere, and should be", "- The text in Figure 1 and Figure 3 is extremely small", "- The text in Figure 3 is extremely small" ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "evaluation", "request", "fact", "request", "request", "reference", "evaluation", "request", "fact", "request", "evaluation", "request", "fact", "request", "reference", "reference", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "request", "request", "request", "fact", "fact", "evaluation", "request", "request", "request", "request", "request" ]
SkSMlWcgG
[ "The paper provides methods for training deep networks using half-precision floating point numbers without losing model accuracy or changing the model hyper-parameters. ", "The main ideas are to use a master copy of weights when updating the weights, scaling the loss before back-prop and using full precision variables to store products. ", "Experiments are performed on a large number of state-of-art deep networks, tasks and datasets ", "which show that the proposed mixed precision training does provide the same accuracy at half the memory.", "Positives - The experimental evaluation is fairly exhaustive on a large number of deep networks, tasks and datasets ", "and the proposed training preserves the accuracy of all the tested networks at half the memory cost.", "Negatives - The overall technical contribution is fairly small and are ideas that are regularly implemented when optimizing systems.", "- The overall advantage is only a 2x reduction in memory which can be gained by using smaller batches at the cost of extra compute." ]
[ "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact" ]
HkCNqISxM
[ "The authors try to use continuous time generalizations of normalizing flows for improving upon VAE-like models or for standard density estimation problems.", "Clarity: the text is mathematically very sloppy / hand-wavy.", "1. I do not understand proposition (1). ", "I do not think that the proof is correct ", "(e.g. the generator L needs to be applied to a function ", "-- the notation L(x) does not make too much sense): ", "indeed, in the case when the volatility is zero (or very small), this proposition would imply that any vector field induces a volume preserving transformation, which is indeed false.", "2. I do not really see how the sequence of minimization Eq(5) helps in practice. ", "The Wasserstein term is difficult to hand.", "3. in Equation (6), I do not really understand what $\\log(\\bar{\\rho})$ is if $\\bar{\\rho}$ is an empirical distribution. ", "One really needs $\\bar{\\rho}$ to be a probability density to make sense of that." ]
[ "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation" ]
BkzesZcxG
[ "The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. ", "This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets.", "A particularly interesting aspect of this model is the fact that it can learn these context c as features conditioned on meta-context a, which leads to a disentangled representation.", "This is also not dissimilar to ideas used in 'Bayesian Representation Learning With Oracle Constraints' Karaletsos et al 2016 ", "where similar contextual features c are learned to disentangle representations over observations and implicit supervision.", "The authors provide a clean variational inference algorithm to learn their model. ", "However, a key problem is the following: the nature of the discrete variables being used makes them hard to be inferred with variational inference. ", "The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details int heir experiments regarding the success of this approach. ", "In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization.", "This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. ", "It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings.", "Can the model recover richer structure that was imposed during data generation? ", "How limiting is the learning of a?", "How does the likelihood of the model behave under the circumstances?", "The experiments do not really convey how well this all will work in practice." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "request", "request", "request", "fact" ]
H1Pyl4sxM
[ "Summary of paper: The paper proposes an RNN-based neural network architecture for embedding programs, focusing on the semantics of the program rather than the syntax. ", "The application is to predict errors made by students on programming tasks. ", "This is achieved by creating training data based on program traces obtained by instrumenting the program by adding print statements. ", "The neural network is trained using this program traces with an objective for classifying the student error pattern (e.g. list indexing, branching conditions, looping bounds).", "---Quality: The experiments compare the three proposed neural network architectures with two syntax-based architectures. ", "It would be good to see a comparison with some techniques from Reed & De Freitas (2015) ", "as this work also focuses on semantics-based embeddings.", "Clarity: The paper is clearly written.", "Originality: This work doesn't seem that original from an algorithmic point of view ", "since Reed & De Freitas (2015) and Cai et. al (2017) among others have considered using execution traces. ", "However the application to program repair is novel (as far as I know).", "Significance: This work can be very useful for an educational platform ", "though a limitation is the need for adding instrumentation print statements by hand.", "--- Some questions/comments: - Do we need to add the print statements for any new programs that the students submit? ", "What if the structure of the submitted program doesn't match the structure of the intended solution and hence adding print statements cannot be automated?", "---References Cai, J., Shin, R., & Song, D. (2017). Making Neural Programming Architectures Generalize via Recursion. In International Conference on Learning Representations (ICLR)." ]
[ "fact", "fact", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "request", "request", "reference" ]
S1c4VEXWz
[ "This paper provides an overview of the Deep Voice 3 text-to-speech system. ", "It describes the system in a fair amount of detail and discusses some trade-offs w.r.t. audio quality and computational constraints. ", "Some experimental validation of certain architectural choices is also provided.", "My main concern with this work is that it reads more like a tech report: ", "it describes the workings and design choices behind one particular system in great detail, ", "but often these choices are simply stated as fact and not really motivated, or compared to alternatives. ", "This makes it difficult to tell which of these aspects are crucial to get good performance, and which are just arbitrary choices that happen to work okay.", "As this system was clearly developed with actual deployment in mind (and not purely as an academic pursuit), ", "all of these choices must have been well-deliberated. ", "It is unfortunate that the paper doesn't demonstrate this. ", "I think this makes the work less interesting overall to an ICLR audience. ", "That said, it is perhaps useful to get some insight into what types of models are actually used in practice.", "An exception to this is the comparison of \"converters\", model components that convert the model's internal representation of speech into waveforms. ", "This comparison is particularly interesting ", "because some of the results are remarkable, i.e. Griffin-Lim spectrogram inversion and the WORLD vocoder achieving very similar MOS scores in some cases (Table 2). ", "I wish there would be more of that kind of thing in the paper. ", "The comparison of attention mechanisms is also useful.", "I'm on the fence as I think it is nice to get some insight into a practical pipeline which benefits from many current trends in deep learning research (autoregressive models, monotonic attention, ...), ", "but I also feel that the paper is a bit meager when it comes to motivating all the architectural aspects. ", "I think the paper is well written ", "so I've tentatively recommended acceptance.", "Other comments: - The separation of the \"decoder\" and \"converter\" stage is not entirely clear to me. ", "It seems that the decoder is trained to predict spectrograms autoregressively, but its final layer is then discarded and its hidden representation is then used as input to the converter stage instead? ", "The motivation for doing this is unclear to me, ", "surely it would be better to train everything end-to-end, including the converter? ", "This seems like an unnecessary detour, ", "what's the reasoning behind this?", "- At the bottom of page 2 it is said that \"the whole model is trained end-to-end, excluding the vocoder\", ", "which I think is an unfortunate turn of phrase. ", "It's either end-to-end, or it isn't.", "- In Section 3.3, the point of mixing of h_k and h_e is unclear to me. ", "Why is this done?", "- The gated linear unit in Figure 2a shows that speaker embedding information is only injected in the linear part. ", "Has this been experimentally validated to work better than simpler mechanisms such as adding conditioning-dependent biases/gains?", "- When the decoder is trained to do autoregressive prediction of spectrograms, is it autoregressive only in time, or also in frequency? ", "I'm guessing it's the former, ", "but this means there is an implicit independence assumption ", "(the intensities in different frequency bins are conditionally independent, given all past timesteps). ", "Has this been taken into consideration? 
", "Maybe it doesn't matter because the decoder is never used directly anyway, and this is only a \"feature learning\" stage of sorts?", "- Why use the L1 loss on spectrograms?", "- The recent work on Parallel WaveNet may allow for speeding up WaveNet when used as a vocoder, ", "this could be worth looking into seeing as inference speed is used as an argument to choose different vocoder strategies (with poorer audio quality as a result).", "- The title heavily emphasizes that this model can do multi-speaker TTS with many (2000) speakers, ", "but that seems to be only a minor aspect that is only discussed briefly in the paper. ", "And it is also something that preceding systems were already capable of ", "(although maybe it hasn't been tested with a dataset of this size before). ", "It might make sense to rethink the title to emphasize some of the more relevant and novel aspects of this work." ]
[ "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "request", "fact", "request", "request", "evaluation", "fact", "fact", "request", "non-arg", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "request" ]
rJfo7HsxG
[ "The paper proposes a VAE inference network for a non-parametric topic model.", "The model on page 4 is confusing to me ", "since this is a topic model, ", "so document-specific topic distributions are required, ", "but what is shown is only stick-breaking for a mixture model.", "From what I can tell, the model itself is not new, only the fact that a VAE is used to approximate the posterior. ", "In this case, if the model is nonparametric, then comparing with Wang, et al (2011) seems the most relevant non-deep approach. ", "Given the factorization used in that paper, the q distributions are provably optimal by the standard method. ", "Therefore, something must be gained by the VAE due to a non-factorized q. ", "This would be best shown by comparing with the corresponding non-deep version of the model rather than LDA and other deep models." ]
[ "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
r1zEZ9ief
[ "The paper proposes a method which jointly learns the label embedding (in the form of class similarity) and a classification model. ", "While the motivation of the paper makes sense, ", "the model is not properly justified, ", "and I learned very little after reading the paper.", "There are 5 terms in the proposed objective function. ", "There are also several other parameters associated with them: for example, the label temperature of z_2’’ and and parameter alpha in the second last term etc.", "For all the experiments, the same set of parameters are used, ", "and it is claimed that “the method is robust in our experiment and simply works without fine tuning”. ", "While I agree that a robust and fine-tuning-free model is ideal ", "1) this has to be justified by experiment. ", "2) showing the experiment with different parameters will help us understand the role each component plays. ", "This is perhaps more important than improving the baseline method by a few point, ", "especially given that the goal of this work is not to beat the state-of-the-art." ]
[ "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "quote", "evaluation", "request", "evaluation", "evaluation", "evaluation" ]
SkKuc-Kef
[ "Proposal is to restrict the feasible parameters to ones that have produce a function with small variance over pre-defined groups of images that should be classified the same. ", "As authors note, this constraint can be converted into a KKT style penalty with KKT multiplier lambda. ", "Thus this is very similar to other regularizers that increase smoothness of the function, such as total variation or a graph Laplacian defined with graph edges connecting the examples in each group, as well as manifold regularization (see e.g. Belkin, Niyogi et al. JMLR). ", "Heck, in practie ridge regularization will also do something similar for many function classes. ", "Experiments didn't compare to any similar smoothness regularization ", "(and my preferred would have been a comparison to graph Laplacian or total variation on graphs formed by the same clustered examples). ", "It's also not clear either how important it is that they hand-define the groups over which to minimize variance or if just generally adding smoothness regularization would have achieved the same results. ", "That made it hard to get excited about the results in a vacuum. ", "Would this proposed strategy have thwarted the Russian tank legend problem? ", "Would it have fixed the Google gorilla problem? ", "Why or why not?", "Overall, I found the writing a bit bombastic for a strategy that seems to require the user to hand-define groups/clusters of examples. ", "Page 2: calling additional instances of the same person “counterfactual observations” didn’t seem consistent with the usual definition of that term… ", "maybe I am just missing the semantic link here, ", "but this isn't how we usually use the term counterfactual in my corner of the field.", "Re: “one creates additional samples by modifying…” ", "be nice to quote more of the early work doing this, ", "I believe the first work of this sort was Scholkopf’s, he called it “virtual examples” ", "and I’m pretty sure he specifically did it for rotation MNIST images (and if not exactly that, it was implied). ", "I think the right citation is “Incorporating invariances in support vector learning machines“ Scholkopf, Burges, Vapnik 1996, but also see Decoste * Scholkopf 2002 “Training invariant support vector machines.”" ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "request", "fact", "evaluation", "request" ]
ryBhOOXlM
[ "The authors ask when the hidden layer units of a multi-layer feed-forward neural network will display selectivity to object categories.", "They train 3-layer ANNs to categorize binary patterns,", "and find that typically at least some of the hidden layer units are category selective.", "The number of category selective (\"localist\") units varies depending on the size of the hidden layer, the structure of the outputs the network is trained to return (i.e., one-hot vs distributed), the neurons' activation functions, and the level of dropout-induced noise in the training procedure.", "Overall, I find the work to hint at an interesting phenomenon.", "However, the paper as presented uses an overly-simplistic task for the ANNs,", "and the work is sloppily presented.", "These factors detract from my enthusiasm.", "My specific criticisms are as follows: 1) The binary pattern classification seems overly simplistic a task for this study.", "If you want to compare to the medial temporal lobe's Jennifer Aniston cells (i.e., the Quiroga result), then an object recognition task seems much more meaningful, as does a deeper network structure.", "Likewise, to inform the representations we see in deep object recognition networks, it is better to just study those networks, instead of simple shallow binary classification networks.", "Or, at least show that the findings apply to those richer settings, where the networks do \"real\" tasks.", "2) The paper is somewhat sloppy, and could use a thorough proofreading.", "For example, what are \"figures 3, ?? and 6\"?", "And which is Figure 3.3.1?", "3) What formula is used to quantify the selectivity?", "And do the results depend on the cut-off used to label units as \"selective\" or not (i.e., using a higher or lower cutoff than 0.05)?", "Given that the 0.05 number is somewhat arbitrary, this seems worth checking.", "4) I don't think that very many people would argue that the presence of distributed representations strictly excludes the possibility of some of the units having some category selectivity.", "Consequently, I find the abstract and introduction to be a bit off-putting, coming off almost as a rant against PDP.", "This is a minor stylistic thing, but I'd encourage the authors to tone it down a bit.", "5) The finding that more of the selective units arise in the hidden layer in the presence of higher levels of noise is interesting,", "and the authors provide some nice intuition for this phenomenon (i.e., getting redundant local representations makes the system robust to the dropout).", "This seems interesting in light of the Quiroga findings of Jennifer Aniston cells: the fact that the (small number of) units they happened to record from showed such selectivity suggests that many neurons in the brain would have this selectivity, so there must be a large number of category selective units.", "Does that finding, coupled with the result from Fig. 6, imply that those \"grandmother cell\" observations might reflect an adaptation to increase robustness to noise?" ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "evaluation", "request", "request", "request", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "non-arg" ]
r1ajkbceM
[ "The authors present a new RL algorithm for sparse reward tasks. ", "The work is fairly novel in its approach, ", "combining a learned reward estimator with a contextual bandit algorithm for exploration/exploitation. ", "The paper was mostly clear in its exposition, ", "however some additional information of the motivation for why the said reduction is better than simpler alternatives would help. ", "\\n\\nPros\\n1. The results on bandit structured prediction problems are pretty good\\n", "2. The idea of a learnt credit assignment function, and using that to separate credit assignment from the exploration/exploitation tradeoff is good. ", "\\n\\nCons: \\n1. The method seems fairly more complicated than PPO / A2C, ", "yet those methods seem to perform equally well on the RL problems (Figure 2.). ", "It also seems to be designed only for discrete action spaces.\\n", "2. Reslope Boltzmann performs much worse than Reslope Bootstrap, ", "thus having a bag of policies helps. ", "However, in the comparison in Figures 2 and 3, the policy gradient methods dont have the advantage of using a bag of policies. ", "A fairer comparison would be to compare with methods that use ensembles of Q-functions. ", "(like this https://arxiv.org/abs/1706.01502 by Chen et al.). ", "The Q learning methods in general would also have better sample efficiency than the policy gradient methods.\\n", "3. The method claims to learn an internal representation of a denser reward function for the sparse reward problem, ", "however the experimental analysis of this is pretty limited (Section 5.3). ", "It would be useful to do a more thorough investigation of whether it learnt a good credit assignment function in the games. ", "One way to do this would be to check the qualitative aspects of the function in a well understood game, like Blackjack.\\n\\n", "Suggestions:\\n1. What is the advantage of the method over a simple RL method that predicts a reward at every step (such that the dense rewards add up to match the sparse reward for the episode), and uses this predicted dense reward to perform RL? ", "This, and also a bigger discussion on prior bandit learning methods like LOLS will help under the context for why we\\u2019re performing the reduction stated in the paper.", "\\n\\nSignificance: While the method is novel and interesting, the experimental analysis and the explanations in the paper leave it unclear as to whether its significant compared to prior work." ]
[ "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "request", "reference", "fact", "fact", "evaluation", "request", "request", "non-arg", "request", "evaluation" ]
rkHhxN2lG
[ "This paper focuses on the density estimation when the amount of data available for training is low. ", "The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. ", "The paper presents two independent method.", "The first method is effectively a PixelCNN combined with an attention module. ", "Specifically, the support set is convolved to generate two sets of feature maps, the so called \"key\" and the \"value\" feature maps. ", "The key feature map is used from the model to compute the attention in particular regions in the support images to generate the pixels for the new \"target\" image. ", "The value feature maps are used to copmpute the local encoding, which is used to generate the respective pixels for the new target image, taking into account also the attention values. ", "The second method is simpler, ", "and very similar to fine-tuning the basis network on the few new samples provided during training. ", "Despite some interesting elements, ", "the paper has problems.", "First, the novelty is rather limited. ", "The first method seems to be slightly more novel, ", "although it is unclear whether the contribution by combining different models is significant. ", "The second method is too similar to fine-tuning: ", "although the authors claim that \\mathcal{L}_inner can be any function that minimizes the total loss \\mathcal{L}, ", "in the end it is clear that the log-likelihood is used. ", "How is this approach (much) different from standard fine-tuning, ", "since the quantity P(x; \\theta') is anyways unknown and cannot be \"trained\" to be maximized.", "Besides the limited novelty, ", "the submission leaves several parts unclear. ", "First, why are the convolutional features of the support set in the first methods divided into \"key\" and \"value\" feature maps as in p_key=p[:, 0:P], p_value=p[:, P:2*P]? ", "Is this division arbitrary, or is there a more basic reason? ", "Also, is there any different between key and value? ", "Why not use the same feature map for computing the attention and computing eq (7)?", "Also, in the first model it is suggested that an additional feature can be having a 1-of-K channel for the supporting image label: ", "the reason is that you might have multiple views of objects, and knowing which view contributes to the attention can help learning the density. ", "However, this assumes that the views are ordered, namely that the recording stage has a very particular format. ", "Isn't this a bit unrealistic, given the proposed setup anyways?", "Regarding the second method, it is not clear why leaving this room for flexibility (by allowing L_inner to be any function) to the model is a good idea. ", "Isn't this effectively opening the doors to massive overfitting? ", "Besides, isn't the statement that the function \\mathcal{L}_inner void? ", "At the end of the day one can also claim the same for gradient descent: you don't need to have the true gradients of the true loss, as long as the objective function obtains gradually lower and lower values?", "Last, it is unclear what is the connection between the first and the second model. ", "Are these two independent models that solve the same problem? ", "Or are they connected?", "Regarding the evaluation of the models, the nature of the task makes the evaluation hard: ", "for real data like images one cannot know the true distribution of particular support examples. 
", "Surrogate tasks are explored, first image flipping, then likelihood estimation of Omniglot characters, then image generation. ", "Image flipping does not sound a very relevant task to density estimation, given that the task is deterministic. ", "Perhaps, what would make more sense would be to generate a new image given that the support set has images of a particular orientation, meaning that the model must learn how to learn densities from arbitrary rotations. ", "Regarding Omniglot character generation, the surrogate task of computing likelihood of known samples gives a bit better, ", "however, this is to be expected when combining a model without attention, with an attention module.", "All in all, the paper has some interesting ideas. ", "I encourage the authors to work more on their submission and think of a better evaluation and resubmit." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
r1Kg9atxz
[ "The authors extend the approach proposed in the \"Reverse Curriculum Learning for Reinforcement Learning\" paper by adding a discriminator that gives a bonus reward to a state based on how likely it thinks the current policy is to reach the goal from said state. ", "The discriminator is a potentially interesting mechanism to approximate multi-step backups in sparse-reward environments. ", "The approach of this paper seems severely severely limited by the assumptions made by the authors, mainly assuming a deterministic environment, known goal states and the ability to sample anywhere in the state space. ", "Some of these assumptions may be reasonable in domains such as robotics, ", "but they seem very restrictive in the domains like the games considered in the paper.", "Additional Comments: -The authors demonstrate some benefits of using Tendency rewards, ", "but made little attempt to explain why it leads to accelerated learning. ", "Results are pure performance results.", "-The authors should probably structure the tendency reward as potential based instead of using the Gaussian kernel hack they introduce in section 4.2", "- Presentation: There are several mistakes and formatting issues in References", "- Assumption 2 transformations -> transitions?", "-Need to add assumption 3: advance knowledge of goal state", "- the use of gamma as a scale factor in equation 2 is confusion, ", "it was already introduced as the discount factor ( which is default notation in RL). ", "It also isn't clear what the notation r_f denotes (is it the same as r^f in appendix?).", "-It is nice to see that the authors compare their method with alternative approaches. ", "Unfortunately, the proposed method does not seem to offer many benefits." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "fact", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation" ]
Skoq_d6gz
[ "This paper presents a semi-supervised extension for applying GANs to regression tasks.", "The authors propose two architectures: one adds a supervised regression loss to the standard unsupervised GAN discriminator loss.", "The other replaces the real/fake output of the discriminator with only a real-valued output and then applies a kernel on top of this output to predict if samples are real or fake.", "The methods are evaluated on a public driving dataset,", "and are shown to outperform an Improved-GAN which predicts the real-valued labels discretized into 10 classes.", "This is a nice idea,", "but I am not completely convinced by the experimental results.", "The proposed method is compared to Improved-GAN where the real-valued labels are discretized into 10 classes.", "Why 10?", "How was this chosen?", "The authors rightfully state that \"[...] this discretization will add some unavoidable quantization error to our training\" (Sec 5.2) and then again in the conclusion \"determining the number of [discretization] classes for each application is non-trivial\",", "yet nowhere do they explore the effects of this.", "Surely, this is a very important part of the evaluation?", "And surely as we improve the discretization-resolution the gap between the two will close?", "This needs to be evaluated.", "Also, the main motivation for a GAN-based regression model is based on the paucity of labeled training data.", "However, this is another place where the argument would greatly benefit from some empirical backing.", "I.e., I would really at least like to see how a discriminative regression model (e.g. a pretrained convnet fine-tuned for regression) compares to the proposed technique when trained (fine-tuned) only on the (smaller) labeled data set, perhaps augmented with standard image augmentation techniques to increase the size.", "Overall, I found the paper a little hard to read (especially understanding how Architecture 2 works and moreover what its motivation is)", "and empirical evaluation a bit lacking.", "I also found the claims of \"solving\" the regression task using GANs unfounded based on the experimental results presented.", "In conclusion, while the technique looks promising, the novelty seems fairly low", "and the evaluation can benefit from one or more additional baselines", "(at the very least showing how varying the discretization resolution of the Improved-GAN affects the results, but preferably one or more discriminative baselines),", "and also perhaps on one or more additional data sets to showcase the technique's generality.", "Nits:Several part are quite repetitive and can benefit from a rewrite.", "Particularly the last paragraphs in the Introduction.", "Section 3: notation seems inconsistent (p_z(z) vs P_z(z) directly below in Eqn 1)", "The second architecture needs to be explained a little better, and motivated a little better.", "Eqn 5: I think it should be 0 \\geq \\hat{y}, and not 0 \\leq \\hat{y}" ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "non-arg", "non-arg", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "request" ]
HyKlaaFxf
[ "Overall, the paper is well-written ", "and the proposed model is quite intuitive. ", "Specifically, the idea is to represent entailment as a product of continuous functions over possible worlds. ", "Specifically, the idea is to generate possible worlds, and compute the functions that encode entailment in those worlds. ", "The functions themselves are designed as tree neural networks to take advantage of logical structure. ", "Several different encoding benchmarks of the entailment task are designed to compare against the performance of the proposed model, using a newly created dataset. ", "The results seem very impressive with > 99% accuracy on tests sets.", "One weakness with the paper was that it was only tested on 1 dataset. ", "Also, should some form of cross-validation be applied to smooth out variance in the evaluation results. ", "I am not sure if there are standard \"shared\" datasets for this task, ", "which would make the results much stronger.", "Also how about the tradeoff, i.e., does training time significantly increase when we \"imagine\" more worlds. ", "Also, in general, a discussion on the efficiency of training the proposed model as compared to TreeNN would be helpful.", "The size of the world vectors, I would believe is quite important, ", "so maybe a more detailed analysis on how this was chosen is important to replicate the results.", "This problem, I think, is quite related to model counting. ", "There has been a lot of work on model counting. ", "a discussion on how this relates to those lines of work would be interesting." ]
[ "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "non-arg", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request" ]
Bk-lFRWWz
[ "The authors propose reducing the number of parameters learned by a deep network by setting up sparse connection weights in classification layers. ", "Numerical experiments show that such sparse networks can have similar performance to fully connected ones. ", "They introduce a concept of “scatter” that correlates with network performance. ", "Although I found the results useful and potentially promising, ", "I did not find much insight in this paper.", "It was not clear to me why scatter (the way it is defined in the paper) would be a useful performance proxy anywhere but the first classification layer. ", "Once the signals from different windows are intermixed, how do you even define the windows? ", "Minor Second line of Section 2.1: “lesser” -> less or fewer" ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
Sk6i-Szbz
[ "This paper proposes to use Cross-Corpus training for biomedical relationship extraction from text. ", "- Many wording issues, like citation formats, grammar mistakes, missing words, e.g., Page 2: it as been", "- The description of the methods should be improved. ", "For instance, why the input has only two entities? ", "In many biomedical sentences, there are more than two entities. ", "How can the proposed two models handle these cases? ", "- The paper just presents to train on a larger labeled corpus and test on a task with a smaller labeled set. ", "Why is this novel? ", "Nothing is novel in the deep models (CNN and TreeLSTM). ", "- Missing refs, like: A simple neural network module for relational reasoning, Arxiv 2017" ]
[ "fact", "fact", "request", "request", "fact", "non-arg", "fact", "evaluation", "evaluation", "fact" ]
H1IrTpFxz
[ "The paper addresses the problem of learning the form of the activation functions in neural networks.", "The authors propose to place Gaussian process (GP) priors on the functional form of each activation function (each associated with a hidden layer and unit) in the neural net.", "This somehow allows to non-parametrically infer from the data the \"shape\" of the activation functions needed for a specific problem.", "The paper then proposes an inference framework (to approximately marginalize out all GP functions) based on sparse GP methods that use inducing points and variational inference.", "The inducing point approximation used here is very efficient since all GP functions depend on a scalar input (as any activation function!)", "and therefore by just placing the inducing points in a dense grid gives a fast and accurate representation/compression of all GPs in terms of the inducing function values (denoted by U in the paper).", "Of course then inference involves approximating the finite posterior over inducing function values U", "and the paper make use of the standard Gaussian approximations.", "In general I like the idea", "and I believe that it can lead to a very useful model.", "However, I have found the current paper quite preliminary and incomplete.", "The authors need to address the following:", "First (very important): You need to show experimentally how your method compares against regular neural nets (with specific fixed forms for their activation functions such relus etc).", "At the moment in the last section you mention", "\"We have validated networks of Gaussian Process Neurons in a set of experiments, the details of which we submit in a subsequent publication. In those experiments, our model shows to be significantly less prone to overfitting than a traditional feed-forward network of same size, despite having more parameters.\"", "===> Well all this needs to be included in the same paper.", "Secondly: Discuss the connection with Deep GPs (Damianou and Lawrence 2013).", "Your method seems to be connected with Deep GPs", "although there appear to be important differences as well.", "E.g. you place GPs on the scalar activation functions in an otherwise heavily parametrized neural network (having interconnection weights between layers) while deep GPs model the full hidden layer mapping as a single GP (which does not require interconnection weights).", "Thirdly: You need to better explain the propagation of uncertainly in section 3.2.2 and the central limit of distribution in section 3.2.1.", "This is the technical part of your paper which is a non-standard approximation.", "I will suggest to give a better intuition of the whole idea and move a lot of mathematical details to the appendix." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "quote", "request", "request", "evaluation", "evaluation", "fact", "request", "evaluation", "request" ]
Bk9oIe5gG
[ "The paper investigates different representation learning methods to create a latent space for intrinsic goal generation in guided exploration algorithms.", "The research is in principle very important and interesting.", "The introduction discusses a great deal about intrinsic motivations and about goal generating algorithms.", "This is really great,", "just that the paper only focuses on a very small aspect of learning a state representation in an agent that has no intrinsic motivation other than trying to achieve random goals.", "I think the paper (not only the Intro) could be a bit condensed to more concentrate on the actual contribution.", "The contribution is that the quality of the representation and the sampling of goals is important for the exploration performance and that classical methods like ISOMap are better than Autoencoder-type methods.", "Also, it is written in the Conclusions (and in other places): \"[..] we propose a new intrinsically Motivated goal exploration strategy....\".", "This is not really true.", "There is nothing new with the intrinsically motivated selection of goals here, just that they are in another space.", "Also, there is no intrinsic motivation.", "I also think the title is misleading.", "The paper is in principle interesting.", "However, I doubt that the experimental evaluations are substantial enough for profound conclusion.", "Several points of critic: - the input space was very simple in all experiments, not suitable for distinguishing between the algorithms,", "for instance, ISOMap typically suffers from noise and higher dimensional manifolds, etc.", "- only the ball/arrow was in the input image, not the robotic arm.", "I understand this because in phase 1 the robot would not move,", "but this connects to the next point:- The representation learning is only a preprocessing step requiring a magic first phase.", "-> Representation is not updated during exploration", "- The performance of any algorithm (except FI) in the Arm-Arrow task is really bad but without comment.", "- I am skeptical about the VAE and RFVAE results.", "The difference between Gaussian sampling and the KDE is a bit alarming,", "as the KL in the VAE training is supposed to match the p(z) with N(0,1).", "Given the power of the encoder/decoder it should be possible to properly represent the simple embedded 2D/3D manifold and not just a very small part of it as suggested by Fig 10.", "I have a hard time believing these results.", "I urge you to check for any potential errors made.", "If there are not mistakes then this is indeed alarming.", "Questions: - Is it true that the robot always starts from same initial condition?!", "Context=Emptyset.", "- For ISOMap etc, you also used a 10dim embedding?", "Suggestion: - The main problem seems to be that some algorithms are not representing the whole input space.", "- an additional measure that quantifies the difference between true input distribution and reproduced input distribution could tier the algorithms apart and would measure more what seems to be relevant here.", "One could for instance measure the KL-divergence between the true input and the sampled (reconstructed) input (using samples and KDE or the like).", "- This could be evaluated on many different inputs (also those with a bit more complicated structure) without actually performing the goal finding.", "- BTW: I think Fig 10 is rather illustrative and should be somehow in the main part of the paper", "On the positive side, the paper provides lots of details in the Appendix.", 
"Also, it uses many different Representation Learning algorithms and uses measures from manifold learning to access their quality.", "In the related literature, in particular concerning the intrinsic motivation, I think the following papers are relevant:J. Schmidhuber, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Front. Psychol., 2013.", "and G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.", "Typos and small details:p3 par2: for PCA you cited Bishop.", "Not critical, but either cite one the original papers or maybe remove the cite altogether", "p4 par-2: has multiple interests...: interests -> purposes?", "p4 par-1: Outcome Space to the agent is is ...", "Sec 2.2 par1: are rapidly mentioned... -> briefly", "Sec 2.3 ...Outcome Space O, we can rewrite the architecture as: and then comes the algorithm.", "This is a bit weird", "Sec 3: par1: experimental campaign -> experiments?", "p7: Context Space: the object was reset to a random position or always to the same position?", "Footnote 14: superior to -> larger than", "p8 par2: Exploration Ratio Ratio_expl... probably also want to add (ER) as it is later used", "Sec 4: slightly underneath -> slightly below", "p9 par1: unfinished sentence: It is worth noting that the....", "one sentence later: RP architecture? RPE?", "Fig 3: the error of the methods (except FI) are really bad.", "An MSE of 1 means hardly any performance!", "p11 par2: for e.g. with the SAGG..... grammar?", "Plots in general: use bigger font sizes." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "fact", "reference", "reference", "fact", "request", "request", "fact", "request", "request", "evaluation", "request", "request", "request", "request", "request", "fact", "request", "evaluation", "evaluation", "request", "request" ]
BJJTve9gM
[ "This paper proposes to adapt convnet representations to new tasks", "while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network", "while keeping the filters themselves fixed.", "Pros The proposed approach is novel and broadly applicable.", "By definition it maintains the exact performance on the original task,", "and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network).", "The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one.", "A number of different fine-tuning regimes are explored.", "The paper is mostly clear and well-written", "(though with a few typos that should be fixed).", "Cons/Questions/Suggestions The distinction between the convolutional and fully-connected layers (called “classifiers”) in the approach description (sec 3) is somewhat arbitrary", "-- after all, convolutional layers are a generalization of fully-connected layers.", "(This is hinted at by the mention of fully convolutional networks.)", "The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights.", "A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers,", "but I understand this would be a rather large experimental burden.", "When discussing the controller initialization (sec 4.3), it’s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results.", "Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning?", "It’s not immediately clear to me whether this is the case", "-- it could help to clarify this in the text.", "If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an “identity+noise” initialization of the controller matrix,", "which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation.", "(Equivalently, one could treat the controller-weighted filters as a “residual” term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W\\crossF) rather than just W\\crossF.)", "The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting.", "It would be interesting to see how this compares with the hard thresholding method used here.", "(As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.)", "Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly." ]
[ "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "non-arg", "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation" ]
Sy8Kdltgz
[ "This paper proposes a method of learning sparse dictionary learning by introducing new types of priors. ", "Specifically, they designed a novel idea of defining a metric to measure discriminative properties along with the quality of presentations.", "It is also presented the power of the proposed method in comparison with the existing methods in the literature.", "Overall, the paper deals with an important issue in dictionary learning and proposes a novel idea of utilizing a set of priors. ", "To this reviewer’s understanding, the thresholding parameter $\\tau_{c}$ is specific for a class $c$ only, ", "thus different classes have different $\\tau$ vectors. ", "If so, Eq. (6) for approximation of the measure $D(\\cdot)$ is not clear how the similarity measure between ${\\bf y}_{c,k}$ and ${\\bf y}_{c1,k1}$, \\ie, $\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}+\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}$ and $\\left\\|{\\bf y}_{c,k}\\odot{\\bf y}_{c1,k1}\\right\\|_{2}^{2}$, works to approximate it. ", "It would be appreciated to give more detailed description on it and geometric illustration, if possible.", "There are many typos and grammatical errors, ", "which distract from reading and understanding the manuscript." ]
[ "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request", "fact", "evaluation" ]
S12o7fqlM
[ "This paper tackles the task of learning embeddings of multi-relational graphs using a neural network.", "As much of previous work, the proposed architecture works on triples (h, r, t) wth h, t entities and r the relation type.", "Despite interesting experimental results, I find that the paper carries too many imprecisions as is.", "* One of the main originality of the approach is to be able for a given input triple to train by sequentially removing in turn the head h, then the tail t and finally the relation r.", "(called multi-shot in the paper).", "However, most (if not all) approaches learning embeddings of multi-relational graphs also create multiple examples given a triple.", "And that, at least since \"Learning Structured Embeddings of Knowledge Bases\" by Bordes et al. 2011 that was predicting h and t (not r).", "The only difference is that here it is done sequentially", "while most methods sample one case each time.", "Not really meaningful or at least not proved meaningful here.", "* The sequential/RNN-like structure is unclear and it is hard to see how it relates to the data.", "* Writing that the proposed method \"unsupervised, which is distinctly different from previous works\" is not true or should be rephrased.", "The only difference comes from that the prediction function (softmax and not ranking for instance) and the loss used.", "But none of the methods compared in the experiments use more information than GEN (the original graph).", "GEN is not the only model using a softmax by the way.", "* The fact of predicting indistinctly a fact or its reverse seems rather worrying to me.", "Predicting that \"John is_father_of Paul\" or that \"John is_child_of Paul\" is not the same..!", "How is assessed the fact that a prediction is conceptually correct?", "Using types?", "* The bottom part of Table 2 is surprising.", "How come for the task of predicting Head, the model trained only at predicting heads (GEN(t,r => h)) performs worse than the model trained only at predicting tails (GEN(h,r => t))?" ]
[ "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact", "request", "non-arg", "evaluation", "fact" ]
BynUQBQZM
[ "This paper proposes a regularization to the softmax layer, which try to make the distribution of feature representation (inputs fed to the softmax layer) more meaningful according to the Euclidean distance.", "The proposed isotropic loss in equation 3 tries to equalize the squared distances from each point to the mean,", "so the features are encouraged to lie close to a sphere.", "Overall, the proposed method is a relatively simple tweak to softmax.", "The authors show that empirically, features learned under softmax loss + isotropic regularization outperforms other features in Euclidean metric-based tasks.", "My main concern with this paper is the motivation:", "what are the practical scenarios in which one would want to used proposed method?", "1. It is true that features learned with the pure softmax loss may not presents the ideal similarity under the Euclidean metric (e.g. the problem depicted in Figure 1),", "because they are not trained to do so:", "their purpose is just to predict the correct label.", "While the proposed regularization does lead to a nicer Euclidean geometry,", "there is not sufficient motivation and evidence showing this regularization improves classification accuracy.", "2. In table 2, the authors seem to indicate that not using the label information in the definition of Isotropic loss is an advantage.", "But this does not matter", "since you already use the labels in the softmax loss.", "3. I can not easily think of scenarios in which, we would like to perform KNN in the feature space (Table 3) after training a softmax layer.", "In fact, Table 3 shows KNN is almost always worse than softmax in terms of classification accuracy.", "4. Running kmeans or agglomerative clustering in the feature space (Table 5) *using the Euclidean metric* is again ill-posed,", "because the softmax layer is not trained to do this.", "If one really wants good clustering performance, one shall always try to learn a good metric, or ,", "why do not you perform clustering on the softmax output (a probability vector?)", "5. The experiments on adversarial robustness and face verification seems more interesting to me,", "but the tasks were not carefully explained for someone not familiar with that literature.", "Perhaps for these tasks, multi-class classification is not the most correct objective, and maybe the proposed regularization can help,", "but the motivations are not given." ]
[ "fact", "fact", "fact", "evaluation", "fact", "evaluation", "non-arg", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact" ]
ryT2f8KgM
[ "This paper continues a trend of incremental improvements to Wasserstein GANs (WGAN), ", "where the latter were proposed in order to alleviate the difficulties encountered in training GANs. ", "Originally, Arjovsky et al. [1] argued that the Wasserstein distance was superior to many others typically used for GANs. ", "An important feature of WGANs is the requirement for the discriminator to be 1-Lipschitz, ", "which [1] achieved simply by clipping the network weights. ", "Recently, Gulrajani et al. [2] proposed a gradient penalty \"encouraging\" the discriminator to be 1-Lipschitz. ", "However, their approach estimated continuity on points between the generated and the real samples, ", "and thus could fail to guarantee Lipschitz-ness at the early training stages. ", "The paper under review overcomes this drawback by estimating the continuity on perturbations of the real samples. ", "Together with various technical improvements, this leads to state-of-the-art practical performance both in terms of generated images and in semi-supervised learning. ", "In terms of novelty, the paper provides one core conceptual idea followed by several tweaks aimed at improving the practical performance of GANs. ", "The key conceptual idea is to perturb each data point twice and use a Lipschitz constant to bound the difference in the discriminator’s response on the perturbed points. ", "The proposed method is used in eq. (6) together with the gradient penalty from [2]. ", "The authors found that directly perturbing the data with Gaussian noise led to inferior results ", "and therefore propose to perturb the hidden layers using dropout. ", "For supervised learning they demonstrate less overfitting for both MNIST and CIFAR 10. ", "They also extend their framework to the semi-supervised setting of Salismans et al 2016 and report improved image generation. ", "The authors do an excellent comparative job in presenting their experiments. ", "They compare numerous techniques (e.g., Gaussian noise, dropout) and demonstrates the applicability of the approach for a wide range of tasks. ", "They use several criteria to evaluate their performance (images, inception score, semi-supervised learning, overfitting, weight histogram) and compare against a wide range of competing papers. ", "Where the paper could perhaps be slightly improved is writing clarity. ", "In particular, the discussion of M and M' is vital to the point of the paper, ", "but could be written in a more transparent manner. ", "The same goes for the semi-supervised experiment details and the CIFAR-10 augmentation process. ", "Finally, the title seems uninformative. ", "Almost all progress is incremental, ", "and the authors modestly give credit to both [1] and [2], ", "but the title is neither memorable nor useful in expressing the novel idea. ", "[1] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein gan.", "[2] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans." ]
[ "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "request", "evaluation", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "reference", "reference" ]
r1OoL_Yxz
[ "The authors suggest using a mixture of shared and individual rewards within a MARL environment to induce cooperation among independent agents.", "They show that on their specific application this can lead to a better overall global performance than purely sharing the global signal, or using just the independent rewards.", "The paper is a little too focused on the packet routing example domain and fails to deliver much in terms of a general theory of reward design for cooperative behaviours beyond showing that mixed rewards can lead to improved results in their domain.", "They discuss what and how rewards,", "and this could be made more formal, as well as (at the very least) some guiding principles to follow when mixing rewards.", "It feels like there is a missing section between sections 2 and 3, where this methodological content could be described.", "The rest of the paper has similar issues, with key intuition and concepts either missing entirely or under-represented.", "The technical content often assumes that the reader is familiar with certain terms,", "and it is difficult to see what meaningful conclusions can be drawn from the evaluation.", "On a minor note, the use of the term cooperative in this paper could be better defined.", "In game theory, cooperative games are those in which agents share rewards.", "Non-cooperative (game theory) games are those where agents have general reward signals (not necessarily cooperature or adversarial).", "Conventionally (yes there is existing reward design/shaping literature for MARL) people have used the same terms in MARL.", "Perhaps the authors could define their approach as weakly cooperative, or emergent cooperation.", "The related work could be better described.", "There are existing papers on MARL and the issues with cooperation among independent learners,", "and this could be referenced.", "This includes reward shaping and reward potential.", "I would also have expected to see brief mention of empowerment in this section too (the agent favouring states where it has the power to control outcomes in an information theoretic sense), as an underyling principle for intrinsic reward.", "However, more importantly, the authors really needed to do more to synthesize this into an overall picture of what principles are at play and what ideas/methods exist that have tried to exploit some of these principles.", "Detailed comments: • [p2] the authors say \"We set the meta reward signals as 1 - max(U l ).\", before they define what U_l is.", "• [p2] we have \"As many applications in the real world can be modeled using similar methods, we expect that other fields can also benefit from this work.\"", "This statement is too vague,", "and the authors could do more to identify which application areas might benefit.", "• [p3, first para] \"However, the reward design studies for MARL is so limited.\"", "Drop the word 'so'.", "Also, I would argue that there have been quite a few (non-deep) discussions about reward design in MARL, cooperative, non-cooperative and competitive domains.", "• [p3, sec 2.2] \"This makes the diligent agents confuse about...\"", "should be \"confused\", and I would advise against anthropomorphism at least when the meaning is obscured.", "• [p3, sec 3] \"After having considered several other options, we finally choose the Packet Routing Domain as our experimental environments.\"", "Not sure what useful information is being conveyed here.", "• [sec 3] THe domain could be better described with intuition and formal descriptions, e.g. 
link utilization ratio, etc, before.", "• [p6] \"Importantly, the proposed blR seems to have similar capacity with dlR,\"", "The discussion here is all in terms of the reward acronyms with very little call on intuition or other such assistance to the reader.", "• [p7] \"We firstly try gR without any thinking\"", "The language could be better here." ]
[ "fact", "fact", "evaluation", "fact", "request", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "request", "request", "request", "fact", "quote", "evaluation", "request", "quote", "request", "evaluation", "quote", "request", "quote", "evaluation", "request", "quote", "evaluation", "quote", "request" ]
SyJaBw1eG
[ "Summary: The paper considers second-order optimization methods for training of neural networks.", "In particular, the contribution of the paper is a Hessian-free method that works on blocks of parameters ", "(this is a user defined splitting of the parameters in blocks, e.g., parameters of each layer is one block, or parameters in several layers could constitute a block). ", "This results into a block-diagonal approximation to the curvature matrix, in order to improve Hessian-free convergence properties: ", "in the latter, a single step might require many CG steps, ", "so the benefit from using second-order information is not apparent.", "This is mainly an experimental work, ", "where the authors show the merits of their approach on deep autoencoders, convolutional networks and LSTMs: ", "results show favourable performance compared to the original Hessian-free approach and the Adam method.", "Originality: The paper is based on the works of Collobert (2004) and Le Roux et al. (2008), as well as the work of Martens: ", "the twist is that each layer of the neural network is considered a parameter block, ", "so that gradient interactions among weights in a single layer are more useful than those between weights in different layers. ", "This increases the separability of the problem and reduces the complexity. ", "Importance: Understanding the difference between first- and second-order methods for NN training is an important topic. ", "Using second-order methods could be considered at its infancy, compared to the wide variety of first-order methods. ", "Having new results on second-order methods with interesting results would definitely attract some attention at the conference. ", "Presentation/Clarity: The paper is well structured and well written. ", "The authors clearly place their work w.r.t. state of the art and previous works, ", "so that it is clear what is new and what is known.", "Comments: 1. It is not clear why the deficiency of first-order methods on training NNs with big batches motivates us to turn into second-order methods. ", "Is there a reasoning for this statement? ", "Or is it just because second-order methods are kind-of the only other alternative we have?", "2. Assuming we can perform a second-order method, like Newton's method, on a deep NN. ", "Since originally Newton's method was designed to find solutions that have gradient equal to zero, ", "and since NNs have saddle points (probably many more than local minima), ", "even if we could perfectly perform second-order Newton motions, there is no guarantee whether we converge to a local minimum or a saddle point. ", "However, since we perform Newton's method approximately in practice, ", "this might help escaping saddle points. ", "Any comment on this aspect ", "(I'm not aware whether this is already commented in Schraudolph 2002, where the Gauss-Newton matrix was proposed instead of the Hessian)?" ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "non-arg" ]
B1pcOYBlG
[ "Quality This paper demonstrates that convolutional and relational neural networks fail to solve visual relation problems by training networks on artificially generated visual relation data. ", "This points at important limitations of current neural network architectures where architectures depend mainly on rote memorization.", "Clarity The rationale in the paper is straightforward. ", "I do think that breakdown of networks by testing on increasing image variability is expected given that there is no reason that networks should generalize well to parts of input space that were never encountered before.", "Originality While others have pointed out limitations before, ", "this paper considers relational networks for the first time.", "Significance This work demonstrates failures of relational networks on relational tasks, ", "which is an important message. ", "At the same time, no new architectures are presented to address these limitations.", "Pros Important message about network limitations.", "Cons Straightforward testing of network performance on specific visual relation tasks. ", "No new theory development. ", "Conclusions drawn by testing on out of sample data may not be completely valid." ]
[ "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact" ]
ry6owJ9lM
[ "This paper introduces a simple extension to parallelize Hyperband. ", "Points in favor of the paper:* Addresses an important problem", "Points against:* Only 5-fold speedup by parallelization with 5 x 25 workers, and worse performance in the same budget than Google Vizier (even though that treats the problem as a black box)", "* Limited methodological contribution/novelty", "The paper's methodological contribution is quite limited: ", "it amounts to a straight-forward parallelization of successive halving (SHA). ", "Specifically, whenever a worker frees up, do a new run on it, at the highest rung possible while making sure to not run too many runs for too high rungs. ", "(I am pretty sure that is the idea, even though Algorithm 1, which is supposed to give the details, appears to have a bug in Procedure get_job ", "-- it would always either pick the highest rung or the lowest!)", "Empirically, the paper strangely does not actually evaluate a parallel version of Hyperband, but only evaluates the 5 parallel variants of SHA that Hyperband would run, each of them with all workers. ", "The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. ", "I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? ", "I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? ", "If I understand this correctly, I would clearly call this a negative result.", "Likewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, ", "and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. ", "Parallel Hyperband would run the 5 SHA variants in parallel, ", "so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. ", "This would obtain a perplexity of above 90, ", "which is quite a bit worse than Vizier's result of about 82. ", "In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; ", "this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. ", "Again, this is apparently a negative result. ", "(For Figure 3, right, no results for Vizier are given yet.)", "If I understand correctly, the experiment in Section 4.4 does not involve any run of Hyperband, but merely plots predictions of Qi et al.'s Paelo framework of how many models could be evaluated with a growing number of GPUs.", "Therefore, all empirical results for parallel Hyperband reported in the paper appear to be negative. ", "This confuses me, ", "especially since the authors seem to take them as positive results. ", "Because the original Hyperband paper argued that Bayesian optimization does not parallelize as well as random search / Hyperband, ", "and because Hyperband has been reported to work much better than Bayesian optimization on a single node, ", "I would have expected clear improvements of parallel Hyperband over parallel Bayesian optimization (=Vizier in the authors' setup). ", "However, this is not what I see in the results. ", "Am I mistaken somewhere? 
", "If not, based on these negative results the paper does not seem to quite clear the bar for ICLR.", "Details, in order of appearance in the paper:- Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm? ", "The Vizier paper by Golovin et al (2017) states that for large budgets other optimizers often perform better, and the budget in the large scale experiments is as high as 5000 function evaluations. ", "Also, isn't there an automatic choice built into Vizier to pick the optimizer expected to be best? ", "I think using a suboptimal version of Vizier would be a problem for the experimental setup.", "- Algorithm 1: this needs some improvement; in particular fixing the bug I mentioned above.", "- Section 3.1: Li et al (2017) do not analyze any algorithm theoretically. ", "They also do not discuss finite vs. infinite horizon. ", "I believe the authors meant Li et al's arXiv paper (2016) in both of these cases.", "- Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. ", "Can you please make this clearer?", "- \"A complete theoretical treatment of asynchronous SHA is out of the scope of this paper\" ", "-> is some theoretical treatment in scope?", "- Section 4.1: It seems very useful to already recommend configurations in each rung of Hyperband, ", "and I am surprised that the methods section does not mention this. ", "From the text in this experiments section, it feels a little like that was always part of Hyperband; ", "I didn't think it was, ", "so I checked the original papers and blog posts, ", "and both the ICLR 2017 and the arXiv 2016 paper state \"In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R.\" ", "and Kevin Jamieson's blog post on Hyperband (https://people.eecs.berkeley.edu/~kjamieson/hyperband.html) explicitly states: \"While random and the Bayesian Optimization algorithms output their first recommendation after max_iter iterations, Hyperband does not output anything until about max_iter(logeta(max_iter)+1) iterations [...]\"", "Therefore, recommending after each rung seems to be a contribution of this paper, ", "and I think it would be nice to read about this in the methods section. ", "- Experiment 1 (SVM) used dataset size as a budget, which is what Fabolas (\"Fast Bayesian optimization on large datasets\") is designed for according to Klein et al (2017). ", "On the other hand, Experiments (2) and (3) used the number of epochs as a budget, and Fabolas is not designed for that ", "(one would want to use a different kernel, for epochs, e.g., like Freeze-Thaw Bayesian optimization (FTBO) by Swersky et al (2014), instead of a kernel made for dataset sizes). ", "Therefore, it is not surprising that Fabolas does not work as well in those cases. ", "The case of number of epochs as a budget would be the domain of FTBO. ", "I know that there is no reference implementation of FTBO, ", "so I am not asking for a comparison, but the comparison against Fabolas is misleading for Experiments (2) and (3). ", "This doesn't really change anything for the paper: ", "the authors could still make the case that Fabolas hasn't been designed for this case and that (to the best of my knowledge) there simply isn't an implementation of a BO algorithm that is. 
", "Fabolas is arguably the closest thing, ", "so the results could still be reported, just not as an apples-to-apples comparison; probably best as \"Fabolas-like, with dataset size kernel\" in the figure. ", "The justification to not compare against Fabolas in the parallel regime is clearly valid.", "- A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? ", "And not on any runs of Hyperband, correct? ", "Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? ", "How does this relate to the results in the cited paper \"Accurate, Large-batch SGD: Training ImageNet in 1 Hour\" (https://arxiv.org/abs/1706.02677)? ", "Quoting from its abstract: \"Using commodity hardware, our implementation achieves ∼ 90% scaling efficiency when moving from 8 to 256 GPUs.\" ", "That seems like a very good utilization of parallel computing power?", "- There is no conclusion / future work." ]
[ "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "evaluation", "request", "quote", "request", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "fact", "fact", "request", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "quote", "evaluation", "fact" ]
HkeBFwYgf
[ "This paper introduces a new toolbox for deep neural networks learning and evaluation.", "The central idea is to include time in the processing of all the units in the network.", "For this, the authors propose a paradigm switch: form layerwise-sequential networks, where at every time frame the network is evaluated by updating each layer – from bottom to top – sequentially; to layerwise-parallel networks, where all the neurons are updated in parallel.", "The new paradigm implies that the layer update is achieved by using the stored previous state and the corresponding previous state of the previous layer.", "This has three consequences.", "First, every layer now use memory,", "a condition that already applies for RNNs in layerwise-sequential networks.", "Second, in order to have a consistent output, the information has to flow in the network for a number of time frames equal to the number of layers.", "In Neuroscience, this concept is known as reaction time.", "Third, since the network is not synchronized in terms of the information that is processed in a specific time frame, there are discrepancies w.r.t. the layerwise-sequential networks computation: all the techniques used to train deep NNs have to be reconsidered.", "Overall, the concept is interesting and timely especially for the rising field of spiking neural networks or for large and distributed architectures.", "The paper, however, should probably provide more examples and results in terms of architectures that can been implemented with the toolbox in comparison with other toolboxes.", "The paper presents a single example in which either the accuracy and the training time are not reported.", "While I understand that the main result of this work is the toolbox itself, more examples and results would improve the clarity and the implications for such paradigm switch.", "Another concern comes from the choice to use Theano as back-end,", "since it's known that it is going to be discontinued.", "Finally I suggest to improve the clarity and description of Figure 2,", "which is messy and confusing especially if printed in B&W." ]
[ "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "fact", "request", "evaluation" ]
SyrOMN9eM
[ "The authors propose WAGE, which discretized weights, activations, gradients, and errors at both training and testing time. ", "By quantization and shifting, SGD training without momentum, and removing the softmax at output layer as well, the model managed to remove all cumbersome computations from every aspect of the model, ", "thus eliminating the need for a floating point unit completely. ", "Moreover, by keeping up to 8-bit accuracy, the model performs even better than previously proposed models. ", "I am eager to see a hardware realization for this method because of its promising results. ", "The model makes a unified discretization scheme for 4 different kinds of components, ", "and the accuracy for each of the kind becomes independently adjustable. ", "This makes the method quite flexible and has the potential to extend to more complicated networks, such as attention or memory. ", "One caveat is that there seem to be some conflictions in the results shown in Table 1, especially ImageNet. ", "Given the number of bits each of the WAGE components asked for, a 28.5% top 5 error rate seems even lower than XNOR. ", "I suspect it is due to the fact that gradients and errors need higher accuracy for real-valued input, ", "but if that is the case, accuracies on SVHN and CIFAR-10 should also reflect that. ", "Or, maybe it is due to hyperparameter setting or insufficient training time?", "Also, dropout seems not conflicting with the discretization. ", "If there are no other reasons, it would make sense to preserve the dropout in the network as well.", "In general, the paper was writ ten in good quality and in detail, ", "I would recommend a clear accept." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation" ]
HkKWeUCef
[ "Adversarial example is studied on one synthetic data.", "A neural networks classifier is trained on this synthetic data. ", "Average distances and norms of errorneous perturbations are computed. ", "It is observed that small perturbation (chosen in a right direction) is sufficient to cause misclassification. ", "CONS: The writing is bad and hard to follow, with typos: ", "for example what is a period just before section 3.1 for? ", "Another example is \"Red lines indicate the range of needed for perfect classification\", which does not make sense. ", "Yet another example is the period at the end of Proposition 4.1. ", "Another example is \"One counter-intuitive property of adversarial examples is it that nearly \". ", "It looks as if the paper was written in a hurry, and it shows in the writing. ", "At the beginning of Section 3, Figure 1 is discussed. ", "It points out that there exists adversarial directions that are very bad. ", "But I don't see how it is relevant to adversarial examples. ", "If one was interested in studying adversarial examples, then one would have done the following. ", "Under the setting of Figure 1, pick a test data randomly from the distribution (and one of the classes), and find an adversarial direction", "I do not see how Section 3.1 fits in with other parts of the paper. ", "Is it related to any experiment? ", "Why it defining a manifold attack?", "Putting a \"conjecture\" on a paper has to be accompanied by the depth of the insight that brought the conjecture. ", "Having an unjustified conjecture 5.1 would poison the field of adversarial examples, ", "and it must be removed.", "This paper is a list of experiments and observations, that are not coherent and does not give much insight into the topics of \"adversarial examples\". ", "The only main messages are that on ONE synthetic dataset, random perturbation does not cause misclassification and targeted classification can cause misclassification. ", "And, expected loss is good while worst-case loss is bad. ", "This, in my opinion, is not enough to be published at a conference." ]
[ "fact", "fact", "fact", "fact", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation" ]
BkC87_Cgz
[ "The paper proposes to augment (traditional) text-based sentence generation/dialogue approaches by incorporating visual information. ", "The idea is that associating visual information with input text, and using that associated visual information as additional input will produce better output text than using only the original input text.", "The basic idea is to collect a bunch of data consisting of both text and associated images or video. ", "Here, this was done using Japanese news programs. ", "The text+image/video is used to train a model that requires both as input and that encodes both as context vectors, which are then combined and decoded into output text. ", "Next, the image inputs are eliminated, with the encoded image context vector being instead associatively predicted directly from the encoded text context vector (why not also use the input text to help predict the visual context?), which is still obtained from the text input, as before. ", "The result is a model that can make use of the text-visual associations without needing visual stimuli. ", "This is a nice idea.", "Actually, based on the brief discussion in Section 2.2.2, it occurs to me that the model might not really be learning visual context vectors associatively, or, that this doesn't really have meaning in some sense. ", "Does it make sense to say that what it is really doing is just learning to associate other concepts/words with the input text, and that it is using the augmenting visual information in the training data to provide those associations? ", "Is this worth talking about?", "Unfortunately, while the idea has merit, and I'd like to see it pursued, ", "the paper suffers from a fatal lack of validation/evaluation, ", "which is very curious, given the amount of data that was collected, the fact that the authors have both a training and a test set, and that there are several natural ways such an evaluation might be performed. ", "The two examples of Fig 3 and the additional four examples in the appendix are nice for demonstrating some specific successes or weaknesses of the model, ", "but they are in no way sufficient for evaluation of the system, to demonstrate its accuracy or value in general.", "Perhaps the most obvious thing that should be done is to report the model's accuracy for reproducing the news dialogue, that is, how accurately is the next sentence predicted by the baseline and ACM models over the training instances and over the test data? ", "How does this compare with other state-of-the-art models for dialogue generation trained on this data (perhaps trained only on the textual part of the data in some cases)?", "Second, some measure of accuracy for recall of the associative image context vector should be reported; for example, on average, how close (cosine similarity or some other appropriate measure) is the associatively recalled image context vector to the target image context vector? ", "On average? ", "Best case? ", "Worst case? ", "How often is this associative vector closer to a confounding image vector than an appropriate one?", "A third natural kind of validation would be some form of study employing human subjects to test it's quality as a generator of dialogue.", "One thing to note, the example of learning to associate the snowy image with the text about university entrance exams demonstrates that the model is memorizing rather than generalizing. 
", "In general, this is a false association ", "(that is, in general, there is no reason that snow should be associated with exams on the 14th and 15th—the month is not mentioned, which might justify such an association.)", "Another thought: did you try not retraining the decoder and attention mechanisms for step 3? ", "In theory, if step 2 is successful, the retraining should not be necessary. ", "To the extent that it is necessary, step 2 has failed to accurately predict visual context from text. ", "This seems like an interesting avenue to explore (and is obviously related to the second type of validation suggested above). ", "Also, in addition to the baseline model, it seems like it would be good to compare a model that uses actual visual input and the model of step 1 against the model of step 3 (possibly bot retrained and not retrained) to see the effect on the outputs generated—how well do each of these do at predicting the next sentence on both training and test sets?", "Other concerns:1. The paper is too long by almost a page in main content.", "2. The paper exhibits significant English grammar and usage issues ", "and should be carefully proofed by a native speaker.", "3. There are lots of undefined variables in the Eqs. (s, W_s, W_c, b_s, e_t,i, etc.) ", "Given the context and associated discussion, it is almost possible to sort out what all of them mean, ", "but brief careful definitions should be given for clarity. ", "4. Using news broadcasts as a substitute for true dialogue data seems kind of problematic, ", "though I see why it was done." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "request", "fact", "fact", "fact", "non-arg", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "evaluation" ]
Byqj1QtlM
[ "A DeepRL algorithm is presented that represents distributions over Q values, as applied to DDPG, and in conjunction with distributed evaluation across multiple actors, prioritized experience replay, and N-step look-aheads.", "The algorithm is called Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG.", "SOTA results are generated for a number of challenging continuous domain learning problems, as compared to benchmarks that include DDPG and PPO, in terms of wall-clock time, and also (most often) in terms of sample efficiency.", "pros/cons + the paper provides a thorough investigation of the distributional approach, as applied to difficult continuous action problems, and in conjunction with a set of other improvements (with ablation tests)", "- the story is a bit mixed in terms of the benefits, as compared to the non-distributional approach, D3PG", "- it is not clear which of the baselines are covered in detail in the cited paper:", "\"Anonymous. Distributed prioritized experience replay. In submission, 2017.\",", "i.e., should readers assume that D3PG already exists and is attributable to this other submission?", "Overall, I believe that the community will find this to be interesting work.", "Is a video of the results available?", "It seems that the distributional model often does not make much of a difference, as compared to D3PG non-prioritized.", "However, sometimes it does make a big difference, i.e., 3D parkour; acrobot.", "Do the examples where it yields the largest payoff share a particular characteristic?", "The benefit of the distributional models is quite different between the 1-step and 5-step versions.", "Any ideas why?", "Occasionally, D4PG with N=1 fails very badly, e.g., fish, manipulator (bring ball), swimmer.", "Why would that be?", "Shouldn't it do at least as well as D3PG in general?", "How many atoms are used for the categorical representation?", "As many as [Bellemare et al.], i.e., 51 ?", "How much \"resolution\" is necessary here in order to gain most of the benefits of the distributional representation?", "As far as I understand, V_min and V_max are not the global values, but are specific to the current distribution.", "Hence the need for the projection.", "Is that correct?", "Would increasing the exploration noise result in a larger benefit for the distributional approach?", "Figure 2: DDPG performs suprisingly poorly in most examples.", "Any comments on this, or is DDPG best avoided in normal circumstances for continuous problems? :-)", "Is the humanoid stand so easy because of large (or unlimited) torque limits?", "The wall-clock times are for a cluster with K=32 cores for Figure 1?", "\"we utilize a network architecture as specified in Figure 1 which processes the terrain info in order to reduce its dimensionality\"", "Figure 1 provides no information about the reduced dimensionality of the terrain representation,", "unless I am somehow failing to see this.", "\"the full critic architecture is completed by attaching a critic head as defined in Section A\"", "I could find no further documenation in the paper with regard to the \"head\" or a separate critic for the \"head\".", "It is not clear to me why multiple critics are needed.", "Do you have an intuition as to why prioritized replay might be reducing performance in many cases?" ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "reference", "request", "evaluation", "non-arg", "evaluation", "fact", "non-arg", "fact", "non-arg", "fact", "non-arg", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "evaluation", "non-arg", "non-arg", "quote", "fact", "evaluation", "quote", "evaluation", "evaluation", "non-arg" ]
BJAg3e7ZM
[ "1) Summary This paper proposes a flow-based neural network architecture and adversarial training for multi-step video prediction.", "The neural network in charge of predicting the next frame in a video implicitly generates flow that is used to transform the previously observed frame into the next.", "Additionally, this paper proposes a new quantitative evaluation criteria based on the observed flow in the prediction in comparison to the groundtruth.", "Experiments are performed on a new robot arm dataset proposed in the paper where they outperform the used baselines.", "2) Pros:+ New quantitative evaluation criteria based on motion accuracy.", "+ New dataset for robot arm pushing objects.", "3) Cons:Overall architectural prediction network differences with baseline are unclear:", "The differences between the proposed prediction network and [1] seem very minimal.", "In Figure 3, it is mentioned that the network uses a U-Net with recurrent connections.", "This seems like a very minimal change in the overall architecture proposed.", "Additionally, there is a paragraph of “architecture improvements” which also are minimal changes.", "Based on the title of section 3, it seems that there is a novelty on the “prediction with flow” part of this method.", "If this is a fact, there is no equation describing how this flow is computed.", "However, if this “flow” is computed the same way [1] does it, then the title is misleading.", "Adversarial training objective alone is not new as claimed by the authors:", "The adversarial objective used in this paper is not new.", "Works such as [2,3] have used this objective function for single step and multi-step frame prediction training, respectively.", "If the authors refer to the objective being new in the sense of using it with an action conditioned video prediction network, then this is again an extremely minimal contribution.", "Essentially, the authors just took the previously used objective function and used it with a different network.", "If the authors feel otherwise, please comment on why this is the case.", "Incomplete experiments:The authors only show experiments on videos containing objects that have already been seen,", "but no experiments with objects never seen before.", "The missing experiment concerns me in the sense that the network could just be memorizing previously seen objects.", "Additionally, the authors present evaluation based on PSNR and SSIM on the overall predicted video, but not in a per-step paradigm.", "However, the authors show this per-step evaluation in the Amazon Mechanical Turk, and predicted object position evaluations.", "Unclear evaluation:The way the Amazon Mechanical Turk experiments are performed are unclear and/or not suited for the task at hand.", "Based on the explanation of how these experiments are performed, the authors show individual images to mechanical turkers.", "If we are evaluating the video prediction task for having real or fake looking videos, the turkers need to observe the full video and judge based on that.", "If we are just showing images, then they are evaluating image synthesis, which do not necessarily contain the desired properties in videos such as temporal coherence.", "Additional comments:The paper needs a considerable amount of polishing.", "4) Conclusion:This paper seems to contain very minimal changes in comparison to the baseline by [1].", "The adversarial objective is not novel as mentioned by the authors and has been used in [2,3].", "Evaluation is unclear and incomplete.", 
"References:[1] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.", "[2] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.", "[3] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee. Decomposing Motion and Content for Natural Video Sequence Prediction. In ICLR, 2017" ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "request", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "evaluation", "reference", "reference", "reference" ]
B104VQCgM
[ "The paper basically propose keep using the typical data-augmentation transformations done during training also in evaluation time, to prevent adversarial attacks.", "In the paper they analyze only 2 random resizing and random padding,", "but I suppose others like random contrast, random relighting, random colorization, ... could be applicable.", "\\n\\nSome of the pros of the proposed tricks is that it doesn't require re-training existing models,", "although as the authors pointed out re-training for adversarial images is necessary to obtain good results.", "\\n\\n\\nTypically images have different sizes", ", however in the Dataset are described as having 299x299x3 size,", "are all the test images resized before hand?", "How would this method work with variable size images?", "\\n\\nThe proposed defense requires increasing the size of the input images,", "have you analyzed the impact in performance?", "Also it would be good to know how robust is the method for smaller sizes.", "\\n\\nSection 4.6.2 seems to indicate that 1 pixel padding or just resizing 1 pixel is enough to get most of the benefit,", "please provide an analysis of how results improve as the padding or size increase.", "\\n\\nIn section 5 for the challenge authors used a lot more evaluations per image,", "could you provide how much extra computation is needed for that model?" ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "non-arg", "fact", "non-arg", "request", "evaluation", "request", "evaluation", "request" ]
B1y7_3YgM
[ "This submission proposes a new seq2sel solution by adopting two new techniques, a sequence-to-set model and column attention mechanism. ", "They show performance improve over existing studies on WikiSQL dataset.", "While the paper is written clearly, ", "the contributions of the work heavily depends on the WikiSQL dataset. ", "It is not sure if the approach is generally applicable to other sequence-to-sql workloads. ", "Detailed comments are listed below: 1. WikiSQL dataset contains only a small class of SQL queries, with aggregation over single table and various filtering conditions. ", "It does not involve any complex operator in relational database system, e.g., join and groupby. ", "Due to its simple structure, the problem of sequence-to-sql translation over WikiSQL is actually simplified as a parameter selection problem for a fixed template. ", "This greatly limits the generalization of approaches only applicable to WikiSQL. ", "The authors are encouraged to explore other datasets available in the literature.", "2. The \"order-matters\" motivation is not very convincing. ", "It is straightforward to employ a global ordering approach to rank the columns and filtering conditions based on certain rules, e.g., alphabetical order. ", "That could ensure the orders in the SQL results are always consistent.", "3. The experiments do not fully verify how the approaches bring performance improvements. ", "In the current version, the authors only report superficial accuracy results on final outcomes, without any deep investigation into why and how their approach works. ", "For instance, they could verify how much accuracy improvement is due to the insensitivity to order in filtering expressions.", "4. They do not compare against state-of-the-art solution on column and expression selection. ", "While their attention mechanism over the columns could bring performance improvement, ", "they should have included experiments over existing solutions designed for similar purpose. ", "In (Yin, et al., IJCAI 2016), for example, representations over the columns are learned to generate better column selection.", "As a conclusion, I find the submission contains certain interesting ideas but lacks serious research investigations. ", "The quality of the paper could be much enhanced, if the authors deepen their studies on this direction." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "fact", "request", "fact", "evaluation", "evaluation" ]
HkshYX9xz
[ "In this work, discrete-weight NNs are trained using the variational Bayesian framework, achieving similar results to other state-of-the-art models.", "Weights use 3 bits on the first layer and are ternary on the remaining layers.", "- Pros: The paper is well-written and connections with the literature properly established.", "The approach to training discrete-weights NNs, which is variational inference, is more principled than previous works (but see below).", "- Cons: The authors depart from the original motivation when the central limit theorem is invoked.", "Once we approximate the activations with Gaussians, do we have any guarantee that the new approximate lower bound is actually a lower bound?", "This is not discussed.", "If it is not a lower bound, what is the rationale behind maximizing it?", "This seems to place this work very close to previous works, and not in the \"more principled\" regime the authors claim to seek.", "The likelihood weighting seems hacky.", "The authors claim \"there are usually many more NN weights than there are data samples\".", "If that is the case, then it seems that the prior dominating is indeed the desired outcome.", "A different, more flat prior (or parameter sharing), can be used,", "but the described reweighting seems to be actually breaking a good property of Bayesian inference,", "which is defecting to the prior when evidence is lacking.", "In terms of performance (Table 1), the proposed method seems to be on par with existing ones.", "It is unclear then what the advantage of this proposal is.", "Sparsity figures are provided for the current approach,", "but those are not contrasted with existing approaches.", "Speedup is claimed with respect to an NN with real weights, but not with respect existing NNs with binary weights,", "which is the appropriate baseline.", "- Minor comments: Page 3: Subscript t and variable t is used for the targets,", "but I can't find where it is defined.", "Only the names of the datasets used in the experiments are given,", "but they are not described, or even better, shown in pictures (maybe in a supplementary).", "The title of the paper says \"discrete-valued NNs\".", "The weights are discrete, but the activations and outputs are continuous,", "so I find it confusing.", "As a contrast, I would be less surprised to hear a sigmoid belief network called a \"discrete-valued NN\", even though its weights are continuous." ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "non-arg", "fact", "fact", "fact", "fact", "evaluation", "evaluation" ]
BJFxOpcez
[ "This paper explores learning dynamic filters for CNNs. ", "The filters are generated by using the features of an autoencoder on the input image, and linearly combining a set of base filters for each layer.", " This addresses an interesting problem which has been looked at a lot before, but with some small new parts.", " There is a lot of prior work in this area ", "that should be cited in the area of dynamic filters and steerable filters. ", "There are also parallels to ladder networks that should be highlighted. ", "The results indicate improvement over baselines, ", "however baselines are not strong baselines. ", "A key question is what happens when this method is combined with VGG11 which the authors train as a baseline? ", "What is the effect of the reconstruction loss? ", "Can it be removed? ", "There should be some ablation study here.", "Figure 5 is unclear what is being displayed, ", "there are no labels.", "Overall I would advise the authors to address these questions and suggest this as a paper suitable for a workshop submission." ]
[ "fact", "fact", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "non-arg", "non-arg", "request", "evaluation", "fact", "request" ]
ry-q9ZOlf
[ "This paper proposes a simple modification to the standard alternating stochastic gradient method for GAN training, which stabilizes training, by adding a prediction step.", "This is a clever and useful idea, ", "and the paper is very well written. ", "The proposed method is very clearly motivated, both intuitively and mathematically, ", "and the authors also provide theoretical guarantees on its convergence behavior. ", "I particularly liked the analogy with the damped harmonic oscillator.", "The experiments are well designed and provide clear evidence in favor of the usefulness of the proposed technique. ", "I believe that the method proposed in this paper will have a significant impact in the area of GAN training.", "I have only one minor question: in the prediction step, why not use a step size, say $\\bar{u}_k+1 = u_{k+1} + \\gamma_k (u_{k+1} − u_k)$, such that the \"amount of predition\" may be adjusted?" ]
[ "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "non-arg" ]
S1QsSa1-M
[ "Authors provide an interesting loss function approach for clustering using a deep neural network. ", "They optimize Kuiper-based nonparametric loss and apply the approach on a large social network data-set. ", "However, the details of the deep learning approach are not well described. ", "Some specific comments are given below.", "1.Further details on use of 10-fold cross validation need to be discussed including over-fitting aspect.", "2. Details on deep learning, number of hidden layers, number of hidden units, activation functions, weight adjustment details on each learning methods should be included.", "3. Conclusion section is very brief ", "and can be expanded by including a discussion on results comparison and over fitting aspects in cross validation. ", "Use of Kuiper-based nonparametric loss should also be justified as there are other loss functions can be used under these settings." ]
[ "fact", "fact", "evaluation", "non-arg", "request", "request", "evaluation", "request", "request" ]
BkuT3b9ef
[ "This paper investigates meta-learning strategy for automated architecture search in the context of RNN. ", "To constraint the architecture search space, authors propose a DSL that specifies the RNN recurrent operations. ", "This DSL allows to explore RNN architectures using either random search or a reinforcement-learning strategy. ", "Candidate architectures are ranked using a TreeLSTM that tries to predict the architecture performances. ", "The top-k architectures are then evaluated by fully training them on a given task.", "Authors evaluate their approach on PTB/Wikitext 2 language modeling and Multi30k/IWSLT'16 machine translation. ", "In both experiments, authors show that their approach obtains competitive results and can sometime outperforms RNN cells such as GRU/LSTM. ", "In the PTB experiment, their architecture however underperforms other LSTM variant in the literatures.", "- Quality/Clarity The paper is overall well written and pleasant to read.", "Few details can be clarified. ", "In particular how did you initialize the weight and bias for both the LSTM/GRU baselines and the found architectures? ", "Is there other works leveraging RNN that report results on the Multi30k/IWSLT datasets?", "You state in paragraph 3.2 that human experts can inject the previous best known architecture when training the ranking networks. ", "Did you use this in the experiments? ", "If yes, what was the impact of this online learning strategy on the final results? ", "- Originality The idea of using DSL + ranking for architecture search seems novel.", "- Significance Automated architecture search is a promising way to design new networks. ", "However, it is not clear why the proposed approach is not able to outperforms other LSTM-based architectures on the PTB task. ", "Could the problem arise from the DSL that constraint too much the search space ? ", "It would be nice to have other tasks that are commonly used as benchmark for RNN to see where this approach stand.", "In addition, authors propose both a DSL, a random and RL generator and a ranking function. ", "It would be nice to disentangle the contributions of the different components. ", "In particular, did the authors compare the random search vs the RL based generator or the performances of the RL-based generator when the ranking network is not used?", "Although authors do show that they outperform NAScell in one setting, ", "it would be nice to have an extended evaluation (using character level PTB for instance)." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "fact", "request" ]
HJI6Rf1eG
[ "This writeup describes an application of recurrent autoencoder to analysis of multidimensional time series. ", "The quality of writing, experimentation and scholarship is clearly below than what is expected from a scientific article. ", "The method is explained in a very unclear way, ", "there is no mention of any related work. ", "I would encourage the authors to take a look at other ICLR submissions and see how rigorously written they are, how they position the reported research among comparable works." ]
[ "fact", "evaluation", "evaluation", "fact", "request" ]
H1caL6tXz
[ "The paper proposes to use a regularizer for tensor completion problems that can be written in a similar fashion as the variational factorization formulation of the trace norm aka nuclear norm for matrices.", "The paper introduces the regularizer", "with the nice argument that the gradient of the L3 norm to the power of 3rd will be easy to compute,", "but if we were raising the L2 norm to the 3rd power it would not be the case.", "They mention that their argument can generalize from D=3 to higher order tensors.", "Authors mention the paper by Friedland and Lim that introduces this norm and provides first theoretical results on it.", "Authors develop on the tensor equivalent of the matrix max norm", "which is built with the motivation of bringing robustness to heavy nodes in the graph (very popular content).", "This is again straightforward on the technical side.", "Empirical results are fine but do not show huge improvements compared to baselines", "so I do not think this is a strong argument for accepting the paper.", "On the scalability, authors do not show that their approach is better suited than baselines." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact" ]
BJu6VdYlf
[ "The paper suggests an importance sampling based Coreset construction for Support Vector Machines (SVM). ", "To understand the results, we need to understand Coreset and importance sampling: ", "Coreset: In the context of SVMs, a Coreset is a (weighted) subset of given dataset such that for any linear separator, the cost of the separator with respect to the given dataset X is approximately (there is an error parameter \\eps) the same as the cost with respect to the weighted subset. ", "The main idea is that if one can find a small coreset, then finding the optimal separator (maximum margin etc.) over the coreset might be sufficient. ", "Since the computation is done over a small subset of points, one hopes to gain in terms of the running time.", "Importance sampling: This is based on the theory developed in Feldman and Langberg, 2011 (and some of the previous works such as Langberg and Schulman 2010, the reference of which is missing). ", "The idea is to define a quantity called sensitivity of a data-point that captures how important this datapoint is with respect to contributing to the cost function. ", "Then a subset of datapoint are sampled based on the sensitivity and the sampled data point is given weight proportional to inverse of the sampling probability. ", "As per the theory developed in these past works, sampling a subset of size proportional to the sum of sensitivities gives a coreset for the given problem.", "So, the main contribution of the paper is to do all the sensitivity calculations with respect to SVM problem and then use the importance sampling theory to obtain bounds on the coreset size. ", "One interesting point of this construction is that Coreset construction involves solving the SVM problem on the given dataset which may seem like beating the purpose. ", "However, the authors note that one only needs to compute the Coreset of small batches of the given dataset and then use standard procedures (available in streaming literature) to combine the Coresets into a single Coreset. ", "This should give significant running time benefits. ", "The paper also compares the results against the simple procedure where a small uniform sample from the dataset is used for computation. ", "Evaluation: Significance: Coresets give significant running time benefits when working with very big datasets. ", "Coreset construction in the context of SVMs is a relevant problem and should be considered significant.", "Clarity: The paper is reasonably well-written. ", "The problem has been well motivated and all the relevant issues point out for the reader. ", "The theoretical results are clearly stated as lemmas a theorems that one can follow without looking at proofs. ", "Originality: The paper uses previously developed theory of importance sampling. ", "However, the sensitivity calculations in the SVM context is new as per my knowledge. ", "It is nice to know the bounds given in the paper and to understand the theoretical conditions under which we can obtain running time benefits using corsets. ", "Quality: The paper gives nice theoretical bounds in the context of SVMs. ", "One aspect in which the paper is lacking is the empirical analysis. ", "The paper compares the Coreset construction with simple uniform sampling. ", "Since Coreset construction is being sold as a fast alternative to previous methods for training SVMs, ", "it would have been nice to see the running time and cost comparison with other training methods that have been discussed in section 2." ]
[ "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request" ]
By2zFR_gz
[ "Quality: Although the research problem is an interesting direction ", "the quality of the work is not of a high standard. ", "My main conservation is that the idea of perturbation in semantic latent space has not been described in an explicit way. ", "How different it will be compared to a perturbation in an input space? ", "Clarity: The use of the term \"adversarial\" is not quite clear in the context ", "as in many of those example classification problems the perturbation completely changes the class label (e.g. from \"church\" to \"tower\" or vice-versa)", "Originality: The generation of adversarial examples in black-box classifiers has been looked in GAN literature as well and gradient based perturbations are studied too. ", "What is the main benefit of the proposed mechanism compared to the existing ones?", "Significance: The research problem is indeed a significant one ", "as it is very important to understand the robustness of the modern machine learning methods by exposing them to adversarial scenarios where they might fail.", "pros: (a) An interesting problem to evaluate the robustness of black-box classifier systems", "(b) generating adversarial examples for image classification as well as text analysis.", "(c) exploiting the recent developments in GAN literature to build the framework forge generating adversarial examples.", "cons:(a) The proposed search algorithm in the semantic latent space could be computationally intensive. ", "any remedy for this problem?", "(b) Searching in the latent space z could be strongly dependent on the matching inverter $I_\\gamma(.)$. ", "any comment on this?", "(c) The application of the search algorithm in case of imbalanced classes could be something that require further investigation." ]
[ "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "request" ]
BkuDb6tgf
[ "This work attempts to improve the global consistency of samples generated by generative adversarial networks by replacing the discriminator with an autoregressive model in an encoded feature space. ", "The log likelihood of the classification model is then replaced with the log likelihood of the feature space autoregressive model. ", "It's not clear what can be said with respect to the convergence properties of this class of models, ", "and this is not discussed.", "The method is quite similar in spirit to Denoising Feature Matching of Warde-Farley & Bengio (2017), ", "as both estimate a density model in feature space -- this method via a constrained autoregressive model and DFM via an estimator of the score function, ", "although DFM was used in conjunction with the standard criterion whereas this method replaces it. ", "This is certainly worth mentioning and discussing. ", "In particular the section in Warde-Farley & Bengio regarding the feature space transformation of the data density seems quite relevant in this work.", "Unfortunately the only quantitative measurements reporter are Inception scores, ", "which is known to be a poor measure ", "(and the scores presented are not particularly high, either); ", "Frechet Inception distance or log likelihood estimates via AIS on some dataset would be more convincing. ", "On the plus side, the authors report an average over Inception scores for multiple runs. ", "On the other hand, it sounds as though the stopping criterion was still qualitative." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "evaluation" ]
B1B3e0Oef
[ "This work introduces a particular parametrization of a stochastic policy (a uniform mixture of deterministic policies).", "They find this parametrization, when trained with stochastic value gradient outperforms DDPG on several OpenAI gym benchmarks.", "This paper unfortunately misses many significant pieces of prior work training stochastic policies.", "The most relevant is [1] which should definitely be cited.", "The algorithm here can be seen as SVG(0) with a particular parametrization of the policy.", "However, numerous other works have examined stochastic policies including [2] (A3C which also used the Torcs environment) and [3].", "The wide use of stochastic policies in prior work makes the introductory explanation of the potential benefits for stochastic policies distracting,", "instead the focus should be on the particular choice and benefits of the particular stochastic parametrization chosen here and the choice of stochastic value gradient as a training method (as opposed to many on-policy methods).", "The empirical comparison is also hampered by only comparing with DDPG,", "there are numerous stochastic policy algorithms that have been compared on these environments.", "Additionally, the DDPG performance here is lower for several environments than the results reported in Henderson et al. 2017 (cited in the paper, table 2 here, table 3 Henderson)", "which should be explained.", "While this particular parametrization may provide some benefits, the lack of engagement with relevant prior work and other stochastic baselines significant limits the impact of this work and makes assessing its significance difficult.", "This work would benefit from careful copyediting.", "[1] Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., & Tassa, Y. (2015). Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (pp. 2944-2952).", "[2] Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., ... & Kavukcuoglu, K. (2016, June). Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (pp. 1928-1937).", "[3] Schulman, J., Moritz, P., Levine, S., Jordan, M., & Abbeel, P. (2015). High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438." ]
[ "fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "request", "evaluation", "fact", "fact", "request", "evaluation", "request", "reference", "reference", "reference" ]
SJJrhg5lf
[ "## Review Summary Overall, the paper's paper core claim, that increasing batch sizes at a linear rate during training is as effective as decaying learning rates, isinteresting ", "but doesn't seem to be too surprising given other recent work in this space. ", "The most useful part of the paper is the empirical evidence to backup this claim, ", "which I can't easily find in previous literature. ", "I wish the paper had explored a wider variety of dataset tasks and models to better show how well this claim generalizes, better situated the practical benefits of the approach ", "(how much wallclock time is actually saved? ", "how well can it be integrated into a distributed workflow?), ", "and included some comparisons with other recent recommended ways to increase batch size over time.", "## Pros / Strengths + effort to assess momentum / Adam / other modern methods", "+ effort to compare to previous experimental setups", "## Cons / Limitations - lack of wallclock measurements in experiments", "- only ~2 models / datasets examined, ", "so difficult to assess generalization", "- lack of discussion about distributed/asynchronous SGD", "## Significance Many recent previous efforts have looked at the importance of batch sizes during training, ", "so topic is relevant to the community. ", "Smith and Le (2017) present a differential equation model for the scale of gradients in SGD,finding a linear scaling rule proportional to eps N/B, where eps = learning rate, N = training set size, and B = batch size. ", "Goyal et al (2017) show how to train deep models on ImageNet effectively with large (but fixed) batch sizes by using a linear scaling rule.", "A few recent works have directly tested increasing batch sizes during training. ", "De et al (AISTATS 2017) have a method for gradually increasing batch sizes, as do Friedlander and Schmidt (2012). ", "Thus, it is already reasonable to practitioners that the proposed linear scaling of batch sizes during training would be effective.", "While increasing batch size at the proposed linear scale is simple and seems to be effective, ", "a careful reader will be curious how much more could be gained from the backtracking line search method proposed in De et al.", "## Quality Overall, only single training runs from a random initialization are used. ", "It would be better to take the best of many runs or to somehow show error bars,", "to avoid the reader wondering whether gains are due to changes in algorithm or to poor exploration due to bad initialization. ", "This happens a lot in Sec. 5.2.", "Some of the experimental setting seem a bit haphazard and not very systematic.", "In Sec. 5.2, only two learning rate scales are tested (0.1 and 0.5). ", "Why not examine a more thorough range of values?", "Why not report actual wallclock times? ", "Of course having reduced number of parameter updates is useful, ", "but it's difficult to tell how big of a win this could be.", "What about distributed SGD or asyncronous SGD (hogwild)? ", "Small batch sizes sometimes make it easier for many machines to be working simultaneously. ", "If we scale up to batch sizes of ~ N/10, we can only get 10x speedups in parallelization (in terms of number of parameter updates). 
", "I think there is some subtle but important discussion needed on how this framework fits into modern distributed systems for SGD.", "## Clarity Overall the paper reads reasonably well.", "Offering a related work \"feature matrix\" that helps readers keep track of how previous efforts scale learning rates or minibatch sizes for specific experiments could be valueable. ", "Right now, lots of this information is just provided in text, ", "so it's not easy to make head-to-head comparisons.", "Several figure captions should be updated to clarify which model and dataset are studied. ", "For example, when skimming Fig. 3's caption there is no such information.", "## Paper Summary The paper examines the influence of batch size on the behavior of stochastic gradient descent to minimize cost functions. ", "The central thesis is that instead of the \"conventional wisdom\" to fix the batch size during training and decay the learning rate, it is equally effective (in terms of training/test error reached) to gradually increase batch size during training while fixing the learning rate. ", "These two strategies are thus \"equivalent\". ", "Furthermore, using larger batches means fewer parameter updates per epoch, ", "so training is potentially much faster.", "Section 2 motivates the suggested linear scaling using previous SGD analysis from Smith and Le (2017). ", "Section 3 makes connections to previous work on finding optimal batch sizes to close the generaization gap. ", "Section 4 extends analysis to include SGD methods with momentum.", "In Section 5.1, experiments training a 16-4 ResNet on CIFAR-10 compare three possible SGD schedules: ", "* increasing batch size * decaying learning rate * hybrid (increasing batch size and decaying learning rate) ", "Fig. 2, 3 and 4 show that across a range of SGD variants (+/- momentum, etc) these three schedules have similar error vs. epoch curves. ", "This is the core claimed contribution: empirical evidence that these strategies are \"equivalent\".", "In Section 5.3, experiments look at Inception-ResNet-V2 on ImageNet, ", "showing the proposed approach can reach comparable accuracies to previous work at even fewer parameter updates (2500 here, vs. ∼14000 for Goyal et al 2007)" ]
[ "evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact" ]
rkZd9y9xz
[ "The paper proposes a novel way of compressing gradient updates for distributed SGD, in order to speed up overall execution.", "While the technique is novel as far as I know (eq. (1) in particular),", "many details in the paper are poorly explained (I am unable to understand)", "and experimental results do not demonstrate that the problem targeted is actually alleviated.", "More detailed remarks: 1: Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins...", "4.1: Lemma 4.1 seems like you want B > 1, or clarify definition of V_B.", "4.2: This section is not fully comprehensible to me.", "- It seems you are confusingly overloading the term gradient and words derived (also in other parts or the paper).", "What is \"maximum value of gradients in a matrix\"?", "Make sure to use something else, when talking about individual elements of a vector (which is constructed as an average of gradients), etc.", "- Rounding: do you use deterministic or random rounding?", "Do you then again store the inaccuracy?", "- I don't understand definition of d.", "It seems you subtract logarithm of a gradient from a scalar.", "- In total, I really don't know what is the object that actually gets communicated,", "and consequently when you remark that this can be combined with QSGD and the more below it, I don't understand it.", "This section has to be thoroughly explained, perhaps with some illustrative examples.", "4.3: allgatherv remark: does that mean that this approach would not scale well to higher number of workers?", "4.4: Remarks about quantization and mantissa manipulation are not clear to me again, or what is the point in doing so.", "Possible because the problems above.", "5: I think this section is not too useful unless you can accompany it with actual efficient implementation and contrast the practical performance.", "6: Given that I don't understand how you compress the information being communicated, it is hard to believe the utility of the method.", "The objective was to speed up training time because communication is bottleneck.", "If you provide 12,000x compression, is it any more practically useful than providing 120x compression?", "What would be the difference in runtime?", "Such questions are never discussed.", "Further, if in the implementation you discuss masking mantissa,", "I have serious concern about whether the compression protocol is feasible to implement efficiently, without writing some extremely low-level code.", "I think the soundness of work addressing this particular problem is damaged if not implemented properly (compared to other kinds of works in current ML related research).", "Therefore I highly recommend including proper time comparison with a baseline in the future.", "Further, I don't understand 2 things about the Tables.", "a) how do you combine the proposed method with Momentum in SGD?", "This is not discussed as far as I can see.", "b) What is \"QSGD, 2bit\"", "If I remember QSGD protocol correctly, there's no natural mapping of 2bit to its parameters." ]
[ "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "fact", "request", "evaluation" ]
ryD53e9xG
[ "This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques.", "Overall, the evaluation concludes with recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized against the initial net with an L2 penalty during further training.", "Relationship to prior work: Regularizing a target model against a source model is not a new idea.", "The authors miss key connections to A-SVM [1] and PMT-SVM [2] -- two proposed transfer learning models applied to SVM weights, but otherwise very much the same as the proposed solution in this paper.", "Though the study here may offer new insights for deep nets,", "it is critical to mention prior work which also does analysis of these regularization techniques.", "Significance: As the majority of visual recognition problems are currently solved using variants of fine-tuning,", "if the findings reported in this paper generalize, then it could present a simple new regularization which improves the training of new models.", "The change is both conceptually simple and easy to implement so could be quickly integrated by many people.", "Clarity and Questions: The purpose of the paper is clear,", "however, some questions remain unanswered.", "1) How is the regularization weight of 0.01 chosen?", "This is likely a critical parameter.", "In an experimental paper, I would expect to see a plot of performance for at least one experiment as this regularization weighting parameter is varied.", "2) How does the use of L2 regularization on the last layer effect the regularization choice of other layers?", "What happens if you use no regularization on the last layer?", "L1 regularization?", "3) Figure 1 is difficult to read.", "Please at least label the test sets on each sub-graph.", "4) There seems to be some issue with the freezing experiment in Figure 2.", "Why does performance of L2 regularization improve as you freeze more and more layers, but is outperformed by un-freezing all.", "5) Figure 3 and the discussion of linear dependence with the original model in general seems does not add much to the paper.", "It is clear that regularizing against the source model weights instead of 0 should result in final weights that are more similar to the initial source weights.", "I would rather the authors use this space to provide a deeper analysis of why this property should help performance.", "6) Initializing with a source model offers a strong starting point so full from scratch learning isn’t necessary -- meaning fewer examples are needed for the continued learning (fine-tuning) phase.", "In a similar line of reasoning, does regularizing against the source further reduce the number of labeled points needed for fine-tuning?", "Can you recover L2 fine-tuning performance with fewer examples when you use L2-SP?", "[1] J. Yang, R. Yan, and A. Hauptmann. Adapting svm classifiers to data with shifted distributions. In ICDM Workshops, 2007.", "[2] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011." ]
[ "fact", "fact", "fact", "fact", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "request", "request", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "request", "request", "reference", "reference" ]
H1JAev9gz
[ "The authors present 3 architectures for learning representations of programs from execution traces. ", "In the variable trace embedding, the input to the model is given by a sequence of variable values. ", "The state trace embedding combines embeddings for variable traces using a second recurrent encoder. ", "The dependency enforcement embedding performs element-wise multiplication of embeddings for parent variables to compute the input of the GRU to compute the new hidden state of a variable. ", "The authors evaluate their architectures on the task of predicting error patterns for programming assignments from Microsoft DEV204.1X (an introduction to C# offered on edx) and problems on the Microsoft CodeHunt platform. ", "They additionally use their embeddings to decrease the search time for the Sarfgen program repair system.", "This is a fairly strong paper. ", "The proposed models make sense ", "and the writing is for the most part clear, ", "though there are a few places where ambiguity arises:", "- The variable \"Evidence\" in equation (4) is never defined. ", "- The authors refer to \"predicting the error patterns\", ", "but again don't define what an error pattern is. ", "The appendix seems to suggest that the authors are simply performing multilabel classification based on a predefined set of classes of errors, ", "is this correct? ", "- It is not immediately clear from Figures 3 and 4 that the architectures employed are in fact recurrent.", "- Figure 5 seems to suggest that dependencies are only enforced at points in a program where assignment is performed for a variable, ", "is this correct?", "Assuming that the authors can address these clarity issues, I would in principle be happy for the paper to appear." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "non-arg", "evaluation" ]
BkH22ZFxG
[ "This paper proposes two regularization terms to encourage learning disentangled representations. ", "One term is applied to weight parameters of a layer just like weight decay. ", "The other is applied to the activations of the target layer (e.g., the penultimate layer). ", "The core part of both regularization terms is a compound hinge loss of which the input is the KL divergence between two softmax-normalized input arguments. ", "Experiments demonstrate the proposed regularization terms are helpful in learning representations which significantly facilitate clustering performance.", "Pros: (1) This paper is clearly written and easy to follow.", "(2) Authors proposed multiple variants of the regularization term which cover both supervised and unsupervised settings.", "(3) Authors did a variety of classification experiments ranging from time serials, image and text data.", "Cons: (1) The design choice of the compound hinge loss is a bit arbitrary. ", "KL divergence is a natural similarity measure for probability distribution. ", "However, it seems that authors use softmax to force the weights or the activations of neural networks to be probability distributions just for the purpose of using KL divergence. ", "Have you compared with other choices of similarity measure, e.g., cosine similarity? ", "I think the comparison as an additional experiment would help explain the design choice of the proposed function.", "(2) In the binary classification experiments, it is very strange to almost randomly group several different classes of images into the same category. ", "I would suggest authors look into datasets where the class hierarchy is already provided, e.g., ImageNet or a combination of several fine-grained image classification datasets.", "Additionally, I have the following questions: (1) I am curious how the proposed method compares to other competitors in terms of the original classification setting, e.g., 10-class classification accuracy on CIFAR10. ", "(2) What will happen for the multi-layer loss if the network architecture is very large such that you can not use large batch size, e.g., less than 10? ", "(3) In drawing figure 2 and 3, if the nonlinear activation function is not ReLU, how would you exam the same behavior? ", "Have you tried multi-class classification for the case “without proposed loss component” and does the similar pattern still happen or not?", "Some typos: (1) In introduction, “when the cosine between the vectors 1” should be “when the cosine between the vectors is 1”.", "(2) In section 4.3, “we used the DBPedia ontology dataset dataset” should be “we used the DBPedia ontology dataset”. ", "I would like to hear authors’ feedback on the issues I raised." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "non-arg" ]
rksMwz9xG
[ "This paper presents a new reinforcement learning architecture called Reactor by combining various improvements in deep reinforcement learning algorithms and architectures into a single model.", "The main contributions of the paper are to achieve a better bias-variance trade-off in policy gradient updates, multi-step off-policy updates withdistributional RL, and prioritized experience replay for transition sequences.", "The different modules are integrated well and the empirical results are very promising.", "The experiments (though limited to Atari) are well carried out and the evaluation is performed on both sample efficiency and training time.", "Pros: 1. Nice integration of several recent improvements in deep RL, along with a few novel tricks to improve training.", "2. The empirical results on 57 Atari games are impressive, in terms of final scores as well as real-time training speed.", "Cons: 1. Reactor is still less sample-efficient than Rainbow, with significantly lower scores after 200M frames.", "While the reactor trains much faster, it does use more parallel compute,", "so the comparison with Rainbow on wall clock time is not entirely fair.", "Would a distributed version of Rainbow perform better in this respect?", "2. Empirical comparisons are restricted to the Atari domain.", "The conclusions of the paper will be much stronger if results are also shown on other environments like Mujoco/Vizdoom/Deepmind Lab.", "3. Since the paper introduces a few new ideas like prioritized sequence replay,", "it would help if a more detailed analysis was performed on the impact of these individual schemes, even if in a model simpler than the Reactor.", "For instance, one could investigate the impact of prioritized sequence replay in models like multi-step DQN or recurrent DQN.", "This will help us understand the impact of each of these ideas in a more comprehensive fashion." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "request", "request", "evaluation" ]
B1qbgHcxz
[ "Summary:This paper proposes a simple recipe to preserve proximity to zero mean for activations in deep neural networks.", "The proposal is to replace the non-linearity in half of the units in each layer with its \"bipolar\" version --", "one that is obtained by flipping the function on both axes.", "The technique is tested on deep stacks of recurrent layers, and on convolutional networks with depth of 28, showing that improved results over the baseline networks are obtained.", "Clarity: The paper is easy to read.", "The plots in Fig. 2 and the appendix are quite helpful in improving presentation.", "The experimental setups are explained in detail.", "Quality and significance: The main idea from this paper is simple and intuitive.", "However, the experiments to support the idea do not seem to match the motivation of the paper.", "As stated in the beginning of the paper, the motivation behind having close to zero mean activations is that this is expected to speed up training using gradient descent.", "However, the presented results focus on the performance on held-out data instead of improvements in training speed.", "This is especially the case for the RNN experiments.", "For the CIFAR-10 experiment, the training loss curves do show faster initial progress in learning.", "However, it is unclear that overall training time can be reduced with the help of this technique.", "To evaluate this speed up effect, the dependence on the choice of learning rate and other hyperparameters should also be considered.", "Nevertheless, it is interesting to note the result that the proposed approach converts a deep network that does not train into one which does in many cases.", "The method appears to improve the training for moderately deep convolutional networks without batch normalization", "(although this is tested on a single dataset),", "but is not practically useful yet", "since the regularization benefits of Batch Normalization are also taken away." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact" ]
SkaNEl9xM
[ "Summary of paper and review: The paper presents the instability issue of training GANs for semi-supervised learning. ", "Then, they propose to essentially utilize a wgan for semi-supervised learning. ", "The novelty of the paper is minor, ", "since similar approaches have been done before. ", "The analysis is poor, ", "the text seems to contain mistakes, ", "and the results don't seem to indicate any advantage or promise of the proposed algorithm.", "Detailed comments: - Unless I'm grossly mistaken the loss function (2) is clearly wrong. ", "There is a cross-entropy term used by Salimans et al. clearly missing.", "- As well, if equation (4) is referring to feature matching, the expectation should be inside the norm and not outside ", "(this amounts to matching random specific random fake examples to specific random real examples, an imbalanced form of MMD).", "- Theorem 2.1 is an almost literal rewrite of Theorem 2.4 of [1], without proper attribution. ", "Furthermore, Theorem 2.1 is not sufficient to demonstrate existence of this issues. ", "This is why [1] provides an extensive batch of targeted experiments to verify this assumptions. ", "Analogous experiments are clearly missing. ", "A detailed analysis of these assumptions and its implications are missing.", "- In section 3, the authors propose a minor variation of the Improved GAN approach by using a wgan on the unsupervised part of the loss. ", "Remarkably similar algorithms (where the two discriminators are two separate heads) to this have been done before (see for example, [2], but other approaches exist after that, see for examples papers citing [2]).", "- Theorem 3.1 is a trivial consequence of Theorem 3 from WGAN.", "- The experiments leave much to be desired. ", "It is widely known that MNIST is a bad benchmark at this point, ", "and that no signal can be established from a minor success in this dataset. ", "Furthermore, the results in CIFAR don't seem to bring any advantage, considering the .1% difference in accuracy is 1/100 of chance in this dataset.", "[1]: Arjovsky & Bottou, Towards Principled Methods for Training Generative Adversarial Networks, ICLR 2017", "[2]: Mroueh & Sercu, Goel, McGan: Mean and Covariance Feature Matching GAN, ICML 2017" ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "reference", "reference" ]
r1Q8qCdgf
[ "The authors investigate different message passing schedules for GNN learning. ", "Their proposed approach is to partition the graph into disjoint subregions, pass many messages on the sub regions and pass fewer messages between regions (an approach that is already considered in related literature, e.g., the BP literature), with the goal of minimizing the number of messages that need to be passed to convey information between all pairs of nodes in the network. ", "Experimentally, the proposed approach seems to perform comparably to existing methods (or slightly worse on average in some settings). ", "The paper is well-written and easy to read. ", "My primary concern is with novelty. ", "Many similar ideas have been floating around in a variety of different message-passing communities. ", "With no theoretical reason to prefer the proposed approach, it seems like it may be of limited interest to the community if speed is its only benefit (see detailed comments below).", "Specific comments:1) \"When information from any one node has reached all other nodes in the graph for the first time, this problem is considered as solved.\"", "Perhaps it is my misunderstanding of the way in which GNNs work, but isn't the objective actually to reach a set of fixed point equations. ", "If so, then simply propagating information from one side of the graph may not be sufficient.", "2) The experimental results in Section 4.4 are almost impossible to interpret. ", "Perhaps it is better to plot number of edges updated versus accuracy? ", "This at least would put them on equal footing. ", "In addition, the experiments that use randomness should be repeated and plotted on average (just in case you happened to pick a bad schedule).", "3) More generally, why not consider random schedules (i.e., just pick a random edge, update, repeat) or random partitions? ", "I'm not certain that a fixed set will perform best independent of the types of updates being considered, and random schedules, like the fully synchronous case for an important baseline (especially if update speed is all you care about).", "Typos: -pg. 6, \"Thm. 2\" -> \"Table 2\"" ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "evaluation", "request", "fact", "request", "request", "evaluation", "request" ]
Hy0xrQegf
[ "This paper's main thesis is that automatic metrics like BLEU, ROUGE, or METEOR is suitable for task-oriented natural language generation (NLG). ", "In particular, the paper presents a counterargument to \"How NOT To Evaluate Your Dialogue System...\" ", "where Wei et al argue that automatic metrics are not correlated or only weakly correlated with human eval on dialogue generation. ", "The authors here show that the performance of various NN models as measured by automatic metrics like BLEU and METEOR is correlated with human eval.", "Overall, this paper presents a useful conclusion: use METEOR for evaluating task oriented NLG. ", "However, there isn't enough novel contribution in this paper to warrant a publication. ", "Many of the details unnecessary: ", "1) various LSTM model descriptions are unhelpful ", "given the base LSTM model does just as well on the presented tasks ", "2) Many embedding based eval methods are proposed ", "but no conclusions are drawn from any of these techniques." ]
[ "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact" ]
BJHwtGogM
[ "The paper proposes data augmentation as an alternative to commonly used regularisation techniques like weight decay and dropout, and shows for a few reference models / tasks that the same generalization performance can be achieved using only data augmentation.", "I think it's a great idea to investigate the effects of data augmentation more thoroughly.", "While it is a technique that is often used in literature,", "there hasn't really been any work that provides rigorous comparisons with alternative approaches and insights into its inner workings.", "Unfortunately I feel that this paper falls short of achieving this.", "Experiments are conducted on two fairly similar tasks (image classification on CIFAR-10 and CIFAR-100), with two different network architectures.", "This is a bit meager to be able to draw general conclusions about the properties of data augmentation.", "Given that this work tries to provide insight into an existing common practice,", "I think it is fair to expect a much stronger experimental section.", "In section 2.1.1 it is stated that this was a conscious choice because simplicity would lead to clearer conclusions,", "but I think the conclusions would be much more valuable if variety was the objective instead of simplicity, and if larger-scale tasks were also considered.", "Another concern is that the narrative of the paper pits augmentation against all other regularisation techniques, whereas more typically these will be used in conjunction.", "It is however very interesting that some of the results show that augmentation alone can sometimes be enough.", "I think extending the analysis to larger datasets such as ImageNet, as is suggested at the end of section 3, and probably also to different problems than image classification, is going to be essential to ensure that the conclusions drawn hold weight.", "Comments:- The distinction between \"explicit\" and \"implicit\" regularisation is never clearly enunciated.", "A bunch of examples are given for both,", "but I found it tricky to understand the difference from those.", "Initially I thought it reflected the intention behind the use of a given technique;", "i.e. weight decay is explicit because clearly regularisation is its primary purpose --", "whereas batch normalisation is implicit because its regularisation properties are actually a side effect.", "However, the paper then goes on to treat data augmentation as distinct from other explicit regularisation techniques,", "so I guess this is not the intended meaning.", "Please clarify this, as the terms crop up quite often throughout the paper.", "I suspect that the distinction is somewhat arbitrary and not that meaningful.", "- In the abstract, it is already implied that data augmentation is superior to certain other regularisation techniques because it doesn't actually reduce the capacity of the model.", "But this ignores the fact that some of the model's excess capacity will be used to model out-of-distribution data (w.r.t. 
the original training distribution) instead.", "Data augmentation always modifies the distribution of the training data.", "I don't think it makes sense to imply that this is always preferable over reducing model capacity explicitly.", "This claim is referred to a few times throughout the work.", "- It could be more clearly stated that the reason for the regularising effect of batch normalisation is the noise in the batch estimates for mean and variance.", "- Some parts of the introduction could be removed", "because they are obvious, at least to an ICLR audience (like \"the model would not be regularised if alpha (the regularisation parameter) equals 0\").", "- The experiments with smaller dataset sizes would be more interesting if smaller percentages were used.", "50% / 80% / 100% are all on the same order of magnitude", "and this setting is not very realistic.", "In practice, when a dataset is \"too small\" to be able to train a network that solves a problem reliably, it will generally be one or more orders of magnitude too small, not 2x too small.", "- The choices of hyperparameters for \"light\" and \"heavy\" motivation seem somewhat arbitrary and are not well motivated.", "Some parameters which are sampled uniformly at random should be probably be sampled log-uniformly instead,", "because they represent scale factors.", "It should also be noted that much more extreme augmentation strategies have been used for this particular task in literature, in combination with padding (for example by Graham).", "It would be interesting to include this setting in the experiments as well.", "- On page 7 it is stated that \"when combined with explicit regularization, the results are much worse than without it\",", "but these results are omitted from the table.", "This is unfortunate", "because it is a very interesting observation, that runs counter to the common practice of combining all these regularisation techniques together (e.g. L2 + dropout + data augmentation is a common combination).", "Delving deeper into this could make the paper a lot stronger.", "- It is not entirely true that augmentation parameters depend only on the training data and not the architecture (last paragraph of section 2.4).", "Clearly more elaborate architectures benefit more from data augmentation, and might need heavier augmentation to perform optimally", "because they are more prone to overfitting", "(this is in fact stated earlier on in the paper as well).", "It is of course true that these hyperparameters tend to be much more robust to architecture changes than those of other regularisation techniques such as dropout and weight decay.", "This increased robustness is definitely useful", "and I think this is also adequately demonstrated in the experiments.", "- Phrases like \"implicit regularization operates more effectively at capturing reality\" are too vague to be meaningful.", "- Note that weight decay has also been found to have side effects related to optimization", "(e.g. in \"Imagenet classification with deep convolutional neural networks\", Krizhevsky et al.)" ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "request", "request", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "request", "fact", "fact", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "reference" ]
BkYfM_Rgz
[ "It is clear that the problem studied in this paper is interesting. ", "However, after reading through the manuscript, it is not clear to me what are the real contributions made in this paper.", " I also failed to find any rigorous results on generalization bounds. ", "In this case, I cannot recommend the acceptance of this paper." ]
[ "evaluation", "evaluation", "evaluation", "evaluation" ]
SkYNcg5xz
[ "The paper addresses the problem of learners forgetting rare states and revisiting catastrophic danger states. ", "The authors propose to train a predictive ‘fear model’ that penalizes states that lead to catastrophes. ", "The proposed technique is validated both empirically and theoretically. ", "Experiments show a clear advantage during learning when compared with a vanilla DQN. ", "Nonetheless, there are some criticisms than can be made of both the method and the evaluations:", "The fear radius threshold k_r seems to add yet another hyperparameter that needs tuning. ", "Judging from the description of the experiments this parameter is important to the performance of the method and needs to be set experimentally. ", "There seems to be no way of a priori determine a good distance ", "as there is no way to know in advance when a catastrophe becomes unavoidable. ", "No empirical results on the effect of the parameter are given.", "The experimental results support the claim that this technique helps to avoid catastrophic states during initial learning.", "The paper however, also claims to address the longer term problem of revisiting these states once the learner forgets about them, ", "since they are no longer part of the data generated by (close to) optimal policies. ", "This problem does not seem to be really solved by this method. ", "Danger and safe state replay memories are kept, but are only used to train the catastrophe classifier. ", "While the catastrophe classifier can be seen as an additional external memory, ", "it seems that the learner will still drift away from the optimal policy and then need to be reminded by the classifier through penalties. ", "As such the method wouldn’t prevent catastrophic forgetting, ", "it would just prevent the worst consequences by penalizing the agent before it reaches a danger state. ", "It would therefore be interesting to see some long running experiments and analyse how often catastrophic states (or those close to them) are visited. ", "Overall, the current evaluations focus on performance and give little insight into the behaviour of the method. ", "The paper also does not compare to any other techniques that attempt to deal with catastrophic forgetting and/or the changing state distribution ([1,2]).", "In general the explanations in the paper often often use confusing and imprecise language, even in formal derivations, e.g. ‘if the fear model reaches arbitrarily high accuracy’ or ‘if the probability is negligible’.", "It is wasn’t clear to me that the properties described in Theorem 1 actually hold. ", "The motivation in the appendix is very informal and no clear derivation is provided. ", "The authors seem to indicate that a minimal return can be guaranteed because the optimal policy spends a maximum of epsilon amount of time in the catastrophic states and the alternative policy simply avoids these states. ", "However, as the alternative policy is learnt on a different reward, ", "it can have a very different state distribution, even for the non-catastrophics states. ", "It might attach all its weight to a very poor reward state in an effort to avoid the catastrophe penalty. ", "It is therefore not clear to me that any claims can be made about its performance without additional assumptions.", "It seems that one could construct a counterexample using a 3-state chain problem (no_reward,danger, goal) where the only way to get to the single goal state is to incur a small risk of visiting the danger state. 
", "Any optimal policy would therefore need to spend some time e in the danger state, on average. ", "A policy that learns to avoid the danger state would then also be unable to reach the goal state and receive rewards. ", "E.g pi* has stationary distribution (0,e,1-e) and return 0*0+e*Rmin + (1-e)*Rmax. ", "By adding a sufficiently high penalty, policy pi~ can learn to avoid the catastrophic state with distribution (1,0,0) and then gets return 1*0+ 0*Rmin+0*Rmax= 0 < n*_M - e (Rmax - Rmin) = e*Rmin + (1-e)*Rmax - e (Rmax - Rmin). ", "This seems to contradict the theorem. ", "It wasn’t clear what assumptions the authors make to exclude situations like this.", "[1] T. de Bruin, J. Kober, K. Tuyls and R. Babuška, \"Improved deep reinforcement learning for robotics through distribution-based experience retention,\" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 3947-3952.", "[2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 201611835." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "reference", "reference" ]
H1kAEtYlz
[ "The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. ", "It should be noted that the framework does not allow to compute a vectorial representation of a set of nodes not already given as an hyperedge. ", "A set of methods is presented : the first one is based on an auto-encoder technique ; ", "the second one is based on tensor decomposition ; ", "the third one derives from sentence embedding methods. ", "The fourth one extends over node embedding techniques ", "and the last one use spectral methods. ", "The two first methods use plainly the set structure of hyperedges. ", "Experimental results are provided on semi-supervised regression tasks. ", "They show very similar performance for all methods and variants. ", "Also run-times are compared ", "and the results are expected. ", "In conclusion, the paper gives an overview of methods for computing hypernode embeddings. ", "This is interesting in its own. ", "Nevertheless, as the target problem on hypergraphs is left unspecified, ", "it is difficult to infer conclusions from the study. ", "Therefore, I am not convinced that the paper should be published in ICLR'18.", "* typos * Recent surveys on graph embeddings have been published in 2017 and should be cited as \"A comprehensive survey of graph embedding ...\" by Cai et al", "* Preliminaries. The occurrence number R(g_i) are not modeled in the hypergraphs. ", "A graph N_a is defined but not used in the paper.", "* Section 3.1. the procedure for sampling hyperedges in the lattice shoud be given. ", "At least, you should explain how it is made efficient when the number of nodes is large.", "* Section 3.2. The method seems to be restricted to cases where the cardinality of hyperedges can take a small number of values. ", "This is discussed in Section 3.6 ", "but the discussion is not convincing enough.", "* Section 3.3 The term Sen2vec is not common knowledge", "* Section 3.3 The length of the sentences depends on the number of permutations of $k$ elements. ", "How can you deal with large k ?", "* Section 3.4 and Section 3.5. The methods proposed in these two sections should be related with previous works on hypergraph kernels. ", "I.e. there should be mentions on the clique expansion and star expansion of hypergraphs. ", "This leads to the question why graph embeddings methods on these expansions have not be considered in the paper.", "* Section 4.1. Only hyperedeges of cardinality in [2,6] are considered. ", "This seems a rather strong limitation ", "and this hypothesis does not seem pertinent in many applications. ", "* Section 4. For online multi-player games, hypernode embeddings only allow to evaluate existing teams, i.e. already existing as hyperedges in the input hypergraph. ", "One of the most important problem for multi-player games is team making where team evaluation should be made for all possible teams.", "* Section 5. Seems redundant with the Introduction." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "fact", "request", "request", "fact", "fact", "evaluation", "evaluation", "fact", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation" ]
rkFUZ2uxf
[ "The authors introduce a set of very simple tasks that are meant to illustrate the challenges of learning visual relations.", "They then evaluate several existing network architectures on these tasks,", "and show that results are not as impressive as others might have assumed they would be.", "They show that while recent approaches (e.g. relational networks) can generalize reasonably well on some tasks, these results do not generalize as well to held-out-object scenarios as might have been assumed.", "Clarity: The paper is fairly clearly written.", "I think I mostly followed it.", "Quality: I'm intrigued by but a little uncomfortable with the generalization metrics that the authors use.", "The authors estimate the performance of algorithms by how well they generalize to new image scenarios when trained on other image conditions.", "The authors state that \". . . the effectiveness of an architecture to learn visual-relation problems should be measured in terms of generalization over multiple variants of the same problem, not over multiple splits of the same dataset.\"", "Taken literally, this would rule out a lot of modern machine learning, even obviously very good work.", "On the other hand, it's clear that at some point, generalization needs to occur in testing ability to understand relationships.", "I'm a little worried that it's \"in the eye of the beholder\" whether a given generalization should be expected to work or not.", "There are essentially three scenarios of generalization discussed in the paper: (a) various generalizations of image parameters in the PSVRT dataset (b) various hold-outs of the image parameters in the sort-of-CLEVR dataset (c) from sort-of-CLEVR \"objects\" to PSVRT bit patterns", "The result that existing architectures didn't do very well at these generalizations (especially b and c) *may* be important -- or it may not.", "Perhaps if CNN+RN were trained on a quite rich real-world training set with a variety of real-world three-D objects beyond those shown in sort-of-CLEVR, it would generalize to most other situations that might be encountered.", "After all, when we humans generalize to understanding relationships, exactly what variability is present in our \"training sets\" as compared to our \"testing\" situations?", "How do the authors know that humans are effectively generalizing rather than just \"interpolating\" within their (very rich) training set?", "It's not totally clear to me that if totally naive humans (who had never seen spatial relationships before) were evaluated on exactly the training/testing scenarios described above, that they would generalize particularly well either.", "I don't think it can just be assumed a priori that humans would be super good this form of generalization.", "So how should authors handle this criticism?", "What would be useful would either be some form of positive control.", "Either human training data showing very effective generalization (if one could somehow make \"novel\" relationships unfamiliar to humans), or a different network architecture that was obviously superior in generalization to CNN+RN.", "If such were present, I'd rate this paper significantly higher.", "Also, I can't tell if I really fully believe the results of this paper.", "I don't doubt that the authors saw the results they report.", "However, I think there's some chance that if the same tasks were in the hands of people who *wanted* CNNs or CNN+RN to work well, the results might have been different.", "I can't point to exactly what would have to 
be different to make things \"work\",", "because it's really hard to do that ahead of actually trying to do the work.", "However, this suspicion on my part is actually a reason I think it might be *good* for this paper to be published at ICLR.", "This will give the people working on (e.g.) CNN+RN somewhat more incentive to try out the current paper's benchmarks and either improve their architecture or show that the the existing one would have totally worked if only tried correctly.", "I myself am very curious about what would happen and would love to see this exchange catalyzed.", "Originality and Significance: The area of relation extraction seems to me to be very important and probably a bit less intensively worked on that it should be.", "However, as the authors here note, there's been some recent work (e.g. Santoro 2017) in the area.", "I think that the introduction of baselines benchmark challenge datasets such as the ones the authors describe here is very useful, and is a somewhat novel contribution." ]
[ "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation" ]
HJ3OcT3gG
[ "In this paper, the authors propose a novel method for generating adversarial examples when the model is a black-box and we only have access to its decisions (and a positive example). ", "It iteratively takes steps along the decision boundary while trying to minimize the distance to the original positive example.", "Pros:- Novel method that works under much stricter and more realistic assumptions.", "- Fairly thorough evaluation.", "- The paper is clearly written.", "Cons:- Need a fair number of calls to generate a small perturbation. ", "Would like to see more analysis of this.", "- Attack works for making something outside the boundary (not X), ", "but is less clear how to generate image to meet a specific classification (X). ", "3.2 attempts this slightly by using an image in the class, ", "but is less clear for something like FaceID.", "- Unclear how often the images generated look reasonable. ", "Do different random initializations given different quality examples?" ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "fact", "evaluation", "evaluation", "request" ]
Bk9Z3ZQlG
[ "The paper proposes LSD-NET, an active vision method for object classification. ", "In the proposed method, based on a given view of an object, the algorithm can decide to either classify the object or to take a discrete action step which will move the camera in order to acquire a different view of the object. ", "Following this procedure the algorithm iteratively moves around the object until reaching a maximum number of allowed moves or until a object view favorable for classification is reached.", "The main contribution of the paper is a hierarchical action space that distinguishes between camera-movement actions and classification actions. ", "At the top-level of the hierarchy, the algorithm decides whether to perform a movement or a classification -type action. ", "At the lower-level, the algorithm either assign a specific class label (for the case of classification actions) or performs a camera movement (for the case of camera-movement actions). ", "This hierarchical action space results in reduced bias towards classification actions.", "Strong Points - The content is clear and easy to follow.", "- The proposed method achieves competitive performance w.r.t. existing work.", "Weak Points- Some aspects of the proposed method could have been evaluated better.", "- A deeper evaluation/analysis of the proposed method is missing.", "Overall the proposed method is sound and the paper has a good flow and is easy to follow. ", "The proposed method achieves competitive results, ", "and up to some extent, shows why it is important to have the proposed hierarchical action space.", "My main concerns with this manuscript are the following:", "In some of the tables a LSTM variant? of the proposed method is mentioned. ", "However it is never introduced properly in the text. ", "Can you indicate how this LSTM-based method differs from the proposed method?", "At the end of Section 5.2 the manuscript states: \"In comparison to other methods, our method is agnostic of the starting point i.e. it can start randomly on any image and it would get similar testing accuracies.\" ", "This suggests that the method has been evaluated over different trials considering different random initializations. ", "However, this is unclear based on the evaluation protocol presented in Section 5. ", "If this is not the case, perhaps this is an experiment that should be conducted.", "In Section 3.2 it is mentioned that different from typical deep reinforcement learning methods, the proposed method uses a deeper AlexNet-like network. ", "In this context, it would be useful to drop a comment on the computation costs added in training/testing by this deeper model.", "Table 3 shows the number of correctly and wrongly classified objects as a function of the number of steps taken. ", "Here we can notice that around 50% of the objects are in the step 1 and 12, ", "which as correctly indicated by the manuscript, suggests that movement does not help for those cases. ", "Would it be possible to have more class-specific (or classes grouped into intermediate categories) visualization of the results? ", "This would provide a better insight of what is going on and when exactly actions related to camera movements really help to get better classification performance. ", "On the presentation side, I would recommend displaying the content of Table 3 in a plot. ", "This may display the trends more clearly. ", "Moreover, I would recommend to visualize the classification accuracy as a function of the step taken by the method. 
", "In this regard, a deeper analysis of the effect of the proposed hierarchical action space is a must.", "I would encourage the authors to address the concerns raised on my review." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "fact", "evaluation", "request", "quote", "evaluation", "evaluation", "request", "fact", "request", "fact", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "request" ]
rydWCNKxz
[ "Summary: This paper empirically studies adversarial perturbations dx and what the effects are of adversarial training (AT) with respect to shared (dx fools for many x) and singular (only for a single x) perturbations.", "Experiments use a (previously published) iterative fast-gradient-sign-method and use a Resnet on CIFAR.", "The authors conclude that in this experimental setting: - AT seems to defend models against shared dx's.", "- This is visible on universal perturbations,", "which become less effective as more AT is applied.", "- AT decreases the effectiveness of adversarial perturbations, e.g. AT decreases the number of adversarial perturbations that fool both an input x and x with e.g. a contrast change.", "- Singular perturbations are easily detected by a detector model,", "as such perturbations don't change much when applying AT.", "Pro:- Paper addresses an important problem: qualitative / quantitative understanding of the behavior of adversarial perturbations is still lacking.", "- The visualizations of universal perturbations as they change during AT are nice.", "- The basic observation wrt the behavior of AT is clearly communicated.", "Con:- The experiments performed are interesting directions, although unfocused and rather limited in scope.", "For instance, does the same phenomenon happen for different datasets?", "Different models?", "- What happens when we use adversarial attacks different from FGSM?", "Do we get similar results?", "- The papers lacks a more in-depth theoretical analysis.", "Is there a principled reason AT+FGSM defends against universal perturbations?", "Overall:- As is, it seems to me the paper lacks a significant central message (due to limited and unfocused experiments) or significant new theoretical insight into the effect of AT.", "A number of questions addressed are interesting starting points towards a deeper understanding of *how* the observations can be explained and more rigorous empirical investigations.", "Detailed: -" ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "non-arg" ]
SJzxBpKeM
[ "SUMMARY: This work is about learning the validity of a sequences in specific application domains like SMILES strings for chemical compounds. ", "In particular, the main emphasis is on predicting if a prefix sequence could possibly be extended to a complete valid sequence. ", "In other words, one tries to predict if there exists a valid suffix sequence, and based on these predictions, the goal is to train a generative model that always produces valid sequences. ", "In the proposed reinforcement learning setting, a neural network models the probability that a certain action (adding a symbol) will result in a valid full sequence. ", "For training the network, a large set of (validity-)labelled sequences would be needed. ", "To overcome this problem, the authors introduce an active learning strategy, where the information gain is re-expressed as the conditional mutual information between the the label y and the network weights w, and this mutual information is maximized in a greedy sequential manner. ", "EVALUATION: CLARITY & NOVELTY: In principle, the paper is easy to read. ", "Unfortunately, however, for the reader is is not easy to find out what the authors consider their most relevant contribution. ", "Every single part of the model seems to be quite standard (basically a network that predicts the probability of a valid sequence and an information-gain based active learning strategy) ", "- so is the specific application to SMILES strings what makes the difference here? ", "Or is is the specific greedy approximation to the mutual information criterion in the active learning part? ", "Or is it the way how you augment the dataset? ", "All these aspects might be interesting, ", "but somehow I am missing a coherent picture.", "SIGNIFICANCE: it is not entirely clear to me if the proposed \"pruning\" strategy for the completion of prefix sequences can indeed be generally applied to sequence modelling problems, ", "because in more general domains it might be very difficult to come up with reasonable validity estimates for prefixes that are significantly shorter than the whole sequence. ", "I am not so familiar with SMILES strings ", "-- but could it be that the experimental success reported here is mainly a result of the very specific structure of valid SMILES strings? ", "But then, what can be learned for general sequence validation problems?" ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "request", "request" ]
HJ75M8ogM
[ "The authors have addressed the problem of translating natural language queries to SQL queries. ", "They proposed a deep neural network based solution which combines the attention based neural semantic parser and pointer networks. ", "They also released a new dataset WikiSQL for the problem. ", "The proposed method outperforms the existing semantic parsing baselines on WikiSQL dataset.", "Pros:1. The idea of using pointer networks for reducing search space of generated queries is interesting. ", "Also, using extrinsic evaluation of generated queries handles the possibility of paraphrasing SQL queries.", "2. A new dataset for the problem.", "3. The experiments report a significant boost in the performance compared to the baseline. ", "The ablation study is helpful for understanding the contribution of different component of the proposed method.", "Cons:1. It would have been better to see performance of the proposed method in other datasets (wherever possible). ", "This is my main concern about the paper.", "2. Extrinsic evaluation can slow down the overall training. ", "Comparison of running times would have been helpful.", "3. More details about training procedure (specifically for the RL part) would have been better." ]
[ "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "request" ]
rkejdYtxz
[ "Summary:This work is about model evaluation for molecule generation and design. ", "19 benchmarks are proposed, small data sets are expanded to a large, standardized data set ", "and it is explored how to apply new RL techniques effectively for molecular design.", "on the positive side: The paper is well written, quality and clarity of the work are good. ", "The work provides a good overview about how to apply new reinforcement learning techniques for sequence generation. ", "It is investigated how several RL strategies perform on a large, standardized data set. ", "Different RL models like Hillclimb-MLE, PPO, GAN, A2C are investigated and discussed. ", "An implementation of 19 suggested benchmarks of relevance for de novo design will be provided as open source as an OpenAI Gym. ", "on the negative side: There is no new novel contribution on the methods side. ", "minor comments: Section 2.1. see Fig.2 —> see Fig.1", "page 4just before equation 8: the the" ]
[ "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "request" ]
ByCeFUNgz
[ "The authors propose a new episodic reinforcement learning algorithm based on contextual bandit oracles.", "The key specificity of this algorithm is its ability to deal with the credit assignment problem by learning automatically a progressive \"reward shaping\" (the residual losses) from a feedback that is only provided at the end of the epochs.", "The paper is dense but well written.", "The theoretical grounding is a bit thin or hard to follow.", "The authors provide a few regret theoretical results (that I did not check deeply) obtained by reduction to \"value-aware\" contextual bandits.", "The experimental section is solid.", "The method is evaluated on several RL environments against state of the art RL algorithms.", "It is also evaluated on bandit structured prediction tasks.", "An interesting synthetic experiment (Figure 4) is also proposed to study the ability of the algorithm to work on both decomposable and non-decomposable structured prediction tasks.", "Question 1: The credit assignment approach you propose seems way more sophisticated than eligibility traces in TD learning.", "But sometimes old and simple methods are not that bad.", "Could you develop a bit on the relation between RESLOPE and eligibility traces ?", "Question 2: RESLOPE is built upon contextual bandits which require a stationary environment.", "Does RESLOPE inherit from this assumption?", "Typos: page 1 \"scalar loss that output.\" -> \"scalar loss.\"", "\", effectively a representation\" -> \". By effective we mean effective in term of credit assignment.\"", "page 5 \"and MTR\" -> \"and DR\"", "page 6 \"in simultaneously.\" -> ???", "\".In greedy\" -> \". In greedy\"" ]
[ "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request" ]
rkSREOYgM
[ "This paper proposes a device placement algorithm to place operations of tensorflow on devices. ", "Pros: 1. It is a novel approach which trains the placement end to end.", "2. The experiments are solid to demonstrate this method works very well.", "3. The writing is easy to follow.", "4. This would be a very useful tool for the community if open sourced.", "Cons: 1. It is not very clear in the paper whether the training happens for each model yielding separate agents, or a shared agent is trained and used for all kinds of models. ", "The latter would be more exciting. ", "The adjacency matrix varies size for different graphs, ", "so I guess a separate agent is trained for each graph? ", "However, if the agent is not shared, why not just use integer to represent each operation in the graph, ", "since overfitting would be more desirable in this case.", "2. Averaging the embedding is hard to understand especially for the output sizes and number of outputs.", "3. It is not clear how the adjacency information is used." ]
[ "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "non-arg", "request", "fact", "evaluation", "evaluation" ]
Bycjn6tef
[ "Paper summary: Existing works on multi-task neural networks typically use hand-tuned weights for weighing losses across different tasks.", "This work proposes a dynamic weight update scheme that updates weights for different task losses during training time by making use of the loss ratios of different tasks.", "Experiments on two different network indicate that the proposed scheme is better than using hand-tuned weights for multi-task neural networks.", "Paper Strengths:- The proposed technique seems simple yet effective for multi-task learning.", "- Experiments on two different network architectures showcasing the generality of the proposed method.", "Major Weaknesses:- The main weakness of this work is the unclear exposition of the proposed technique.", "Entire technique is explained in a short section-3.1 with many important details missing.", "There is no clear basis for the main equations 1 and 2.", "How does equation-2 follow from equation-1?", "Where is the expectation coming from?", "What exactly does ‘F’ refer to?", "There is dependency of ‘F’ on only one of sides in equations 1 and 2?", "More importantly, how does the gradient normalization relate to loss weight update?", "It is very difficult to decipher these details from the short descriptions given in the paper.", "- Also, several details are missing in toy experiments.", "What is the task here?", "What are input and output distributions and what is the relation between input and output?", "Are they just random noises?", "If so, is the network learning to overfit to the data as there is no relationship between input and output?", "Minor Weaknesses:- There are no training time comparisons between the proposed technique and the standard fixed loss learning.", "- Authors claim that they operate directly on the gradients inside the network.", "But, as far as I understood, the authors only update loss weights in this paper.", "Did authors also experiment with gradient normalization in the intermediate CNN layers?", "- No comparison with state-of-the-art techniques on the experimented tasks and datasets.", "Clarifications:- See the above mentioned issues with the exposition of the technique.", "- In the experiments, why are the input images downsampled to 320x320?", "- What does it mean by ‘unofficial dataset’ (page-4).", "Any references here?", "- Why is 'task normalized' test-time loss as good measure for comparison between models in the toy example (Section 4)?", "The loss ratios depend on initial loss,", "which is not important for the final performance of the system.", "Suggestions:- I strongly suggest the authors to clearly explain the proposed technique to get this into a publishable state.", "- The term ’GradNorm’ seem to be not defined anywhere in the paper.", "Review Summary:Despite promising results, the proposed technique is quite unclear from the paper.", "With its poor exposition of the technique, it is difficult to recommend this paper for publication." ]
[ "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "evaluation", "fact", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "non-arg", "non-arg", "non-arg", "non-arg", "fact", "evaluation", "request", "fact", "evaluation", "evaluation" ]
HyWGBr5lf
[ "Summary: The authors proposed an unsupervised time series clustering methods built with deep neural networks.", "The proposed model is equipped with an encoder-decoder and a clustering model.", "First, the encoder employs CNN to shorten the time series and extract local temporal features,", "and the CNN is followed by bidirectional LSTMs to get the encoded representations.", "A temporal clustering model and a DCNN decoder are applied on the encoded representations and jointly trained.", "An additional heatmap generator component can be further included in the clustering model.", "The authors compared the proposed method with hierarchical clustering with 4 different temporal similarity methods on several univariate time series datasets.", "Detailed comments:The problem of unsupervised time series clustering is important and challenging.", "The idea of utilizing deep learning models to learn encoded representations for clustering is interesting and could be a promising solution.", "One potential limitation of the proposed method is that it is only designed for univariate time series of the same temporal length,", "which limits the usage of this model in practice.", "In addition, given that the input has fixed length, clustering baselines for static data can be easily applied", "and should be compared to demonstrate the necessity of temporal clustering.", "Some important details are missing or lack of explanations.", "For example, what is the size of each layer and the dimension of the encoded space?", "How much does the model shorten the input time series and how is this be determined?", "How does the model combine the heatmap output (which is a sequence of the same length as the time series) and the clustering output (which is a vector of size K) in Figure 1?", "The heatmap shown in Figure 3 looks like the negation of the decoded output (i.e., lower value in time series -> higher value in heatmap).", "How do we interpret the generated heatmap?", "From the experimental results, it is difficult to judge which method/metric is the best.", "For example, in Figure 4, all 4 DTC-methods achieved the best performance on one or two datasets.", "Though several datasets are evaluated in experiments, they are relatively small.", "Even the largest dataset (Phalanges OutlinesCorrect) has only 2 thousand samples,", "and the best performance is achieved by one of the baseline, with AUC score only 0.586 for binary classification.", "Minor suggestion: In Figure 3, instead of showing the decoded output (reconstruction), it may be more helpful to visualize the encoded time series", "since the clustering method is applied directly on those encoded representations." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "non-arg", "non-arg", "non-arg", "evaluation", "non-arg", "evaluation", "fact", "evaluation", "fact", "fact", "request", "fact" ]
Hk7vlKsxz
[ "Summary: The authors present a simple variation of vanilla recurrent neural networks, which use ReLU hiddens and a fixed identity matrix that is added to the hidden-to-hidden weight matrix. ", "This identity connection acts as a “surrogate memory” component, preserving hidden activations over time steps. ", "The experiments demonstrate that this architecture reliably solves the addition task for up to 400 input frames. ", "It also achieves a very good performance on sequential and permuted MNIST and achieves SOTA performance on bAbI.", "The authors observe that the proposed recurrent identity network (RIN) is relatively robust to hyperparameter choices. ", "After Le et al. (2015), the paper presents another convincing case for the application of ReLUs in RNNs.", "Review: I very much like the paper. ", "The motivation and architecture is presented very clearly ", "and I am happy to also see explorations of simpler recurrent architectures in parallel to research of gated architectures!", "I have a few comments and questions:1) Clarification: In Section 2.2, do you really mean bit-wise multiplication or element-wise? ", "If bit-wise, can you elaborate why? ", "I might have missed something.", "2) Why does the learning curve of the IRNN stop around epoch 270 in Figure 2c? ", "Also some curves in the appendix stop abruptly without visible explosions. ", "Were these experiments run until completion? ", "If so, would it be possible to plot the complete curves?", "3) I think for a fair comparison with LSTMs and IRNNs a limited hyperparameter search should be performed separately on all three architectures at least for the addition task. ", "Optimal hyperparameters are usually model-specific. ", "Admittedly, the authors mention that they do not intend to make claims about superior performance to LSTMs, ", "however the competitive performance of small RINs is mentioned a couple of times in the manuscript.", "Le et al. (2015) for instance perform a coarse grid search for each model.", "4) I wouldn't say that ResNets are Gated Neural Networks, ", "as the branches are just summed up. ", "There is no (multiplicative) gating as in Highway Networks.", "5) I think what enables the training of very deep networks or LSTMs on long sequences is the presence of a (close-to-)identity component in forward/backward propagation, not the gating. ", "The use of ReLU activations in IRNNs (with identity initialization of the hidden-to-hidden weights) and RINs (effectively initialized with identity plus some noise) makes the recurrence more linear than with squashing activation functions.", "6) Regarding the absence of gating in RINs: What is your intuition on how the model would perform in tasks for which conditional forgetting is useful. ", "Consider for example a task with long sequences, outputs at every time step and hidden activations not necessarily being encouraged to estimate last step hidden activations. ", "Would RINs readily learn to reset parts of the hidden state?", "7) Henaff et al. (2016) might be related, ", "as they are also looking into the addition task with long sequences.", "Overall, the presented idea is novel to the best of my knowledge ", "and the manuscript is well-written. ", "I would recommend it for acceptance, ", "but would like to see the above points addressed (especially 1-3 and some comments on 4-6). ", "After a revision I would consider to increase the score.", "References: Henaff, Mikael, Arthur Szlam, and Yann LeCun. 
\"Recurrent orthogonal networks and long-memory tasks.\" In International Conference on Machine Learning, pp. 2034-2042. 2016.", "Le, Quoc V., Navdeep Jaitly, and Geoffrey E. Hinton. \"A simple way to initialize recurrent networks of rectified linear units.\" arXiv preprint arXiv:1504.00941 (2015)." ]
[ "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "request", "fact", "non-arg", "request", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "reference", "reference" ]
SJEDvEvez
[ "This reviewer has found the proposed approach quite compelling, ", "but the empirical validation requires significant improvements:", "1) you should include in your comparison Query-by- Bagging & Boosting, ", "which are two of the best out-of-the-box active learning strategies", "2) in your empirical validation you have (arbitrarily) split the 14 datasets in 7 training and testing ones, ", "but many questions are still unanswered:", " - would any 7-7 split work just as well (ie, cross-validate over the 14 domains)", " - do you what happens if you train on 1, 2, 3, 8, 10, or 13 domains? are the results significantly different? ", "OTHER COMMENTS: - p3: both images in Figure 1 are labeled Figure 1.a", "- p3: typo \"theis\" --> \"this\" ", "Abe & Mamitsuksa (ICML-1998). Query Learning Strategies Using Boosting and Bagging." ]
[ "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "request", "request", "fact", "request", "reference" ]
BkOfh_eWM
[ "The paper discusses the problem of optimizing neural networks with hard threshold and proposes a novel solution to it.", "The problem is of significance because in many applications one requires deep networks which uses reduced computation and limited energy.", "The authors frame the problem of optimizing such networks to fit the training data as a convex combinatorial problems.", "However since the complexity of such a problem is exponential, the authors propose a collection of heuristics/approximations to solve the problem.", "These include, a heuristic for setting the targets at each layer, using a soft hinge loss, mini-batch training and such.", "Using these modifications the authors propose an algorithm (Algorithm 2 in appendix) to train such models efficiently.", "They compare the performance of a bunch of models trained by their algorithm against the ones trained using straight-through-estimator (SSTE) on a couple of datasets, namely, CIFAR-10 and ImageNet.", "They show superiority of their algorithm over SSTE.", "I thought the paper is very well written and provides a really nice exposition of the problem of training deep networks with hard thresholds.", "The authors formulation of the problem as one of combinatorial optimization", "and proposing Algorithm 1 is also quite interesting.", "The results are moderately convincing in favor of the proposed approach.", "Though a disclaimer here is that I'm not 100% sure that SSTE is the state of the art for this problem.", "Overall i like the originality of the paper and feel that it has a potential of reasonable impact within the research community.", "There are a few flaws/weaknesses in the paper though, making it somewhat lose.", "- The authors start of by posing the problem as a clean combinatorial optimization problem and propose Algorithm 1.", "Realizing the limitations of the proposed algorithm, given the assumptions under which it was conceived in,", "the authors relax those assumptions in the couple of paragraphs before section 3.1", "and pretty much throw away all the nice guarantees, such as checks for feasibility, discussed earlier.", "- The result of this is another algorithm (I guess the main result of the paper), which is strangely presented in the appendix as opposed to the main text, which has no such guarantees.", "- There is no theoretical proof that the heuristic for setting the target is a good one, other than a rough intuition", "- The authors do not discuss at all the impact on generalization ability of the model trained using the proposed approach.", "The entire discussion revolves around fitting the training set and somehow magically everything seem to generalize and not overfit." ]
[ "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation" ]
Bk8udEEeM
[ "Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term.", "The model is then trained with CelebA under different parameters settings and results are analyzed.", "Quality and significance: This is quite a technical paper, written in a very compressed form and is a bit hard to follow.", "Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models.", "Clarity: I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented.", "Originality: Seems original.", "Pros: * Interesting energy formulation and variation over BEGAN", "Cons: * Not a clear paper", "* results are only partially motivated and analyzed" ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation" ]
ryVd3dFgf
[ "This paper proposes a method for parameter space noise in exploration.", "Rather than the \"baseline\" epsilon-greedy (that sometimes takes a single action at random)... this paper presents an method for perturbations to the policy.", "In some domains this can be a much better approach and this is supported by experimentation.", "There are several things to like about the paper:", "- Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) ", "and there are clearly some examples where this approach does much better.", "- The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper.", "- This is clearly a very practical and extensible idea... ", "the authors present good results on a whole suite of tasks.", "- The paper is clear and well written, ", "it has a narrative and the plots/experiments tend to back this up.", "- I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances).", "However, there are also a few things to be cautious of... and some of them serious:", "- At many points in the paper the claims are quite overstated. ", "Parameter noise on the policy won't necessarily get you efficient exploration... ", "and in some cases it can even be *worse* than epsilon-greedy... ", "if you just read this paper you might think that this was a truly general \"statistically efficient\" method for exploration (in the style of UCRL or even E^3/Rmax etc).", "- For instance, the example in 4.2 only works because the optimal solution is to go \"right\" in every timestep... ", "if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work... ", "By contrast, methods such as UCRL/PSRL and RLSVI https://arxiv.org/abs/1402.0635 *are* able to learn polynomially in this type of environment. ", "I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of \"deep exploration\" ", "and you should be clear that your parameter noise does *not* address this issue.", "- That said I think that the example in 4.2 is *great* to include... ", "you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration. ", "Essentially you perform a local exploration rule in parameter space... ", "and sometimes this is great ", "- but you should be careful to distinguish this type of method from other approaches. ", "This must be mentioned in section 4.2 \"does parameter space noise explore efficiently\" ", "because the answer you seem to imply is \"yes\" ... when the answer is clearly NOT IN GENERAL... but it can still be good sometimes ;D", "- The demarcation of \"RL\" and \"evolutionary strategies\" suggests a pretty poor understanding of the literature and associated concepts. ", "I can't really support the conclusion \"RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually\". ", "This sort of sentence is clearly wrong and for many separate reasons:", " - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! 
", "It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as \"RL\" and that's just really not a good way to think about it.", " - Parameter noise exploration can be *extremely* bad relative to efficient exploration methods ", "(see section 2.4.3 https://searchworks.stanford.edu/view/11891201)", "Overall, I like the paper, I like the algorithm ", "and I think it is a valuable contribution.", "I think the value in this paper comes from a practical/simple way to do policy randomization in deep RL.", "In some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy.", "However, I hope that you address some of the concerns I have raised in this review.", "You shouldn't claim such a universal revolution to exploration / RL / evolution ", "because I don't think that it's correct.", "Further, I don't think that clarifying that this method is *not* universal/general really hurts the paper... ", "you could just add a section in 4.2 pointing out that the \"chain\" example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform \"deep exploration\").", "I vote accept." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "reference", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "request", "evaluation" ]
BkcUX-5eG
[ "This paper investigates learning representations for the problem of nearest neighbor (NN) search by exploring various deep learning architectural choices.", "The crux of the paper is the connection between NN and the angles between the closest neighbors --", "the higher this angle, more data points need to be explored for finding the nearest one, and thus more computational expense.", "Thus, the paper proposes to learn a network that tries to reduce the angles between the inputs and the corresponding class vectors in a supervised framework using softmax cross-entropy loss.", "Three architectural choices are investigated,", "(i) controlling the norm of output layers of the CNN (using batch norm essentially),", "(ii) removing relu so that the outputs are well-distributed in both positive and negative orthants,", "and (iii) normalizing the class vectors.", "Experiments are given on multiMNIST and Sports 1M and show improvements.", "Pros: 1) The paper explores different architectural choices for the deep network to some depth and show extensive results.", "2) The results do demonstrate clearly the advantage of the various choices and is useful", "3) The theoretical connections between data angles and query times are quite interesting,", "Cons: 1) Unclear Problem Statement.", "I find the problem statement a bit vague.", "Standard NN search finds a data point in the database closest to a query under some distance metric.", "While, the current paper uses the cosine similarity as the distance, the deep framework is trained on class vectors using cross-entropy loss.", "I do not think class labels are usually assumed to be given in the standard definition of NN,", "and it is not clear to me how the proposed setup can accommodate NN without class labels.", "Thus as such, I see this paper is perhaps proposing a classification problem and not an NN problem per se.", "2) Lacks Focus The paper lacks a good organization in my opinion.", "Things that are perhaps technically important are moved to the Appendix.", "For example, I find the theoretical part of the paper (e.g., Theorem 1) quite elegant and perhaps the main innovation in this paper.", "However, that is moved completely to the Appendix.", "So it cannot be really considered a contribution.", "It is also not clear if those theoretical results are novel.", "2) Disconnect/Unclear Assumptions There seems to be some disconnect between LSH and deep learning architectures explored in Sections 2 and 3 respectively.", "Are the assumptions used in the theoretical results for LSH also assumed in the deep networks?", "For example, as far as I know, the standard LSH works assumes the projection hyperplanes are randomly chosen and the theoretical results are based on such assumptions.", "It is not clear how a softmax output of a CNN, which is trained in a supervised way, follow such assumptions.", "It would be important if the paper could clarify such assumptions to make sure the sections are congruent.", "3) No Related Work", "There have been several efforts for adapting deep frameworks into KNN.", "The paper ignores all such works.", "Thus, it is not clear how significant is the proposed contribution.", "There are also not comparisons what-so-ever to competitive prior works.", "4) Novelty The main contribution of this paper is basically a set of experiments looking into architectural choices.", "However, the results of this study do not provide any surprises.", "It appears that batch normalization is essential for good performances,", "while using RELU is 
not so when one wants to use all directions for effective data encoding.", "Thus, as such, the novelty or the contributions of this paper are minor.", "Overall, while I find there are some interesting theoretical bits in this paper,", "it lacks focus,", "the experiments do not offer any surprises,", "and there are no comparisons with prior literature.", "Thus, I do not think this paper is ready to be accepted in its present form." ]
[ "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation" ]
HJyXsRtef
[ "This paper presents a new approach to determining what to measure and when to measure it, using a novel deep learning architecture.", "The problem addressed is important and timely", "and advances here may have an impact on many application areas outside medicine.", "The approach is evaluated on real-world medical datasets and has increased accuracy over the other methods compared against.", "+ A key advantage of the approach is that it continually learns from the collected data, using new measurements to update the model, and that it runs efficiently even on large real-world datasets.", "-However, the related work section is significantly underdeveloped, making it difficult to really compare the approach to the state of the art.", "The paper is ambitious and claims to address a variety of problems,", "but as a result each segment of related work seems to have been shortchanged.", "In particular, the section on missing data is missing a large amount of recent and related work.", "Normally, methods for handling missing data are categorized based on the missingness model (MAR/MCAR/MNAR).", "The paper seems to assume all data are missing at random, which is also a significant limitation of the methods.", "-The paper is organized in a nonstandard way, with the methods split across two sections, separated by the related work.", "It would be easier to follow with a more common intro/related work/methods structure.", "Questions: -One of the key motivations for the approach is sensing in medicine.", "However, many tests come as a group (e.g. the chem-7 or other panels).", "In this case, even if the only desired measurement is glucose, others will be included as well.", "Is it possible to incorporate this?", "It may change the threshold for the decision, as a combination of measures can be obtained for the same cost." ]
[ "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "fact", "fact", "request", "evaluation" ]
S1jezarxG
[ "The paper offers a formal proof that gradient descent on the logistic loss converges very slowly to the hard SVM solution in the case where the data are linearly separable.", "This result should be viewed in the context of recent attempts at trying to understand the generalization ability of neural networks, which have turned to trying to understand the implicit regularization bias that comes from the choice of optimizer.", "Since we do not even understand the regularization bias of optimizers for the simpler case of linear models,", "I consider the paper's topic very interesting and timely.", "The overall discussion of the paper is well written,", "but on a more detailed level the paper gives an unpolished impression, and has many technical issues.", "Although I suspect that most (or even all) of these issues can be resolved, they interfere with checking the correctness of the results.", "Unfortunately, in its current state I therefore do not consider the paper ready for publication.", "Technical Issues: The statement of Lemma 5 has a trivial part and for the other part the proof is incorrect: Let x_u = ||nabla L(w(u))||^2.", "- Then the statement sum_{u=0}^t x_u < infinity is trivial,", "because it follows directly from ||nabla L(w(u))||^2 < infinity for all u.", "I would expect the intended statement to be sum_{u=0}^infinity x_u < infinity,", "which actually follows from the proof of the lemma.", "- The proof of the claim that t*x_t -> 0 is incorrect:", "sum_{u=0}^t x_u < infinity does not in itself imply that t*x_t -> 0, as claimed.", "For instance, we might have x_t = 1/i^2 when t=2^i for i = 1,2,... and x_t = 0 for all other t.", "Definition of tilde{w} in Theorem 4: - Why would tilde{w} be unique?", "In particular, if the support vectors do not span the space, because all data lie in the same lower-dimensional hyperplane, then this is not the case.", "- The KKT conditions do not rule out the case that \\hat{w}^top x_n = 1, but alpha_n = 0 (i.e. a support vector that touches the margin, but does not exert force against it).", "Such n are then included in cal{S}, but lead to problems in (2.7),", "because they would require tilde{w}^top x_n = infinity, which is not possible.", "In the proof of Lemma 6, case 2. 
at the bottom of p.14: - After the first inequality, C_0^2 t^{-1.5 epsilon_+} should be C_0^2 t^{-epsilon_+}", "- After the second inequality the part between brackets is missing an additional term C_0^2 t^{-\\epsilon_+}.", "- In addition, the label (1) should be on the previous inequality and it should be mentioned that e^{-x} <= 1-x+x^2 is applied for x >= 0 (otherwise it might be false).", "In the proof of Lemma 6, case 2 in the middle of p.15: - In the line of inequality (1) there is a t^{-epsilon_-} missing.", "In the next line there is a factor t^{-epsilon_-} too much.", "- In addition, the inequality e^x >= 1 + x holds for all x, so no need to mention that x > 0.", "In Lemma 1: - claim (3) should be lim_{t \\to \\infty} w(t)^\\top x_n = infinity", "- In the proof: w(t)^top x_n > 0 only holds for large enough t.", "Remarks: p.4 The claim that \"we can expect the population (or test) misclassification error of w(t) to improve\" because \"the margin of w(t) keeps improving\" is worded a little too strongly,", "because it presumes that the maximum margin solution will always have the best generalization error.", "In the proof sketch (p.3): - Why does the fact that the limit is dominated by gradients that are a linear combination of support vectors imply that w_infinity will also be a non-negative linear combination of support vectors?", "- \"converges to some limit\". Mention that you call this limit w_infinity", "Minor Issues: In (2.4): add \"for all n\".", "p.10, footnote: Shouldn't \"P_1 = X_s X_s^+\" be something like \"P_1 = (X_s^top X_s)^+\"?", "A.9: ell should be ell'", "The paper needs a round of copy editing.", "For instance: - top of p.4: \"where tilde{w} A is the unique\"", "- p.10: \"the solution tilde{w} to TO eq. A.2\"", "- p.10: \"might BOT be unique\"", "- p.10: \"penrose-moorse pseudo inverse\" -> \"Moore-Penrose pseudoinverse\"", "In the bibliography, Kingma and Ba is cited twice, with different years." ]
[ "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "request", "request", "request", "request", "request", "request", "request", "fact", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "quote", "quote", "quote", "request", "fact" ]
BJ2J7pFgf
[ "This paper presents a method for classifying Tumblr posts with associated images according to associated single emotion word hashtags.", "The method relies on sentiment pre-processing from GloVe and image pre-processing from Inception.", "My strongest criticism for this paper is against the claim that Tumblr post represent self-reported emotions and that this method sheds new insight on emotion representation", "and my secondary criticism is a lack of novelty in the method,", "which seems to be simply a combination of previously published sentiment analysis module and previously published image analysis module, fused in an output layer.", "The authors claim that the hashtags represent self-reported emotions,", "but this is not true in the way that psychologists query participants regarding emotion words in psychology studies.", "Instead these are emotion words that a person chooses to broadcast along with an associated announcement.", "As the authors point out, hashtags and words may be used sarcastically or in different ways from what is understood in emotion theory.", "It is quite common for everyday people to use emotion words this way e.g. using #love to express strong approval rather than an actual feeling of love.", "In their analysis the authors claim:", "“The 15 emotions retained were those with high relative frequencies on Tumblr among the PANAS-X scale (Watson & Clark, 1999)”.", "However five of the words the authors retain: bored, annoyed, love, optimistic, and pensive are not in fact found in the PANAS-X scale:", "Reference: The PANAS-X Scale: https://wiki.aalto.fi/download/attachments/50102838/PANAS-X-scale_spec.pdf", "Also the longer version that the authors cited:", "https://www2.psychology.uiowa.edu/faculty/clark/panas-x.pdf", "It should also be noted that the PANAS (Positive and Negative Affect Scale) scale and the PANAS-X (the “X” is for eXtended) scale are questionnaires used to elicit from participants feelings of positive and negative affect,", "they are not collections of \"core\" emotion words,", "but rather words that are colloquially attached to either positive or negative sentiment.", "For example PANAS-X includes words like:“strong” ,“active”, “healthy”, “sleepy” which are not considered emotion words by psychology.", "If the authors stated goal is \"different than the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment\" they should be aware that this is exactly what PANAS is designed to do -", "not to infer the latent emotional state of a person, except to the extent that their affect is positive or negative.", "The work of representing emotions had been an field in psychology for over a hundred years", "and it is still continuing.", "https://en.wikipedia.org/wiki/Contrasting_and_categorization_of_emotions.", "One of the most popular theories of emotion is the theory that there exist “basic” emotions: Anger, Disgust, Fear, Happiness (enjoyment), Sadness and Surprise", "(Paul Ekman, cited by the authors).", "These are short duration sates lasting only seconds.", "They are also fairly specific,", "for example “surprise” is sudden reaction to something unexpected,", "which is it exactly the same as seeing a flower on your car and expressing “what a nice surprise.”", "The surprise would be the initial reaction of “what’s that on my car? 
Is it dangerous?”", "but after identifying the object as non-threatening, the emotion of “surprise” would likely pass and be replaced with appreciation.", "The Circumplex Model of Emotions (Posner et al 2005) the authors refer to actually stands in opposition to the theories of Ekman.", "From the cited paper by Posner et al :", "\"The circumplex model of affect proposes that all affective states arise from cognitive interpretations of core neural sensations that are the product of two independent neurophysiological systems. This model stands in contrast to theories of basic emotions, which posit that a discrete and independent neural system subserves every emotion.\"", "From my reading of this paper, it is clear to me that the authors do not have a clear understanding of the current state of psychology’s view of emotion representation", "and this work would not likely contribute to a new understanding of the latent structure of peoples’ emotions.", "In the PCA result, it is not \"clear\" that the first axis represents valence,", "as \"sad\" has a slight positive on this scale", "and \"sad\" is one of the emotions most clearly associated with negative valence.", "With respect to the rest of the paper, the level of novelty and impact is \"ok, but not good enough.\"", "This analysis does not seem very different from Twitter analysis,", "because although Tumblr posts are allowed to be longer than Twitter posts,", "the authors truncate the posts to 50 characters.", "Additionally, the images do not seem to add very much to the classification.", "The authors algorithm also seems to be essentially a combination of two other, previously published algorithms.", "For me the novelty of this paper was in its application to the realm of emotion theory,", "but I do not feel there is a contribution here.", "This paper is more about classifying Tumblr posts according to emotion word hashtags than a paper that generates a new insights into emotion representation or that can infer latent emotional state." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "quote", "fact", "reference", "fact", "reference", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "reference", "fact", "reference", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "quote", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation" ]
ByRWmWAxM
[ "This paper proposes an extremely simple methodology to improve the network's performance by adding extra random perturbations (resizing/padding) at evaluation time.", "Although the paper is very basic, ", "it creates a good baseline for defending about various types of attacks and got good results in kaggle competition.", "The main merit of the paper is to study this simple but efficient baseline method extensively and shows how adversarial attacks can be mitigated by some extent.", "Cons of the paper: there is not much novel insight or really exciting new ideas presented.", "Pros: It gives a convincing very simple baseline ", "and the evaluation of all subsequent results on defending against adversaries will need to incorporate this simple defense method in addition to any future proposed defenses, ", "since it is very easy to implement and evaluate and seems to improve the defense capabilities of the network to a significant degree. ", "So I assume that this paper will be influential in the future just by the virtue of its easy applicability and effectiveness." ]
[ "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation" ]
B1eq0Hqlz
[ "The authors investigate a modified input layer that results in color invariant networks. ", "The proposed methods are evaluated on two car datasets. ", "It is shown that certain color invariant \"input\" layers can improve accuracy for test-images from a different color distribution than the training images.", "The proposed assumptions are not well motivated and seem arbitrary. ", "Why is using a permutation of each pixels' color a good idea?", "The paper is very hard to read. ", "The message is unclear ", "and the experiments to prove it are of very limited scope, i.e. one small dataset with the only experiment purportedly showing generalization to red cars.", "Some examples of specific issues:- the abstract is almost incomprehensible and it is not clear what the contributions are", "- Some references to Figures are missing the figure number, eg. 3.2 first paragraph, ", "- It is not clear how many input channels the color invariant functions use, eg. p1 does it use only one channel and hence has fewer parameters?", "- are the training and testing sets all disjoint (sec 4.3)?", "- at random points figures are put in the appendix, even though they are described in the paper and seem to show key results (eg \"tested on nored-test\")", "- Sec 4.6: The explanation for why the accuracy drops for all models is not clear. ", "Is it because the total number of training images drops? If that's the case the whole experimental setup seems flawed.", "- Sec 4.6: the authors refer to the \"order net\" beating the baseline, ", "however, from Fig 8 (right most) it appears as if all models beat the baseline. ", "In the conclusion they say that weighted order net beats the baseline on all three test sets w/o red cars in the training set. ", "Is that Fig 8 @0%? ", "The baseline seems to be best performing on \"all cars\" and \"non-red cars\"", "In order to be at an appropriate level for any publication the experiments need to be much more general in scope." ]
[ "fact", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "non-arg", "fact", "request" ]
B1RZJ1cxG
[ "The paper explores momentum SGD and an adaptive version of momentum SGD which the authors name YF (Yellow Fin).", "They compare YF to hand tuned momentumSGD and to Adam in several deep learning applications.", "I found the first part which discusses the theoretical motivation behind YF to be very confusing and misleading:", "Based on the analysis of 1-dimensional problems, the authors design a framework and an algorithm that supposedly ensures accelerated convergence.", "There are two major problems with this approach:-First: Exploring 1-dim functions is indeed a nice way to get some intuition.", "Yet, algorithms that work in the 1-dim case do not trivially generalize to high dimensions,", "and such reasoning might lead to very bad solutions.", "-Second: Accelerated GD does not benefit over GD in the 1-dim case.", "And therefore, this is not an appropriate setting to explore acceleration.", "Concretely, the definition of the generalized condition number $\\nu$, and relating it to the standard definition of the condition number $\\kappa$, is very misleading.", "This is since $\\kappa =1$ for 1-dim problems,", "and therefore accelerated GD does not have any benefits over non accelerated GD in this case.", "However, $\\nu$ might be much larger than 1 even in the 1-dim case.", "Regarding the algorithm itself: there are too many hyper-parameters (which depend on each other) that are tuned (per-dimension).", "And as I have mentioned, the design of the algorithm is inspired by the analysis of 1-dim quadratic functions.", "Thus, it is very hard for me to believe that this algorithm works in practice unless very careful fine tuning is employed.", "The authors mention that their experiments were done without tuning or with very little tuning, which is very mysterious for me.", "In contrast to the theoretical part, the experiments seems very encouraging.", "Showing YF to perform very well on several deep learning tasks without (or with very little) tuning.", "Again, this seems a bit magical or even too good to be truth.", "I suggest the authors to perform a experiment with say a qaudratic high dimensional function, which is not aligned with the axes in order to illustrate how their method behaves and try to give intuition." ]
[ "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request" ]
B1qhp-qeG
[ "The paper investigates the iterative estimation view on gated recurrent networks (GNN). ", "Authors observe that the average estimation error between a given hidden state and the last hidden state gradually decreases toward zeros. ", "This suggest that GNN are bias toward an identity mapping and learn to preserve the activation through time.", "Given this observation, authors then propose RIN, a new RNN parametrization where the hidden to hidden matrix is decomposed as a learnable weight matrix plus the identity matrix.", "Authors evaluate their RIN on the adding, sequential MNIST and the baby tasks and show that their IRNN outperforms the IRNN and LSTM models.", "Questions:- Section 2 suggests that use of the gate in GNNs encourages to learn an identity mapping. ", "Does the average iteration error behaves differently in case of a tanh-RNN ?", "- It seems from Figure 4 (a) that the average estimation error is higher for RIN than IRNN and LSTM and only decrease toward zero at the very end.", "What could explain this phenomenon?", "- While the LSTM baseline matches the results of Le et al., ", "later work such as Recurrent Batch Normalization or Unitary Evolution RNN have demonstrated much better performance with a vanilla LSTM on those tasks (outperforming both IRNN and RIN). ", "What could explain this difference in the performances?", "- Unless I am mistaken, Gated Orthogonal Recurrent Units: On Learning to Forget from Jing et al. also reports better performances for the LSTM (and GRU) baselines that outperform RIN on the baby tasks with mean performances of 58.2 and 56.0 for GRU and LSTM respectively?", "- Quality/Clarity:The paper is well written and pleasant to read", "- Originality:Looking at RNN from an iterative refinement point of view seems novel.", "- Significance:While looking at RNN from an iterative estimation is interesting, ", "the experimental part does not really show what are the advantages of the propose RIN. ", "In particular, the LSTM baseline seems to weak compared to other works." ]
[ "fact", "fact", "fact", "fact", "fact", "fact", "non-arg", "fact", "non-arg", "fact", "fact", "non-arg", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact" ]
HyM5JWhgG
[ "This paper discusses an application of survival analysis in social networks.", "While the application area seems to be pertinent, the statistics as presented in this paper are suboptimal at best.", "There is no useful statistical setup described (what is random? etc etc),", "the interplay between censoring and end-of-life is left rather fuzzy,", "and mentioned clustering approaches are extensively studied in the statistical literature in so-called frailty analysis.", "The setting is also covered in statistics in the extensive literature on repeated measurements and even time-series analysis.", "It's up to the authors discuss similarities and differences of results of the present approach and those areas.", "The numerical result is not assessing the different design decisions of the approach (why use a Kuyper loss?) in this empirical paper." ]
[ "fact", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation" ]
Hyu5lW5xf
[ "This paper proposes a method, Dual-AC, for optimizing the actor(policy) and critic(value function) simultaneously which takes the form of a zero-sum game resulting in a principled method for using the critic to optimize the actor. ", "In order to achieve that, they take the linear programming approach of solving the bellman optimality equations, outline the deficiencies of this approach, and propose solutions to mitigate those problems. ", "The discussion on the deficiencies of the naive LP approach is mostly well done. ", "Their main contribution is extending the single step LP formulation to a multi-step dual form that reduces the bias and makes the connection between policy and value function optimization much clearer without loosing convexity by applying a regularization. ", "They perform an empirical study in the Inverted Double Pendulum domain to conclude that their extended algorithm outperforms the naive linear programming approach without the improvements. ", "Lastly, there are empirical experiments done to conclude the superior performance of Dual-AC in contrast to other actor-critic algorithms. ", "Overall, this paper could be a significant algorithmic contribution, with the caveat for some clarifications on the theory and experiments. ", "Given these clarifications in an author response, I would be willing to increase the score. ", "For the theory, there are a few steps that need clarification and further clarification on novelty. ", "For novelty, it is unclear if Theorem 2 and Theorem 3 are both being stated as novel results. ", "It looks like Theorem 2 has already been shown in \"Randomized Linear Programming Solves the Discounted Markov Decision Problem in Nearly-Linear Running Time”. ", "There is a statement that “Chen & Wang (2016); Wang (2017) apply stochastic first-order algorithms (Nemirovski et al., 2009) for the one-step Lagrangian of the LP problem in reinforcement learning setting. However, as we discussed in Section 3, their algorithm is restricted to tabular parametrization”. ", "Is you Theorem 2 somehow an extension? ", "Is Theorem 3 completely new?", "This is particularly called into question due to the lack of assumptions about the function class for value functions. ", "It seems like the value function is required to be able to represent the true value function, ", "which can be almost as restrictive as requiring tabular parameterizations (which can represent the true value function). ", "This assumption seems to be used right at the bottom of Page 17, where U^{pi*} = V^*. ", "Further, eta_v must be chosen to ensure that it does not affect (constrain) the optimal solution, ", "which implies it might need to be very small. ", "More about conditions on eta_v would be illuminating. ", "There is also one step in the theorem that I cannot verify. ", "On Page 18, how is the squared removed for difference between U and Upi? ", "The transition from the second line of the proof to the third line is not clear. ", "It would also be good to more clearly state on page 14 how you get the first inequality, for || V^* ||_{2,mu}^2. ", "For the experiments, the following should be addressed.", "1. It would have been better to also show the performance graphs with and without the improvements for multiple domains.", "2. The central contribution is extending the single step LP to a multi-step formulation. ", "It would be beneficial to empirically demonstrate how increasing k (the multi-step parameter) affects the performance gains.", "3. 
Increasing k also comes at a computational cost. ", "I would like to see some discussions on this and how long dual-AC takes to converge in comparison to the other algorithms tested (PPO and TRPO).", "4. The authors concluded the presence of local convexity based on hessian inspection due to the use of path regularization. ", "It was also mentioned that increasing the regularization parameter size increases the convergence rate. ", "Empirically, how does changing the regularization parameter affect the performance in terms of reward maximization? ", "In the experimental section of the appendix, it is mentioned that multiple regularization settings were tried but their performance is not mentioned. ", "Also, for the regularization parameters that were tried, based on hessian inspection, did they all result in local convexity? ", "A bit more discussion on these choices would be helpful. ", "Minor comments:1. Page 2: In equation 5, there should not be a 'ds' in the dual variable constraint" ]
[ "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "quote", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "request", "evaluation", "request", "fact", "request", "fact", "fact", "request", "fact", "request", "request", "request" ]