Languages:
English
Multilinguality:
monolingual
Annotations Creators:
expert-generated
xyhua committed on
Commit fa73327
1 Parent(s): 738cc77

upload validation set for ampere

Files changed (1)
  1. ampere_label_val.jsonl +20 -0
ampere_label_val.jsonl ADDED
@@ -0,0 +1,20 @@
+ {"doc_id": "Hk9a7-qlG", "text": ["The submission tackles an important problem of learning and transferring multiple motor skills. ", "The approach relies on using an embedding space defined by latent variables and entropy-regularized policy gradient / variational inference formulation that encourages diversity and identifiability in latent space.", "The exposition is clear and the method is well-motivated. ", "I see no issues with the mathematical correctness of the claims made in the paper. ", "The experimental results are both instructive of how the algorithm operates (in the particle example), and contain impressive robotic results. ", "I appreciated the experiments that investigated cases where true number of tasks and the parameter T differ, showing that the approach is robust to choice of T.", "The submission focuses particularly on discrete tasks and learning to sequence discrete tasks (as training requires a one-hot task ID input). ", "I would like a bit of discussion on whether parameterized skills (that have continuous space of target location, or environment parameters, for example) can be supported in the current formulation, and what would be necessary if not.", "Overall, I believe this is in interesting piece of work at a fruitful intersection of reinforcement learning and variational inference, ", "and I believe would be of interest to ICLR community."], "labels": ["evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation"]}
+ {"doc_id": "BJSfkUNzf", "text": ["The paper addresses the problem of tensor decomposition ", "which is relevant and interesting. ", "The paper proposes Tensor Ring (TR) decomposition which improves over and bases on the Tensor Train (TT) decomposition method. ", "TT decomposes a tensor in to a sequences of latent tensors where the first and last tensors are a 2D matrices. ", "The proposed TR method generalizes TT in that the first and last tensors are also 3rd-order tensors instead of 2nd-order. ", "I think such generalization is interesting ", "but the innovation seems to be very limited. ", "The paper develops three different kinds of solvers for TR decomposition, i.e., SVD, ALS and SGD. ", "All of these are well known methods. ", "Finally, the paper provides experimental results on synthetic data (3 oscillated functions) and image data (few sampled images). ", "I think the paper could be greatly improved by providing more experiments and ablations to validate the benefits of the proposed methods.", "Please refer to below for more comments and questions.", "Pros:1. The topic is interesting.", "2. The generalization over TT makes sense.", "Cons: 1. The writing of the paper could be improved and more clear: ", "the conclusions on inner product and F-norm can be integrated into \"Theorem 5\". ", "And those \"theorems\" in section 4 are just some properties from previous definitions; ", "they are not theorems. ", "2. The property of TR decomposition is that the tensors can be shifted (circular invariance). ", "This is an interesting property and it seems to be the major strength of TR over TT. ", "I think the paper could be significantly improved by providing more applications of this property in both theory and experiments.", "3. As the number of latent tensors increase, the ALS method becomes much worse approximation of the original optimization. ", "Any insights or results on the optimization performance vs. the number of latent tensors?", "4. Also, the paper mentions Eq. 5 (ALS) is optimized by solving d subproblems alternatively. ", "I think this only contains a single round of optimization. ", "Should ALS be applied repeated (each round solves d problems) until convergence?", "5. What is the memory consumption for different solvers?", "6. SGD also needs to update at least d times for all d latent tensors. ", "Why is the complexity O(r^3) independent of the parameter d?", "7. The ALS is so slow (if looking at the results in section 5.1), which becomes not practical. ", "The experimental part could be improved by providing more results and description about a guidance on how to choose from different solvers.", "8. What does \"iteration\" mean in experimental results such as table 2? ", "Different algorithms have different cost for \"each iteration\" ", "so comparing that seems not fair. ", "The results could make more sense by providing total time consumptions and time cost per iteration. ", "also applies to table 4.", "9. Why is the \\epsion in table 3 not consistent? ", "Why not choose \\epsion = 9e-4 and \\epsilon=2e-15 for tensorization?", "10. Also, table 3 could be greatly improved by providing more ablations such as results for (n=16, d=8), (n=4, d=4), etc. ", "That could help readers to better understand the effect of TR.", "11. Section 5.3 could be improved by providing a curve (compression vs. error) instead of just providing a table of sampled operating points.", "12. The paper mentions the application of image representation but only experiment on 32x32 images. ", "How does the proposed method handle large images? ", "Otherwise, it does not seem to be a practical application.", "13. Figure 5: Are the RSE measures computed over the whole CIFAR-10 dataset or the displayed images?", "Minor: - Typo: Page 4 Line 7 \"Note that this algorithm use the similar strategy\": use -> uses"], "labels": ["fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "request", "non-arg", "evaluation", "evaluation", "request", "request", "fact", "fact", "fact", "evaluation", "request", "fact", "non-arg", "fact", "fact", "request", "request", "fact", "request", "evaluation", "request", "request", "fact", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "fact", "request", "evaluation", "request", "request"]}
+ {"doc_id": "r18RxrXlG", "text": ["The authors present Hilbert-CNN, a convolutional neural network for DNA sequence classification. ", "Unlike existing methods, their model does not use the raw one-dimensional (1D) DNA sequence as input, but two-dimensional (2D) images obtained by mapping sequences to images using spacing-filling Hilbert-Curves. ", "They further present a model (Hilbert-CNN) that is explicitly designed for Hilbert-transformed DNA sequences. ", "The authors show that their approach can increase classification accuracy and decrease training time when applied to predicting histone-modification marks and splice junctions. ", "Major comments=============1. The motivation of transforming sequences into images is unclear ", "and claimed benefits are not sufficiently supported by experiments. ", "The essence of deep neural networks is to learn a hierarchy of features from the raw data instead of engineering features manually. ", "Using space filling methods such as Hilbert-curves to transform (DNA) sequences into images can be considered as unnecessary feature-engineering. ", "The authors claim that \u2018CNNs have proven to be most powerful when operating on multi-dimensional input, such as in image classification\u2019, which is wrong. ", "Sequence-based convolutional and recurrent models have been successfully applied for modeling natural languages (translation, sentiment classification, \u2026), acoustic signals (speech recognition, audio generation), or biological sequences (e.g. predicting various epigenetic marks from DNA as reviewed in Angermueller et al). ", "They further claim that their method can \u2018better take the spatial features of DNA sequences into account\u2019 and can better model \u2018long-term interactions\u2019 between distant regions. ", "This is not obvious ", "since Hilbert-curves map adjacent sequence characters to pixels that are close to each other as described by the authors, but distant characters to distant pixels. ", "Hence, 2D CNN must be deep enough for modeling interactions between distant image features, in the same way as a 1D CNN.", "Transforming sequences to images has several drawbacks. ", "1) Since the resulting images have a small width and height but many channels, ", "existing 2D CNNs such as ResNet or Inception can not be applied, ", "which also required the authors to design a specific model (Hilbert-CNN). ", "2) Hilbert-CNN requires more memory due to empty image regions. ", "3) Due to the high number of channels, convolutional filters have more parameters. ", "4) The sequence-to-image transformation makes model-interpretability hard, which is in particular important in biology. ", "For example, motifs of the first convolutional layers can not be interpreted as sequence motifs (as described in Angermueller et al) ", "and it is unclear how to analyze the influence of sequence characters using attention or gradient-based methods.", "The authors should more clearly motivate their model in the introduction, tone-down the benefit of sequence-to-image transformations, and discuss drawbacks of their model. ", "This requires major changes of introduction and discussion.", "2. The authors should more clearly describe which and how they optimized hyper-parameters. ", "The authors should optimize the most important hyper-parameters of their model (learning rate, batch size, weight decay, max vs. average pooling, ELU vs. ReLU, \u2026) and baseline models on a holdout validation set. ", "The authors should also report the validation accuracy for different sequence lengths, k-mer sizes, and space filling functions. ", "Can their model be applied to longer sequences (>= 1kbp) which had been shown to improve performance (e.g. 10.1101/gr.200535.115)? ", "Does Figure 4 show the performance on the training, validation, or test set?", "3. It is unclear if the performance gain is due the proposed sequence-to-image transformation, or due to the proposed network architecture (Hilbert-CNN). ", "It is also unclear if Hilbert-CNNs are applicable to DNA sequence classification tasks beyond predicting chromatin states and splice junctions. ", "To address these points, the authors should compare Hilbert-CNN to models of the same capacity (number of parameters) and optimize hyper-parameters (k-mer size, convolutional filter size, learning rate, \u2026) in the same way as they did for Hilbert-CNN. ", "The authors should report the number of parameters of all models (Hilbert-CNN, Seq-CNN, 1D-sequence-CNN (Table 5), and LSTM (Table 6), \u2026) in an additional table. ", "The authors should also compare Hilbert-CNN to the DanQ architecture on predicting epigenetic markers using the same dataset as reported in the DanQ publication (DOI: 10.1093/nar/gkw226). ", "The authors should also compare Hilbert-CNNs to gapped-kmer SVM, a shallow model that had been successfully applied for genomic prediction tasks.", "4. The authors should report the AUC and area under precision-recall curve (APR) in additional to accuracy (ACC) in Table 3.", "5. It is unclear how training time was measured for baseline models (Seq-CNN, LSTM, \u2026). ", "The authors should use the same early stopping criterion as they used for training Hilber-CNNs. ", "The authors should also report the training time of SVM and gkm-SVM (see comment 3) in Table 3.", "Minor comments=============1. The authors should avoid uninformative adjectives and clutter throughout the manuscript, for example \u2018DNA is often perceived\u2019, \u2018Chromatin can assume\u2019, \u2018enlightening\u2019, \u2018very\u2019, \u2018we first have to realize\u2019, \u2018do not mean much individually\u2019, \u2018very much like the tensor\u2019, \u2018full swing\u2019, \u2018in tight communication\u2019, \u2018two methods available in the literature\u2019.", "The authors should point out in section two that k-mers can be overlapping.", "2. Section 2.1: One-hot vectors is not the only way for embedding words. ", "The authors should also mention Glove and word2vec. ", "Similar approaches had been applied to protein sequences ", "(DOI: 10.1371/journal.pone.0141287)", "3. The authors should more clearly describe how Hilbert-curves map sequences to images and how images are cropped. ", "What does \u2018that is constructed in a recursive manner\u2019 mean? ", "Simply cropping the upper half of Figure 1c would lead to two disjoint sequences. ", "What is the order of Figure 1e?", "4. The authors should consistently use \u2018channels\u2019 instead of \u2018full vector of length\u2019 to denote the dimensionality of image pixels.", "5. The authors should use \u2018Batch norm\u2019 instead of \u2018BN\u2019 in Figure 2 for clarification.", "6. Hilber-CNN is similar to ResNet ", "(DOI: 10.1371/journal.pone.0141287), ", "which consists of multiple \u2018residual blocks\u2019, where each block is a sequence of \u2018residual units\u2019. ", "A \u2018computational block\u2019 in Hilbert-CNN contains two parallel \u2018residual blocks\u2019 (Figure 3) instead of a sequence of \u2018residual units\u2019. ", "The authors should use \u2018residual block\u2019 instead of \u2018computational block\u2019, and \u2018residual units\u2019 as in the original ResNet publication. ", "The authors should also motivate why two residual units/blocks are applied parallely instead of sequentially.", "7. Caption table 1: the authors should clarify if \u2018Output size\u2019 is \u2018height, width, channels\u2019, and explain the notation in \u2018Description\u2019 (or refer to the text.)"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "request", "request", "request", "request", "request", "evaluation", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "request", "request", "request", "fact", "request", "fact", "reference", "request", "request", "fact", "request", "request", "request", "evaluation", "reference", "fact", "fact", "request", "request", "request"]}
+ {"doc_id": "H1NffmKgz", "text": ["This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS). ", "The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, ", "and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. ", "Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. ", "The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. ", "They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10. ", "This paper is relatively well-written and clear, ", "and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. ", "Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.", "The issue of last iterate versus average iterate for non convex-concave problems is also presented well. ", "The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting ", "as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. ", "Moreover, the result is only at best a motivation for using OMD in WGAN training ", "since the WGAN optimization problem is not a bilinear game. ", "The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, ", "although they seem less compelling than the prior discussion in the paper would suggest. ", "In particular, they are matched by SGD with momentum when evaluated by last epoch performance ", "(albeit while being less sensitive to learning rates). ", "OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, ", "but there doesn't seem to be even an attempt at explaining this in the paper. ", "I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, ", "and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, ", "despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. ", "This gives the impression that the experimental results might have been at least slightly \"gamed\". ", "For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality.", "Specific comments:Page 1, \"no-regret dynamics in zero-sum games can very often lead to limit cycles\": ", "I don't think limit cycles are actually ever formally defined in the entire paper. ", "Page 3, \"standard results in game theory and no-regret learning\": ", "These results should be either proven or cited.", "Page 3: Don't the parameter spaces need to be bounded for these convergence results to hold? ", "Page 4, \"it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm\": ", "For completeness, this should probably either be (quickly) proven or a reference should be provided.", "Page 5, \"the unique equilibrium of the above game is...for the discriminator to choose w=0\": ", "Why is w=0 necessary here?", "Page 6, \"We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A\": ", "Why is this true? ", "This should either be proven or cited.", "Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. ", "In the Initialization section, it appears simply to be out of convenience.", "Page 6, Theorem 1: It should be explicitly stated that this result doesn't provide a convergence rate, in contrast to the existing OMD results cited in the paper. ", "Page 7, \"we considered momentum, Nesterov momentum and AdaGrad\": ", "Why isn't Adam used in this section if it is used in later experiments?", "Page 7-8, \"When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.\": ", "Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? ", "None of the theoretical arguments presented earlier in the paper seem to even hint at this. ", "The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. ", "However, this doesn't even really hold, ", "since there exist learning rates in which SGD with momentum matches the performance of SOMD.", "Page 8, \"Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants\": ", "Learning rate sensitivity doesn't seem to be touched upon in the earlier discussion. ", "Can these results be explained by theory?", "Page 8, \"we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training\": ", "These results don't mean much without error bars.", "Page 8, \"we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....\": ", "Why didn't this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times?"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "quote", "fact", "quote", "request", "fact", "quote", "request", "quote", "request", "quote", "evaluation", "request", "request", "evaluation", "request", "quote", "evaluation", "quote", "request", "evaluation", "fact", "fact", "fact", "quote", "fact", "non-arg", "quote", "evaluation", "quote", "non-arg"]}
+ {"doc_id": "ryOWEcdlM", "text": ["This paper studies the critical points of shallow and deep linear networks.", "The authors give a (necessary and sufficient) characterization of the form of critical points and use this to derive necessary and sufficient conditions for which critical points are global optima.", "Essentially this paper revisits a classic paper by Baldi and Hornik (1989) and relaxes a few requires assumptions on the matrices.", "I have not checked the proofs in detail but the general strategy seems sound.", "While the exposition of the paper can be improved", "in my view this is a neat and concise result and merits publication in ICLR.", "The authors also study the analytic form of critical points of a single-hidden layer ReLU network.", "However, given the form of the necessary and sufficient conditions the usefulness of of these results is less clear.", "Detailed comments:- I think in the title/abstract/intro the use of Neural nets is somewhat misleading as neural nets are typically nonlinear.", "This paper is mostly about linear networks.", "While a result has been stated for single-hidden ReLU networks.", "In my view this particular result is an immediate corollary of the result for linear networks.", "As I explain further below given the combinatorial form of the result, the usefulness of this particular extension to ReLU network is not very clear.", "I would suggest rewording title/abstract/intro", "- Theorem 1 is neat, well done!", "- Page 4 p_i\u2019s in proposition 1", "From my understanding the p_i have been introduced in Theorem 1", "but given their prominent role in this proposition they merit a separate definition (and ideally in terms of the A_i directly).", "- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5 Are these characterizations computable i.e. given X and Y can one run an algorithm to find all the critical points or at least the parameters used in the characterization p_i, V_i etc?", "- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5 Would recommend a better exposition why these theorems are useful.", "What insights do you gain by knowing these theorems etc.", "Are less sufficient conditions that is more intuitive or useful.", "(an insightful sufficient condition in some cases is much more valuable than an unintuitive necessary and sufficient one).", "- Page 5 Theorem 2 Does this theorem have any computational implications?", "Does it imply that the global optima can be found efficiently, e.g. are saddles strict with a quantifiable bound?", "- Page 7 proposition 6 seems like an immediate consequence of Theorem 1", "however given the combinatorial nature of the K_{I,J} it is not clear why this theorem is useful.", "e.g . back to my earlier comment w.r.t. Linear networks given Y and X can you find the parameters of this characterization with a computationally efficient algorithm?"], "labels": ["fact", "fact", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "evaluation", "request", "request", "fact", "evaluation", "request"]}
+ {"doc_id": "HkJ6DWtgf", "text": ["This paper studies a new architecture DualAC. ", "The author give strong and convincing justifications based on the Lagrangian dual of the Bellman equation ", "(although not new, introducing this as the justification for the architecture design is plausible).", "There are several drawbacks of the current format of the paper:", "1. The algorithm is vague. ", "Alg 1 line 5: 'closed form': ", "there is no closed form in Eq(14). ", "It is just an MC approximation.", "line 6: Decay O(1/t^\\beta). ", "This is indeed vague albeit easy to understand. ", "The algorithm requires that every step is crystal clear.", "2. Also, there are several format error which may be due to compiling, e.g., line 2 of Abstract,'Dual-AC ' (an extra space). ", "There are many format errors like this throughout the paper. ", "The author is suggested to do a careful format check.", "3. The author is suggested to explain more about the necessity of introducing path regularization and SDA. ", "The current justification is reasonable but too brief.", "4. The experimental part is ok to me, ", "but not very impressive.", "Overall, this seems to be a nice paper to me."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "quote", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation"]}
+ {"doc_id": "rkW8HjOlz", "text": ["The paper is easy to read for a physicist, ", "but I am not sure how useful it would be for ICLR... ", "it is not clear for me it there is an interest for quantum problems in this conference. ", "This is something I will let to the Area Chair to deceede. ", "Other than this, the paper is interesting, certainly correct, and provides a nice perspective on the future of learning with quantum computers. ", "I like the quantum \"boltzmann machine\" problems. ", "I feel, however, but it might be a bit far from the main interest of the conference.", "Comments:", "* What the authors called \"Free energy-based reinforcement learning\" seems to me just the minimization / maximiation of the free energy. ", "This is simply maximum likelihood applied to the free energy ", "and I think that calling it \"reinforcement learning\" is not only wrong, but also is very confusing, given this is usually reserved to an entirely different learning process.", "* While i liked the introduction of the quantum Boltzmann machine, I would be happy to learn what they can do? ", "Are these useful, for instance, to study correlated fermions/bosons? ", "The paper does not explain why one should be concerns with these devices.", "* The fact that the simulation on a classical computer agrees with the one on a quantum computer is promising, ", "but I would say that this shows that, so far, there is not yet a clear advantage in using a quantum computer. ", "This might change, but in the mean time, what is the benefits for the ICLR community?"], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "non-arg", "fact", "evaluation", "evaluation", "non-arg"]}
+ {"doc_id": "BJTS11qlz", "text": ["The paper proposes a method for learning object representations from pixels and then use such representations for doing reinforcement learning. ", "This method is based on convnets that map raw pixels to a mask and feature map. ", "The mask contains information about the presence/absence of objects in different pixel locations and the feature map contains information about object appearance. ", "I believe that the current method can only learn and track simple objects in a constant background, a problem which is well-solved in computer vision. ", "Specifically, a simple method such as \"background subtraction\" can easily infer the mask (the outlying pixels which correspond to moving objects) while simple tracking methods (see a huge literature over decades on computer vision) can allow to track these objects across frames. ", "The authors completely ignore all this previous work ", "and their \"related work\" section starts citing papers from 2016 and onwards! ", "Is it any benefit of learning objects with the current (very expensive) method compared to simple methods such as \"background subtraction\"? ", "Furthermore, the paper is very badly written ", "since it keeps postponing the actual explanations to later sections (while these sections eventually refer to the appendices). ", "This makes reading the paper very hard. ", "For example, during the early sections you keep referring to a loss function which will allow for learning the objects, but you never really give the form of this loss (which you should as soon as you mentioning it) ", "and the reader needs to search into the appendices to find out what is happening. ", "Also, experimental results are very preliminary and not properly analyzed. ", "For example the results in Figure 3 are unclear and need to be discussed in detail in the main text."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request"]}
+ {"doc_id": "HJge1dvgz", "text": ["Paper summary: The authors propose a number of tricks to enable training policies for pick and place style tasks using a combination of GAIL-based imitation learning and hand-specified rewards, as well as use of unobserved state information during training and hand-designed curricula.", "The results demonstrate manipulation policies for stacking blocks and moving objects, as well as preliminary results for zero-shot transfer from simulation to a real robot for a picking task and an attempt at a stacking task.", "Review summary: The paper proposes a limited but interesting contribution that will be especially of interest to practitioners,", "but the scope of the contribution is somewhat incremental in light of recent work,", "and the results, while interesting, could certainly be better.", "In the balance, I think the paper should be accepted,", "because it will be of value to practitioners, and I appreciate the detail and real-world experiments.", "However, some of the claims should be revised to better reflect what the paper actually accomplishes:", "the contribution is a bit limited in places,", "but that's *OK* -- the authors should just be up-front about it.", "Pros: - Interesting tasks that combine imitation and reinforcement in a logical (but somewhat heuristic) way", "- Good simulated results on a variety of pick-and-place style problems", "- Some initial attempt at real-world transfer that seems promising, but limited", "- Related work is very detailed and I think many will find it to be a very valuable overview", "Cons:- Some of the claims (detailed below) are a bit excessive in my opinion", "- The paper would be better if it was scoped more narrowly", "- Contribution is a bit incremental and somewhat heuristic", "- The experimental results are difficult to interpret in simulation", "- The real-world experimental results are not great", "- There are a couple of missing citations (but overall related work is great)", "Detailed discussion of potential issues and constructive feedback: > \"Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved.\"", ">> This claim is a bit peculiar.", "Picking up and placing objects is certainly not \"unsolved,\" there are many examples.", "If you want image-based pick and place with demonstrations for example, see Chebotar '17 (not cited).", "If you want stacking blocks, see Nair '17.", "While it's true that there is a particular combination of factors that doesn't exactly appear in prior work, the statement the authors make is way too strong.", "Chebotar '17 shows picking and placing a real-world objective with a much higher success rate than reported here, without simulation.", "Nair '17 shows a much harder stacking task, but without images --", "would that method have worked just as well with image-based distillation?", "Very likely.", "Rajeswaran '17 shows tasks that arguably are much harder.", "Maybe a more honest statement is that this paper proposes some tasks that prior methods don't show, and some prior methods show tasks that the proposed method can't solve.", "But as-is, this statement misrepresents prior work.", "> Previous RL-based robot manipulation policies (Nair et al., 2017; Popov et al., 2017) largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper.", "This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable.", ">> This is a funny statement.", "Some use images, some don't.", "There is a ton of prior work on RL-based robot manipulation that does use images.", "The current paper does use object state information during training, which some prior works manage to avoid.", "The comments about Cartesian control are a bit peculiar...", "the proposed method controls fingers, but the hand is simple.", "Some prior works have simpler grippers (e.g., Nair) and", "some have much more complex hands (e.g., Rajeswaran).", "So this one falls somewhere in the middle.", "That's fine, but again, this statement overclaims a bit.", "> To sidestep the constraints of training on real hardware we embrace the sim2real paradigm which has recently shown promising results", "(James et al., 2017; Rusu et al., 2016a).", ">> Probably should cite Sadeghi et al. and Tobin et al. in regard to randomization, both of which precede James '17.", "> we can, during training, exploit privileged information about the true system state", ">> This was done also in Pinto et al. and many of the cited GPS papers", "> our policies solve the tasks that the state-of-the-art reinforcement and imitation learning cannot solve", ">> I don't think this statement is justified without much wider comparisons --", "the authors don't attempt any comparisons to prior work, such as Chebotar '17 (which arguably is closest in terms of demonstrated behaviors), Nair '17 (which is also close but doesn't use images, though it likely could).", "> An alternative strategy for dealing with the data demand is to train in simulation and transfer", ">> Aside from previously mentioned citations, should probably cite Devin \"Towards Adapting Deep Visuomotor Representations\"", "> Sec 3.2.1", ">> This method seems a bit heuristic.", "It's logical, but can you say anything about what this will converge to?", "GAIL will try to match the demonstration distribution, and RL will try to maximize expected reward.", "What will this method do?", "> Experiments", ">> Would it be possible to indicate some measure of success rate for the simulated experiments?", "As-is, it's hard to tell how well either the proposed method or the baselines actually work.", "> Transfer", ">> My reading of the transfer experiments is that they are basically unsuccessful.", "Picking up a rectangular object with 80% success rate is not very good.", "The stacking success rate is too low to be useful.", "I do appreciate the authors trying out their method on a real robotic platform,", "but perhaps the more honest assessment of the outcome of these experiments is that the approach didn't work very well,", "and more research is needed.", "Again, it's *OK* to say this!", "Part of the purpose of publishing a paper is to stimulate future research directions.", "I think the transfer experiments should definitely be kept, but the authors should discuss the limitations to help future work address them, and present the transfer appropriately in the intro.", "> Diverse Visuomotor Skills", ">> I think this is a peculiar thing to put in the title.", "Is the implication that prior work is not diverse?", "Arguably several prior papers show substantially more diverse skills.", "It seems that all the skills here are essentially pick and place skills, which is fine (these are interesting skills),", "but the title seems like a peculiar jab at prior work not being \"diverse\" enough, which is simply misleading."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "evaluation", "fact", "fact", "non-arg", "evaluation", "fact", "request", "evaluation", "quote", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "reference", "request", "quote", "fact", "quote", "evaluation", "fact", "quote", "request", "non-arg", "evaluation", "request", "fact", "non-arg", "non-arg", "non-arg", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "non-arg", "request", "non-arg", "evaluation", "non-arg", "fact", "evaluation", "evaluation"]}
+ {"doc_id": "BJGc-k9xG", "text": ["This paper provides the analysis of empirical risk landscape for GENERAL deep neural networks (DNNs). ", "Assumptions are comparable to existing results for OVERSIMPLIFED shallow neural networks. ", "The main results analyzed: 1) Correspondence of non-degenerate stationary points between empirical risk and the population counterparts. ", "2) Uniform convergence of the empirical risk to population risk. ", "3) Generalization bound based on stability. ", "The theory is first developed for linear DNNs and then generalized to nonlinear DNNs with sigmoid activations.", "Here are two detailed comments: 1) For deep linear networks with squared loss, Kawaguchi 2016 has shown that the global optima are the only non-degerenate stationary points. ", "Thus, the obtained non-degerenate stationary deep linear network should be equivalent to the linear regression model Y=XW. ", "Should the risk bound only depends on the dimensions of the matrix W?", "2) The comparison with Bartlett & Maass\u2019s (BM) work is a bit unfair, ", "because their result holds for polynomial activations while this paper handles linear activations. ", "Thus, the authors need to refine BM's result for comparison."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "non-arg", "evaluation", "fact", "request"]}
+ {"doc_id": "HJmBCpKeG", "text": ["The ms applies an LSTM on ECoG data and studies tranfer between subjects etc.", "The data includes only few samples per class.", "The validation procedure to obtain the model accuray is a bit iffy.", "The ms says: The test data contains 'at least 2 samples per class'.", "Data of the type analysed is highly dependend,", "so it is not unclear whether this validation procedure will not provide overoptimistic results.", "Currently, I do not see evidence for a stable training procedure in the ms.", "I would be curious also to see a comparison to a k-NN classifier using embedded data to gauge the problem difficulty.", "Also, the paper does not really decide whether it is a neuroscience contribution or an ML one.", "If it were a neuroscience contribution, then it would be important to analyse and understand the LSTM representation and to put it into a biological context", "fig 5B is a first step in this direction.", "If it where a ML contribution, then there should be a comprehensive analysis that indeed the proposed architecture using the 2 steps is actually doing the right thing, i.e. that the method converges to the truth if more and more data is available.", "There is also some initial experiments in fig 3A.", "Currently, I find the paper somewhat unsatisfactory and thus preliminary."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "evaluation", "fact", "request", "fact", "evaluation"]}
+ {"doc_id": "H1GhmwqgG", "text": ["Summary: - This paper proposes a hand-designed network architecture on a graph of object proposals to perform soft non-maximum suppression to get object count.", "Contribution: - This paper proposes a new object counting module which operates on a graph of object proposals.", "Clarity: - The paper is well written and clarity is good. ", "Figure 2 & 3 helps the readers understand the core algorithm.", "Pros: - De-duplication modules of inter and intra object edges are interesting.", "- The proposed method improves the baseline by 5% on counting questions.", "Cons: - The proposed model is pretty hand-crafted. ", "I would recommend the authors to use something more general, like graph convolutional neural networks (Kipf & Welling, 2017) or graph gated neural networks (Li et al., 2016).", "- One major bottleneck of the model is that the proposals are not jointly finetuned. ", "So if the proposals are missing a single object, this cannot really be counted. ", "In short, if the proposals don\u2019t have 100% recall, then the model is then trained with a biased loss function which asks it to count all the objects even if some are already missing from the proposals. ", "The paper didn\u2019t study what is the recall of the proposals and how sensitive the threshold is.", "- The paper doesn\u2019t study a simple baseline that just does NMS on the proposal domain.", "- The paper doesn\u2019t compare experiment numbers with (Chattopadhyay et al., 2017).", "- The proposed algorithm doesn\u2019t handle symmetry breaking when two edges are equally confident (in 4.2.2 it basically scales down both edges). ", "This is similar to a density map approach and the problem is that the model doesn\u2019t develop a notion of instance.", "- Compared to (Zhou et al., 2017), the proposed model does not improve much on the counting questions.", "- Since the authors have mentioned in the related work, it would also be more convincing if they show experimental results on CL", "Conclusion: - I feel that the motivation is good, but the proposed model is too hand-crafted. ", "Also, key experiments are missing: 1) NMS baseline 2) Comparison with VQA counting work (Chattopadhyay et al., 2017). ", "Therefore I recommend reject.", "References: - Kipf, T.N., Welling, M., Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.", "- Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R. Gated Graph Sequence Neural Networks. ICLR 2016."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "evaluation", "reference", "reference"]}
+ {"doc_id": "S1dRXMqxG", "text": ["In this paper authors are summarizing their work on building a framework for automated neural network (NN) construction across multiple tasks simultaneously.", "They present initial results on the performance of their framework called Multitask Neural Model Search (MNMS) controller.", "The idea behind building such a framework is motivated by the successes of recently proposed reinforcement based approaches for finding the best NN architecture across the space of all possible architectures.", "Authors cite the Neural Architecture Search (NAS) framework as an example of such a framework that yields better results compared to NN architectures configured by humans.", "Overall I think that the idea is interesting and the work presented in this paper is very promising.", "Given the depth of the empirical analysis presented the work still feels that it\u2019s in its early stages.", "In its current state and format the major issue with this work is the lack of more in-depth performance analysis which would help the reader draw more solid conclusions about the generalization of the approach.", "Authors use two text classification tasks from the NLP domain to showcase the benefits of their proposed architecture.", "It would be good if they could expand and analyze how well does their framework generalizes across other non-binary tasks, tasks in other domains and different NNs.", "This is especially the case for the transfer learning task.", "In the NAS overview section, readers would benefit more if authors spend more time in outlining the RL detail used in the original NAS framework instead of Figure 1 which looks like a space filler.", "Across the two NLP tasks authors show that MNMS models trained simultaneously give better performance than hand tuned architectures.", "In addition, on the transfer learning evaluation approach they showcase the benefit of using the proposed framework in terms of the initially retrieved architecture and the number of iterations required to obtain the best performing one.", "For better clarity figures 3 and 5 should be made bigger.", "What is LSS in figure 4?"], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "request", "fact", "fact", "request", "request"]}
+ {"doc_id": "ryBakJUlz", "text": ["This is a dense, rich, and impressive paper on rapid meta-learning. ", "It is already highly polished, ", "so I have mostly minor comments.", "Related work: I think there is a distinction between continual and life-long learning, ", "and I think that your proposed setup is a form of continual learning (see Ring \u201894/\u201897). ", "Given the proliferation of terminology for very related setups, ", "I\u2019d encourage you to reuse the old term.", "Terminology: I find it confusing which bits are \u201cmeta\u201d and which are not, ", "and the paper could gain clarity by making this consistent. ", "In particular, it would be good to explicitly name the \u201cmeta-loss\u201d (currently the unnamed triple expectation in (3)). ", "By definition, then, the \u201cmeta-gradient\u201d is the gradient of the meta-loss -- and not the one in (2), which is the gradient of the regular loss.", "Notation: there\u2019s redundancy/inconsistency in the reward definition: ", "pick either R_T or \\bold{r}, not both, and maybe include R_T in the task tuple definition? ", "It is also confusing that \\mathcal{R} is a loss, not a reward (and is minimized) ", "-- maybe use another symbol?", "A question about the importance sampling correction: given that this spans multiple (long) trajectories, don\u2019t the correction weights become really small in practice? ", "Do you have some ballpark numbers?", "Typos:- \u201cevent their learning\u201d", "- \u201cin such setting\u201d", "- \u201cexperience to for\u201d"], "labels": ["evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "request", "request", "fact", "evaluation", "request", "evaluation", "request", "non-arg", "non-arg", "fact", "fact", "fact"]}
+ {"doc_id": "SkNxPOYlf", "text": ["The analyses of this paper (1) increasing the feature norm of correctly-classified examples induce smaller training loss, (2) increasing the feature norm of mis-classified examples upweight the contribution from hard examples, are interesting. ", "The reciprocal norm loss seems to be reasonable idea to improve the CNN learning based on the analyses. ", "However, the presentation of this paper need to be largely improved. ", "For example, Figure 3 seems to be not relevant to Property2 ", "and may be show the feature norm is lower when the samples is hard example. ", "Therefore, the author used reciprocal norm loss which increases feature norm as shown in Figure 4. ", "However, both Figures are not explained in the main text, ", "and thus hard to understand the relation of Figure 3 and 4. ", "The author should refer all Figures and Tables. ", "Other issues are: -Large-margin Soft max in Figure 2 is not explained in the introduction section. ", "-In Eq.(7), P_j^I is not defined. ", "- In the Property3, The author wrote \u201c where r is lower bound of feature norm\u201d. ", " However, r is not used.", "-In the experimental results, \u201cRN\u201d is not defined.", "-In the Table3, the order of \\lambda should be increasing or decreasing order. ", "- Table 5 is not referred in the main text."], "labels": ["evaluation", "evaluation", "request", "evaluation", "fact", "fact", "fact", "evaluation", "request", "fact", "fact", "quote", "fact", "fact", "request", "fact"]}
+ {"doc_id": "rkA0vi8gz", "text": ["This work exploits the causality principle to quantify how the weights of successive layers adapt to each other.", "Some interesting results are obtained, such as \"enforcing more independence between successive layers of generators may lead to better performance and modularity of these architectures\" .", "Generally, the result is interesting and the presentation is easy to follow.", "However, the proposed approach and the experiments are not convincible enough.", "For example, it is hard to obtain the conclusion \"more independence lead to better performance\" from the experimental results.", "Maybe more justifications are needed."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "request"]}
+ {"doc_id": "SkmXSLZEM", "text": ["This paper proposes a novel image/album geolocation algorithm.", "This is based on the recent PlaNet approach, with several extensions, including a better mesh representation for discretization and the time-aware prediction.", "However, contributions are limited to me.", "Pros: - Clearly written", "- Good results", "- Interesting empirical analysis between UTC time and geo-location", "Contribution: - Triangulation based meshify vs quad-tree meshification was claimed to be one of the major contributions.", "It is an incremental improvement towards Wayand et al.", "However I don\u2019t think this is significant enough for an ICLR paper.", "- Incorporating the time-ordering into album based geo-localization is considered a contribution on the application side.", "But in computer vision this is not the first one to exploit this information (see [a])", "while the technical approach to encode this information is quite standard (LSTM).", "Improvement:- Compared against Wayand et al. and Vo et al. the quantitative results are not very impressive.", "Minor: Some choices in feature engineering are not explained well:", "e.g. In section 3.2, why you only choose top 10 maximum entries,", "why l1-norm is appended as a feature?", "Are those choice made because of validation performance?", "Minor writing: Fig. 1 \u201c01:00 than\u201d", "Fig. 1 \u201cto make a prediction\u201d", "\u201cFig. 3.1.1\u201d Table 5. Why not bold the best performance?", "Acknowledgements shouldn\u2019t be put in double-blind reviewing process", "[A] Evangelos Kalogerakis, Olga Vesselova, James Hays, Alexei A. Efros, Aaron Hertzmann, \"Image Sequence Geolocation with Human Travel Priors\", Proceedings of the IEEE Internaltional Conference on Computer Vision Recognition (ICCV), 2009."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "reference"]}
+ {"doc_id": "S1jAR0Klf", "text": ["The authors present a model for unsupervised NMT which requires no parallel corpora between the two languages of interest.", "While the results are interesting I find very few original ideas in this paper.", "Please find my comments/questions/suggestions below: 1) The authors mention that there are 3 important aspects in which their model differs from a standard NMT architecture.", "All the 3 differences have been adapted from existing works.", "The authors clearly acknowledge and cite the sources.", "Even sharing the encoder using cross lingual embeddings has been explored in the context of multilingual NER", "(please see https://arxiv.org/abs/1607.00198).", "Because of this I find the paper to be a bit lacking on the novelty quotient.", "Even backtranslation has been used successfully in the past (as acknowledged by the authors).", "Unsupervised MT in itself is not a new idea (again clearly acknowledged by the authors).", "2) I am not very convinced about the idea of denoising.", "Specifically, I am not sure if it will work for arbitrary language pairs.", "In fact, I think there is a contradiction even in the way the authors write this.", "On one hand, they want to \"learn the internal structure of the languages involved\"", "and on the other hand they deliberately corrupt this structure by adding noise.", "This seems very counter-intuitive and in fact the results in Table 1 suggest that it leads to a drop in performance.", "I am not very sure that the analogy with autoencoders holds in this case.", "3) Following up on the above question, the authors mention that \"We emphasize, however, that it is not possible to use backtranslation alone without denoising\".", "Again, if denoising itself leads to a drop in the performance as compared to the nearest neighbor baseline then why use backtranslation in conjunction with denoising and not in conjunction with the baseline itself.", "4) This point is more of a clarification and perhaps due to my lack of understanding.", "Backtranslation to generate a pseudo corpus makes sense only after the model has achieved a certain (good) performance.", "Can you please provide details of how long did you train the model (with denoising?) before producing the backtranslations ?", "5) The authors mention that 100K parallel sentences may be insufficient for training a NMT system.", "However, this size may be decent enough for a PBSMT system.", "It would be interesting to see the performance of a PBSMT system trained on 100K parallel sentences.", "6) How did you arrive at the beam size of 12 ?", "Was this a hyperparameter?", "Just curious.", "7) The comparable NMT set up is not very clear.", "Can you please explain it in detail ?", "In the same paragraph, what exactly do you mean by \"the supervised system in this paper is relatively small?\""], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "reference", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "quote", "non-arg", "non-arg", "evaluation", "request", "fact", "evaluation", "request", "request", "request", "request", "evaluation", "request", "request"]}
+ {"doc_id": "Skjfeg5gG", "text": ["In the context of multitask reinforcement learning, this paper considers the problem of learning behaviours when given specifications of subtasks and the relationship between them, in the form of a task graph.", "The paper presents a neural task graph solver (NTS), which encodes this as a recursive-reverse-recursive neural network.", "A method for learning this is presented, and fine tuned with an actor-critic method.", "The approach is evaluated in a multitask grid world domain.", "This paper addresses an important issue in scaling up reinforcement learning to large domains with complex interdependencies in subtasks.", "The method is novel,", "and the paper is generally well written.", "I unfortunately have several issues with the paper in its current form, most importantly around the experimental comparisons.", "The paper is severely weakened by not comparing experimentally to other learning (hierarchical) schemes, such as options or HAMs.", "None of the comparisons in the paper feature any learning.", "Ideally, one should see the effect of learning with options (and not primitive actions) to fairly compare against the proposed framework.", "At some level, I question whether the proposed framework is doing any more than just value function propagation at a task level,", "and these experiments would help resolve this.", "Additionally, the example domain makes no sense.", "Rather use something more standard, with well-known baselines, such as the taxi domain.", "I would have liked to see a discussion in the related work comparing the proposed approach to the long history of reasoning with subtasks from the classical planning literature, notably HTNs.", "I found the description of the training of the method to be rather superficial,", "and I don't think it could be replicated from the paper in its current level of detail.", "The approach raises the natural questions of where the tasks and the task graphs come from.", "Some acknowledgement and discussion of this would be useful.", "The legend in the middle of Fig 4 obscures the plot (admittedly not substantially).", "There are also a number of grammatical errors in the paper, including the following non-exhaustive list:", "2: as well as how to do -> as well as how to do it", "Fig 2 caption: through bottom-up -> through a bottom-up", "3: Let S be a set of state -> Let S be a set of states", "3: form of task graph -> form of a task graph", "3: In addtion -> In addition", "4: which is propagates -> which propagates", "5: investigated following -> investigated the following"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "evaluation", "fact", "request", "request", "request", "request", "request", "request", "request"]}
+ {"doc_id": "HkQOWPieM", "text": ["This paper considers the following model of a signal x = W^T h + b, where h is an m-dimensional random sparse vector, W is an m by n matrix, b is an n dimensional fixed bias vector. ", "The random vector h follows an iid sparse signal model, ", "each coordinate independently have some probability of being zero, ", "and the remaining probability is distributed among nonzero values according to some reasonable pdf/pmf. ", "The task is to recover h, from the observation x via the activation functions like Sigmoid or ReLU. ", "For example, \\hat{h} = Sigmoid(W^T h + b).", "The authors then show that, under the random sparsity model of h, it is possible to upper bound the probability P(||h-\\hat{h}|| > \\delta. m) in terms of the parameters of the distribution of h and W and b. ", "In some cases noise can also be tolerated. ", "In particular, if W is incoherent (columns being near-orthonormal), then the guarantee is stronger. ", "As far as I understood, the proofs make sense ", "- they basically use Chernoff-bound type argument. ", "It is my impression that a lot of conditions have to be satisfied for the recovery guarantee to be meaningful. ", "I am unsure if real datasets will satisfied so many conditions. ", "Also, the usual objective of autoencoders is to denoise - i.e. recover x, without any access to W. ", "The authors approach in this vein seem to be only empirical. ", "Some recent works on associative memory also assume the sparse recovery model ", "- connections to this literature would have been of interest. ", "It is also not clear why compressed sensing-type recovery using a single ReLU or Sigmoid would be of interest: ", "are their complexity benefits?"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request"]}
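Each line of ampere_label_val.jsonl is one JSON object with a "doc_id", a "text" list holding a peer review split into proposition fragments, and a parallel "labels" list carrying one tag per fragment (the tags appearing above are "evaluation", "fact", "request", "quote", "reference", and "non-arg"). Below is a minimal loading and sanity-check sketch; the helper name load_ampere is illustrative, and only the file name and field names come from this commit.

```python
import json
from collections import Counter

def load_ampere(path="ampere_label_val.jsonl"):
    """Load AMPERE-style JSONL: one review per line, with a 'text' list of
    proposition fragments and a parallel 'labels' list (one tag each)."""
    docs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            doc = json.loads(line)
            # The annotation scheme pairs each fragment with exactly one label.
            assert len(doc["text"]) == len(doc["labels"]), doc["doc_id"]
            docs.append(doc)
    return docs

if __name__ == "__main__":
    docs = load_ampere()
    print(f"{len(docs)} reviews")  # this validation split adds 20 lines
    print(Counter(tag for d in docs for tag in d["labels"]).most_common())
```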