{"doc_id": "ByFZUzFlf", "text": ["Active learning for deep learning is an interesting topic ", "and there is few useful tool available in the literature. ", "It is happy to see such paper in the field. ", "This paper proposes a batch mode active learning algorithm for CNN as a core-set problem. ", "The authors provide an upper bound of the core-set loss, which is the gap between the training loss on the whole set and the core-set. ", "By minimizing this upper bound, the problem becomes a K-center problem which can be solved by using a greedy approximation method, 2-OPT. ", "The experiments are performed on image classification problem (CIFAR, CALTECH, SVHN datasets), under either supervised setting or weakly-supervised setting. ", "Results show that the proposed method outperforms the random sampling and uncertainty sampling by a large margin. ", "Moreover, the authors show that 2-OPT can save tractable amount of time in practice with a small accuracy drop.", "The proposed algorithm is new ", "and writing is clear. ", "However, the paper is not flawless. ", "The proposed active learning framework is under ERM and cover-set, which are currently not supported by deep learning. ", "To validate such theoretical result, a non-deep-learning model should be adopted. ", "The ERM for active learning has been investigated in the literature, such as \"Querying discriminative and representative samples for batch mode active learning\" in KDD 2013, which also provided an upper bound loss of the batch mode active learning ", "and seems applicable for the problem in this paper. ", "Another interesting question is most of the competing algorithm is myoptic active learning algorithms. ", "The comparison is not fair enough. ", "The authors should provide more competing algorithms in batch mode active learning."], "labels": ["evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "request"]} {"doc_id": "rJwXkeOgM", "text": ["The paper considers the problem of training neural networks in mixed precision (MP), using both 16-bit floating point (FP16) and 32-bit floating point (FP32). ", "The paper proposes three techniques for training networks in mixed precision: first, keep a master copy of network parameters in FP32; second, use loss scaling to ensure that gradients are representable using the limited range of FP16; third, compute dot products and reductions with FP32 accumulation. ", "Using these techniques allows the authors to match the results of traditional FP32 training on a wide variety of tasks without modifying any training hyperparameters. 
", "The authors show results on ImageNet classification (with AlexNet, VGG, GoogLeNet, Inception-v1, Inception-v3, and ResNet-50), VOC object detection (with Faster R-CNN and Multibox SSD), speech recognition in English and Mandarin (with CNN+GRU), English to French machine translation (with multilayer LSTMs), language modeling on the 1 Billion Words dataset (with a bigLSTM), and generative adversarial networks on CelebFaces (with DCGAN).", "Pros: - Three simple techniques to use for mixed-precision training", "- Matches performance of traditional FP32 training without modifying any hyperparameters", "- Very extensive experiments on a wide variety of tasks", "Cons: - Experiments do not validate the necessity of FP32 accumulation", "- No comparison of training time speedup from mixed precision", "With new hardware (such as NVIDIA\u2019s Volta architecture) providing large computational speedups for MP computation, I expect that MP training will become standard practice in deep learning in the near future. ", "Naively porting FP32 training recipes can fail due to the reduced numeric range of FP16 arithmetic; ", "however by adopting the techniques of this paper, practitioners will be able to migrate their existing FP32 training pipelines to MP without modifying any hyperparameters. ", "I expect these techniques to be hugely impactful as more people begin migrating to new MP hardware.", "The experiments in this paper are very exhaustive, covering nearly every major application of deep learning. ", "Matching state-of-the-art results on so many tasks increases my confidence that I will be able to apply these techniques to my own tasks and architectures to achieve stable MP training.", "My first concern with the paper is that there are no experiments to demonstrate the necessity of FP32 accumulation. ", "With an FP32 master copy of the weights and loss scaling, can all arithmetic be performed solely in FP16, or are there some tasks where training will still diverge?", "My second concern is that there is no comparison of training-time speedup using MP. ", "The main reason that MP is interesting is because new hardware promises to accelerate it. ", "If people are willing to endure the extra engineering overhead of implementing the techniques from this paper, what kind of practical speedups can they expect to see from their workloads? ", "NVIDIA\u2019s marketing material claims that the Tensor Cores in the V100 offer an 8x speedup over its general-purpose CUDA cores ", "(https://www.nvidia.com/en-us/data-center/tesla-v100/). ", "Since in this paper some operations are performed in FP32 (weight updates, batch normalization) and other operations are bound by memory and not compute bandwidth, ", "what kinds of speedups do you see in practice when moving from FP32 to MP on V100?", "My other concerns are minor. ", "Mandarin speech recognition results are reported on \u201cour internal test set\u201d. ", "Is there any previously published work on this dataset, or any publicly available test set for this task?", "The notation around the Inception architectures should be clarified. ", "According to [3] and [4], \u201cInception-v1\u201d and \u201cGoogLeNet\u201d both refer to the architecture used in [1]. ", "The architecture used in [2] is referred to as \u201cBN-Inception\u201d by [3] and \u201cInception-v2\u201d by [4]. ", "\u201cInception-v3\u201d is the architecture from [3], which is not currently cited. 
", "To improve clarity in Table 1, I suggest renaming \u201cGoogLeNet\u201d to \u201cInception-v1\u201d, changing \u201cInception-v1\u201d to \u201cInception-v2\u201d, and adding explicit citations to all rows of the table.", "In Section 4.3 the authors note that \u201chalf-precision storage format may act as a regularizer during training\u201d. ", "Though the effect is most obvious from the speech recognition experiments in Section 4.3, ", "MP also achieves slightly higher performance than baseline for all ImageNet models but Inception-v1 and for both object detection models; ", "these results add support to the idea of FP16 as a regularizer.", "Minor typos: Section 3.3, Paragraph 3: \u201ceither FP16 or FP16 math\u201d -> \u201ceither FP16 or FP32 math\u201d", "Section 4.1, Paragraph 4: \u201c pre-ativation\u201d -> \u201cpre-activation\u201d", "Overall this is a strong paper, ", "and I believe that it will be impactful as MP hardware becomes more widely used.", "References [1] Szegedy et al, \u201cGoing Deeper with Convolutions\u201d, CVPR 2015", "[2] Ioffe and Szegedy, \u201cBatch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift\u201d, ICML 2015", "[3] Szegedy et al, \u201cRethinking the Inception Architecture for Computer Vision\u201d, CVPR 2016", "[4] Szegedy et al, \u201cInception-v4, Inception-ResNet and the Impact of Residual Connections on Learning\u201d, ICLR 2016 Workshop"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "request", "fact", "reference", "fact", "request", "evaluation", "fact", "request", "request", "fact", "fact", "fact", "request", "fact", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "reference", "reference", "reference", "reference"]} {"doc_id": "SysTGDdxf", "text": ["This paper is based on the theory of group equivariant CNNs (G-CNNs), proposed by Cohen and Welling ICML'16.", "Regular convolutions are translation-equivariant, meaning that if an image is translated, its convolution by any filter is also translated. ", "They are however not rotation-invariant for example. ", "G-CNN introduces G-convolutions, which are equivariant to a given transformation group G.", "This paper proposes an efficient implementation of G-convolutions for 6-fold rotations (rotations of multiple of 60 degrees), using a hexagonal lattice. ", "The approach is evaluated on CIFAR-10 and AID, a dataset of aerial scene classification. ", "The approach outperforms G-convolutions implemented on a squared lattice, which allows only 4-fold rotations on AID by a short margin. ", "On CIFAR-10, the difference does not seem significative (according to Tables 1 and 2).", "I guess this can be explained by the fact that rotation equivariance makes sense for aerial images, where the scene is mostly fronto-parallel, but less for CIFAR (especially in the upper layers), which exhibits 3D objects.", "I like the general approach of explicitly putting desired equivariance in the convolutional networks. ", "Using a hexagonal lattice is elegant, even if it is not new in computer vision ", "(as written in the paper). ", "However, as the transformation group is limited to rotations, ", "this is interesting in practice mostly for fronto-parallel scenes, as the experiences seem to show. 
", "It is not clear how the method can be extended to other groups than 2D rotations.", "Moreover, I feel like the paper sometimes tries to mask the fact that the proposed method is limited to rotations. ", "It is admittedly clearly stated in the abstract and introduction, but much less in the rest of the paper.", "The second paragraph of Section 5.1 is difficult to keep in a paper. ", "It says that \"From a qualitative inspection of these hexagonal interpolations we conclude that no information is lost during the sampling procedure.\" ", "\"No information is lost\" is a strong statement from a qualitative inspection, especially of a hexagonal image. ", "This statement should probably be removed. ", "One way to evaluate the information lost could be to iterate interpolation between hexagonal and squared lattices to see if the image starts degrading at some point."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "request", "request"]} {"doc_id": "Bk15lpF1G", "text": ["This paper studies the adjustment of dropout rates ", "which is a useful tool to prevent the overfitting of deep neural networks. ", "The authors derive a generalization error bound in terms of dropout rates. ", "Based on this, the authors propose a regularization framework to adaptively select dropout rates. ", "Experimental results are also given to verify the theory.", "Major comments: (1) The Empirical Rademacher complexity is not defined. ", "For completeness, it would be better to define it at least in the appendix.", "(2) I can not follow the inequality (5). ", "Especially, according to the main text, f^L is a vector-valued function . ", "Therefore, it is not clear to me the meaning of \\sum\\sigma_if^L(x_i,w) in (5).", "(3) I can also not see clearly the third equality in (9). ", "Note that f^l is a vector-valued function. ", "It is not clear to me how it is related to a summation over j there.", "(4) There is a linear dependency on the number of classes in Theorem 3.1. ", "Is it possible to further improve this dependency?", "Minor comments:(1) Section 4: 1e-3,1e-4,1e-5 is not consistent with 1e^{-3}, 1e^{-4},1e^{-5}", "(2) Abstract: there should be a space before \"Experiments\".", "(3) It would be better to give more details (e.g., page, section) in citing a book in the proof of Theorem 3.1", "Summary: The mathematical analysis in the present version is not rigorous. 
", "The authors should improve the mathematical analysis."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "non-arg", "fact", "request", "request", "evaluation", "request"]} {"doc_id": "By-4qhFeM", "text": ["This paper proposes another entropic regularization term for deep neural nets.", "The key idea can be stated as follows: Let X denote the observed input, C the hidden class label taking values in a finite set, and Y the representation computed by a neural net.", "Then C -> X -> Y is a Markov chain.", "Moreover, assuming that the mapping X -> Y is deterministic (as is the case with neural nets or any other deterministic representations), we can write down the mutual information between X and Y as", "I(X;Y) = H(Y) - H(Y|X) = H(Y).", "A simple manipulation shows that H(Y) = I(C;Y) + H(Y|C).", "The authors interpret the first term, I(C;Y), as a data fit term that quantifies the statistical correlations between the class label C and the representation Y,", "whereas the second term, H(Y|C), is the amount by which the representation Y can be compressed knowing the class label C.", "The authors then propose to 'explicitly decouple' the data-fit term I(C;Y) from the regularization penalty and focus on minimizing H(Y|C).", "In fact, they replace this term by the sum of conditional entropies of the form H(Y_{i,k}|C), where Y_{i,k} is the activation of the ith neuron in the kth layer of the neural net.", "The final step is to recognize that the conditional entropy may not admit a scalable and differentiable estimator,", "so they use the relation between a quantity called entropy power and second moments to replace the entropic penalty with the conditional variance penalty Var[Y_{i,k}|C].", "Since the class-conditional distributions are unknown,", "a surrogate model Q_{Y|C} is used.", "The authors present some experimental results as well.", "However, this approach has a number of serious flaws.", "First of all, if the distribution of X is nonatomic and the mapping X -> Y is continuous (in the case of neural nets, it is even Lipschitz), then the mutual information I(X;Y) is infinite.", "In that case, the representation of I(X;Y) in terms of entropies is not valid", "-- indeed, one can write the mutual information between two jointly distributed random variables X and Y in terms of differential entropies as I(X;Y) = h(Y) - h(Y|X),", "but this is possible only if both terms on the right-hand side exist.", "This is not the case here,", "so, in particular, one cannot relate I(X;Y) to I(C;Y).", "Ironically, I(C;Y) is finite,", "because C takes values in a finite set,", "so I(C;Y) is at most the log cardinality of the set of labels.", "One can start, then, simply with I(C;Y) and express it as H(C) - H(C|Y).", "Both terms are well-defined Shannon entropies, where the first one does not depend on the representation,", "whereas the second one involves the representation.", "But then, if the goal is to _minimize_ the mutual information between I(C;Y), it makes sense to _maximize_ the conditional entropy H(C|Y).", "In short, the line of reasoning that leads to minimizing H(Y|C) is not convincing.", "Moreover, why is it a good idea to _minimize_ I(C;Y) in the first place?", "Shouldn't one aim to maximize it subject to structural constraints on the representation, along the lines of InfoMax?", "The next issue is the chain of reasoning that leads to replacing H(Y|C) with Var[Y|C].", "One could start with 
that instead without changing the essence of the approach, but then the magic words \"Shannon decay\" would have to disappear altogether, and the proposed method would lose all of its appeal."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]} {"doc_id": "SyU_UK2lf", "text": ["This paper is about modifications to the skip-thought framework for learning sentence embeddings. ", "The results show performance comparable to or better than skip-thought while decreasing training time. ", "I think the overall approach makes sense: ", "use an RNN encoder ", "because we know it works well, ", "but improve training efficiency by changing the decoder to a combination of feed-forward and convolutional layers. ", "I think it may be the case that this works well because the decoder is not auto-regressive but merely predicts each word independently. ", "This is possible because the decoder will not be used after training. ", "So all the words can be predicted all at once with a fixed maximum sentence length. ", "In typical encoder-decoder applications, the decoder is used at test time to get predictions, ", "so it is natural to make it auto-regressive. ", "But in this case, the decoder is thrown away after training, ", "so it makes more sense to make the decoder non-auto-regressive. ", "I think this point should be made in the paper. ", "Also, I think it's worth noting that an RNN decoder could be used in a non-auto-regressive architecture as well. ", "That is, the sentence encoding could be mapped to a sequence of length 30 as is done with the CNN decoder currently; ", "then a (multi-layer) BiLSTM could be run over that sequence, ", "and then a softmax classifier can be attached to each hidden vector to predict the word at that position. ", "It would be interesting to compare that BiLSTM decoder with the proposed CNN decoder and also to compare it to a skip-thought-style auto-regressive RNN decoder. ", "This would let us understand whether the benefit is coming more from the non-auto-regressive nature of the decoder or from the CNN vs RNN differences. ", "That is, it would make sense to factor the decision of decoder design along multiple axes. ", "One axis could be auto-regressive vs predict-all-words. ", "Another axis could be using a CNN over the sequence of word positions or an RNN over the sequence of word positions. ", "For auto-regressive models, another axis could be train using previous ground-truth word vs train using previous predicted word. ", "Skip-thought corresponds to an auto-regressive RNN (using the previous ground-truth word IIRC). ", "The proposed decoder is a predict-all-words CNN. ", "It would be natural to also experiment with an auto-regressive CNN and a predict-all-words RNN (like what I described in the paragraph above). ", "The paper is choosing a single point in the space and referring to it as a \"CNN decoder\" ", "whereas there are many possible architectures that can be described this way ", "and I think it would strengthen the paper to increase the precision in discussing the architecture and possible alternatives. ", "Overall, I think the architectural choices and results are strong enough to merit publication. 
", "Adding any of the above empirical comparisons would further strengthen the paper. ", "However, I did have quibbles with some of the exposition and some of the claims made throughout the paper. ", "They are detailed below:Sec. 2:In the \"Decoder\" paragraph: please add more details about how the words are predicted. ", "Are there final softmax layers that provide distributions over output words? ", "I couldn't find this detail in the paper. ", "What loss is minimized during training? ", "Is it the sum of log losses over all words being predicted?", "Sec. 3:Section 3 does not add much to the paper. ", "The motivations there are mostly suggestive rather than evidence-based. ", "Section 3 could be condensed by about 80% or so without losing much information. ", "Overall, the paper has more than 10 pages of content, ", "and the use of 2 extra pages beyond the desired submission length of 8 should be better justified. ", "I would recommend adding a few more details to Section 2 and removing most of Section 3. ", "I'll mention below some problematic passages in Section 3 that should be removed.", "Sec. 3.2:\"...this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process.\" ", "What is the justification or evidence for this claim? ", "I think the claim should be supported by an argument or some evidence or else should be removed. ", "If the authors intend the subsequent paragraphs to justify the claim, then see my next comments. ", "Sec. 3.2:\"The existence of the ground-truth current word embedding potentially decreases the tendency for the decoder to exploit other information from the sentence representation.\"", "But this is not necessarily an inherent limitation of RNN decoders ", "since it could be addressed by using the embedding of the previously-predicted word rather than the ground-truth word. ", "This is a standard technique in sequence-to-sequence learning; cf. scheduled sampling (Bengio et al., 2015). ", "Sec. 3.2: \"Although the word order information is implicitly encoded in the CNN decoder, it is not emphasized as it is explicitly in the RNN decoder. The CNN decoder cares about the quality of generated sequences globally instead of the quality of the next generated word. Relaxing the emphasis on the next word, may help the CNN decoder model to explore the contribution of context in a larger space.\"", "Again, I don't see any evidence or justification for these arguments. ", "Also see my discussion above about decoder variations; ", "these are not properties of RNNs vs CNNs but rather properties of auto-regressive vs predict-all-words decoders. ", "Sec. 5.2-5.3:There are a few high-level decisions being tuned on the test sets for some of the tasks, e.g., the length of target sequences in Section 5.2 and the number of layers and channel size in Section 5.3. ", "Sec. 5.4:When trying to explain why an RNN encoder works better than a CNN encoder, the paper includes the following: ", "\"We stated above that, in our belief, explicit usage of the word order information will augment the transferability of the encoder, and constrain the search space of the parameters in the encoder. The results match our belief.\"", "I don't think these beliefs are concrete enough to be upheld or contradicted. ", "Both encoders explicitly use word order information. ", "Can you provide some formal or theoretical statement about how the two encoders treat word order differently? 
", "I fear that it's only impressions and suppositions that lead to this difference, rather than necessarily something formal. ", "Sec. 5.4:In Table 1, it is unclear why the \"future predictor\" model is the one selected to be reported from Gan et al (2017). ", "Gan et al has many settings and the \"future predictor\" setting is the worst. ", "An explanation is needed for this choice. ", "Sec. 6.1: In the \"BYTE m-LSTM\" paragraph:\"Our large RNN-CNN model trained on Amazon Book Review (the largest subset of Amazon Review) performs on par with BYTE m-LSTM model, and ours works better than theirs on semantic relatedness and entailment tasks.\" ", "I'm not sure this \"on par\" assessment is warranted by the results in Table 2. ", "BYTE m-LSTM is better on MR by 1.6 points and better on CR by 4.7 points. ", "The authors' method is better on SUBJ by 0.7 and better on MPQA by 0.5. ", "So on sentiment tasks, BYTE m-LSTM is clearly better, ", "and on the other tasks the RNN-CNN is typically better, especially on SICK-r. ", "More minor things are below:Sec. 1:The paper contains this: \"The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b)\"", "I don't think this is accurate. ", "When restricting attention to neural network methods, it would be more correct to give credit to Collobert et al. (2011). ", "But moving beyond neural methods, there were decades of previous work in using context information (counts of context words) to produce vector representations of words. ", "typo: \"which d reduces\" --> \"which reduces\"", "Sec. 2:The notation in the text doesn't match that in Figure 1: w_i^1 vs. w_1 and h_i^1 vs h_1. ", "Instead of writing \"non-parametric composition function\", describe it as \"parameter-free\". ", "\"Non-parametric\" means that the number of parameters grows with the data, not that there are no parameters. ", "In the \"Representation\" paragraph: how do you compute a max over vectors? ", "Is it a separate max for each dimension? ", "This is not clear from the notation used.", "Sec. 3.1:inappropriate word choice: the use of \"great\" in \"a great and efficient encoding model\"", "Sec. 3.2:inappropriate word choice: the use of \"unveiled\" in \"is still to be unveiled\"", "Sec. 3.4:Tying input and output embeddings can be justified with a single sentence and the relevant citations (which are present here). ", "There is no need for speculation about what may be going on, e.g.: \"the model learns to explore the non-linear compositionality of the input words and the uncertain contribution of the target words in the same space\".", "Sec. 4:I think STS14 should be defined and cited where the other tasks are described. ", "Sec. 5.3:typo in Figure 2 caption: \"and and\"", "Sec. 6.1: In the \"Skip-thought\" paragraph:inappropriate word choice: \"kindly\"", "The description that says \"we cut off a branch for decoding\" is not clear to me. ", "What is a \"branch for decoding\" in this context? ", "Please modify it to make it more clear. ", "References:Bengio S, Vinyals O, Jaitly N, Shazeer N. Scheduled sampling for sequence prediction with recurrent neural networks. NIPS 2015.", "Collobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K, Kuksa P. Natural language processing (almost) from scratch. 
Journal of Machine Learning Research 2011."], "labels": ["fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "non-arg", "quote", "fact", "request", "non-arg", "quote", "evaluation", "fact", "evaluation", "quote", "fact", "non-arg", "fact", "fact", "fact", "quote", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "request", "quote", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "fact", "request", "fact", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "request", "request", "reference", "reference"]} {"doc_id": "rJ96Jgclf", "text": ["I quite liked the revival of the dual memory system ideas and the cognitive (neuro) science inspiration. ", "The paper is overall well written and tackles serious modern datasets, which was impressive, ", "even though it relies on a pre-trained, fixed ResNet (see point below).", "My only complaint is that I felt I couldn\u2019t understand why the model worked so well. ", "A better motivation for some of the modelling decisions would be helpful. ", "For instance, how much the existence (and training) of a BLA network really help ", "\u2014 which is a central new part of the paper, and wasn\u2019t in my view well motivated. ", "It would be nice to compare with a simpler baseline, such as a HC classifier network with reject option. ", "I also don\u2019t really understand why the proposed pseudorehearsal works so well. ", "Some formal reasoning, even if approximate, would be appreciated.", "Some additional comments below: - Although the paper is in general well written, ", "it falls on the lengthy side ", "and I found it difficult at first to understand the flow of the algorithm. ", "I think it would be helpful to have a high-level pseudocode presentation of the main steps.", "- It was somewhat buried in the details that the model actually starts with a fixed, advanced feature pre-processing stage (the ResNet, trained on a distinct dataset, as it should). ", "I\u2019m fine with that, ", "but this should be discussed. ", "Note that there is evidence that the neuronal responses in areas as early as V1 change as monkeys learn to solve discrimination tasks. ", "It should be stressed that the model does not yet model end-to-end learning in the incremental setting.", "- p. 4, Eq. 4, is it really necessary to add a loss for the intermediate layers, and not only for the input layer? ", "I think it would be clearer to define the \\mathcal{L} explictily somewhere. 
", "Also, shouldn\u2019t the sum start at j=0?"], "labels": ["evaluation", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "fact", "request", "request", "request", "request"]} {"doc_id": "SkFdaJtgf", "text": ["The authors present an estimator for the mutual information (MI) based on the Donsker-Varadhan representation for the KL divergence and its generalization to arbitrary f-divergences by Ruderman et al. ", "While that last work introduced an estimator based on optimization over the unit ball in an RKHS, ", "the current work propose to use a parametric function class given by a neural network ", "(I'd suggest that the authors make this point more explicit, ", "as currently it's not totally clear what their actual contribution is and how their work compares to the prior art they cite). ", "The authors show that such an estimator can be used to train models with less mode-dropping in adversarial models. ", "The work is quite straightforward, ", "but improves over similar work in the GAN space by Nowozin et al. by using Ruderman's tighter variational representation instead of Nguyen's one.", "The paper contains many typos and grammatical errors ", "and the authors should do an exhaustive proof-reading. ", "More problematic is that, right after eq. 10, the authors mention \"We show in the Appendix that OMIE has the desirable strong consistency and convergence properties\". ", "However, the appendix doesn't contain such a proof. ", "Is it missing from the submitted version? ", "I don't think that such a consistency proof is strictly necessary for a paper like this, ", "but for the review to be accurate I need to see the proof. ", "Since I can't find it, ", "I assume it does not exist. ", "In that case, the authors should give less emphasis to the MI estimator itself and more to the empirical properties and applications.", "The authors present some experiments comparing different estimators of MI applied to synthetic data. ", "Figure 1 is hard to read, ", "I suggest the authors try to come up with a more legible plot. ", "Figure 2 is also a bit surprising, ", "why show error for 50 dimensions but estimates for 2 dimensions? ", "Since these experiments are quick to run, ", "it would be helpful to get more information on how the gap between the methods change as the dimensionality increases (e.g. a surface plot with d and # of iterations on the x and y axes). ", "Also it would be highly beneficial to compare with the method in Ruderman at al., ", "so that people interested in MI estimation but who don't plan on using the estimator as part of a neural net architecture can get some idea on how the inductive bias of NNs compare to RKHS.", "In the caption to Fig. 3 the authors state \"The OMIEGAN generator learns a distribution with a high amount of structured noise\", which I find hard to understand. ", "Probably the authors can be a bit more precise than saying \"structured noise\".", "I would recommend dropping the Information Bottleneck section to focus on showing more convincingly the impact of OMIE in GANs. ", "The experiments section currently looks rushed and lacking in depth.", "In summary, this work provides value by introducing a (previously known) superior f-divergence variational representation to the GAN community. 
", "The mode-collapse prevention via MI maximisation is also interesting and deserves more experimental attention to make the paper stronger."], "labels": ["fact", "fact", "fact", "request", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "evaluation", "fact", "evaluation"]} {"doc_id": "S1uz175xf", "text": ["The authors study the problem of distributed routing in a network, where the goal is to minimize the maximal load (i.e. the load of the link with the highest utilization). ", "The authors advocate to use multi-agent reinforcement learning. ", "The main idea put forward by the authors is that by designing artificial rewards (to guide the agents), one can achieve faster exploration, in order to reduce convergence time.", "While the authors put forward several interesting ideas, ", "there are some shortcomings to the present version of the paper, including:", "- The design objective seems flawed from the networking point of view: ", "while minimizing the maximal load of a link is certainly a good starting point (to avoid instable queues) ", "one typically wants to minimize delay (or maximize flow throughput). ", "Indeed, it is possible to have a larger maximal load while reducing delay in many cases.", "- Furthermore, the authors do not provide a baseline to which the outcome of the learning algorithms they propose: ", "for instance how does their approach compare to simple policies (those are commonplace in networking) such as MaxWeight, Backpressure and so on ?", "- The authors argue that using multi-agent learning is more desirable than single agent (i.e. with a single reward signal which is common to all agents). ", "However, is multi-agent guaranteed to converge in such a setting ? ", "If some versions of the problem (for some particular reward signal) are not guaranteed to converge, it is difficult to understand whether \"convergence\" is slow due to an inefficient exploration, or simply because convergence cannot occur in the first place.", "- The learning algorithms used are not clearly explained: ", "the authors simply state that they use \"ACCNet\" (from some unpublished prior work), ", "but to readers unfamiliar with this algorithm, it is difficult to judge the contents of the paper. ", "- In the numerical experiments, what is the \"convergence rate\" ? ", "is it the ratio between the mean reward of the learnt policy and that of the optimal ? ", "For how many time steps are the learning algorithm run before evaluating their outcome ? ", "What are the meaning of the various input parameter of ACCnet, ", "and is the performance sensitive to those parameters ?"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "request", "request", "request"]} {"doc_id": "rJW-33tlG", "text": ["This paper present the application of the memory buffer concept to speech synthesis, ", "and additionally learns a \"speaker vector\" that makes the system adaptive and work reasonably well on \"in-the-wild\" speech data. 
", "This is a relevant problem, and a novel solution, ", "but synthesis is a wicked problem to evaluate, ", "so I am not sure if ICLR is the best venue for this paper. ", "I see two competing goals:", "- If the focus is on showing that the presented approach outperforms other approaches under given conditions, a different task would be better (for example recognition, or some sort of trajectory reconstruction)", "- If the focus is on showing that the system outperforms other synthesis systems, then a speech oriented venue might be best ", "(and it is unfortunate that optimized hyper-parameters for the other systems are not available for a fair comparsion)", "- If fair comparisons with the other appraoches cannot be made, my sense is that the multi-speaker (post-training fitting) option is really the most interesting and novel contribution here, ", "which could be discussed in mroe detail", "Still, the approach is creative and interesting and deserves to be presented. ", "I have a few questions/ suggestions:Introduction - The link to Baddeley's \"phonological loop\" concept seems weak at best. ", "There is nothing phonological about the features that this model stores and retrieves, ", "and no evidence that the model behaves in a way consistent with \"phonologcial\" (or articulatory) assumptions or models ", "- maybe best to avoid distracting the reader with this concept and strengthen the speaker adaptation aspect?", "- The memory model is not an RNN, ", "but it is a recurrently called structure (as the name \"phonological loop\" also implies) ", "- so I would also not highlight this point much", "- Why would the four properties of the proposed method (mid of p. 2, end of introduction: memory buffer, shared memory, shallow fully connected networks, and simple reader mechanism) lead to better robustness and improve performance on noisy and limited training data? ", "Maybe the proposed approach works better for any speech synthesis task? ", "Why specifically for \"in-the-wild\" data? ", "The results in Table 2 show that the proposed system outperforms other systems on Blizzard 2013, but not Blizzard 2011 ", "- does this support the previous argument?", "- Why not also evaluate MCD scores? ", "This should be a quick and automatic way to diagnose what the system is doing? ", "Or is this not meaningful with the noisy training data?", "Previous work- Please introduce abbreviations the first time they are used (\"CBHG\" for example)", "- There is other work on using \"in-the-wild\" speech as well: ", "Pallavi Baljekar and Alan W Black. Utterance Selection Techniques for TTS Systems using Found Speech, SSW 2016, Sunnyvale, USA Sept 2016", "The architecture- Please explain the \"GMM\" (Gaussian Mixture Model?) attention mechanism in a bit more detail, how does back-propagation work in this case?", "- Why was this approach chosen? ", "Does it promise to be robust or good for low data situations specifically?", "- The fonts in Figure 2 are very small, ", "please make them bigger, ", "and the Figure may not print well in b/w. ", "Why does the mean of the absolute weights go up for high buffer positions? ", "Is there some \"leaking\" from even longer contexts?", "- I don't understand \"However, human speech is not deterministic and one cannot expect [...] truth\". ", "You are saying that the model cannot be excepted to reproduce the input exactly? ", "Or does this apply only to the temporal distribution of the sequence (but not the spectral characteristics)? 
", "The previous sentence implies that it does. ", "And how does teacher-forcing help in this case?", "- what type of speed is \"x5\"? ", "Five times slower or faster than real-time?", "Experiments- Table 2: maybe mention how these results were computed, i.e. which systems use optimized hyper parameters, and which don't? ", "How do these results support the interpretation of hte results in the introruction re in-the-wild data and found data?", "- I am not sure how to read Figure 4. ", "Maybe it would be easier to plot the different phone sequences against each other and show how the timings are off, i.e. plot the time of the center of panel one vs the time of the center of panel 2 for the corresponding phone, and show how this is different from a straight line. ", "Or maybe plot phones as rectangles that get deformed from square shape as durations get learned?", "- Figure 5: maybe provide spectrograms and add pitch contours to better show the effect of the dfifferent intonations? ", "- Figure 4 uses a lot of space, could be reduced, if needed", "Discussion- I think the first claim is a bit to broad ", "- nowhere is it shown that the method is inherently more robust to clapping and laughs, and variable prosody. ", "The authors will know the relevant data-sets better than I do, ", "maybe they can simply extend the discussion to show that this is what happens. ", "- Efficiency: I think Wavenet has also gotten much faster and runs in less than real-time now ", "- can you expand that discussion a bit, or maybe give estimates in times of FLOPS required, rather than anecdotal evidence for systems that may or may not be comparable?", "Conclusion- Now the advantage of the proposed model is with the number of parameters, rather than the computation required. ", "Can you clarify? 
", "Are your models smaller than competing models?"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "request", "request", "evaluation", "evaluation", "fact", "fact", "request", "fact", "fact", "request", "request", "request", "request", "fact", "fact", "request", "evaluation", "request", "request", "fact", "reference", "request", "request", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "request", "request", "request", "request", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "fact", "request", "request"]} {"doc_id": "SkxVw6t7z", "text": ["This paper presents a new method for detecting hypernymy by extending the distributional inclusion hypothesis to low-rank embeddings.", "The first half of the paper is written superbly, providing a sober account of the state-of-the-art in hypernymy detection via distributional semantics, and a clear, well-motivated explanation of the main algorithm (DIVE).", "The reformulation of PMI is interesting;", "the authors essentially replace P(w) with the uniform distribution", "(this should be stated explicitly in the paper, BTW).", "They then augment SGNS to reflect this change by dynamically selecting the number of negatively-sampled contexts according to the target word (w).", "This is a clever trick.", "I was also very happy to see that the authors evaluated on virtually every lexical inference dataset available.", "However, the remainder of the paper describing the experiments and their results was extremely difficult to parse.", "First of all, the line between the authors' contribution and prior art is blurry,", "because they seem to be introducing new metrics for measuring hypernymy as part of the experiments.", "There are also too many moving parts: datasets, evaluation metrics, detection metrics, embedding methods, hyperparameters, etc.", "These need to be organized, controlled for, and clearly explained.", "I have two major concerns regarding the experiments, beyond intelligibility.", "First, the authors used a specific corpus (with specific preprocessing) to train their vectors, but compared their results to those reported in other papers, which are not based on the same data.", "This invalidates these comparisons.", "Instead, the other methods should be replicated and rerun on exactly the same corpus.", "I would also recommend using a much larger corpus,", "since 50M tokens is considered quite small when training word embeddings.", "My second concern is that summation (dS) is doing all the heavy lifting.", "In Table 2, we can see that the difference is only 0.1 between dS and W * dS (where W is also not trained using DIVE).", "Since dS is basically a proxy for difference in word frequency,", "could it be that the proposed method is just computing which word is more general?", "This looks awfully familiar to Levy et al's prototypical hypernym result.", "Miscellaneous Comments: - There's a very recent paper on using vector norms for detecting hypernymy that might be worth contrasting with:", "https://arxiv.org/pdf/1710.06371.pdf", "- Micro-averaging these datasets is problematic,", "because some of the datasets based on WordNet are much larger than the hand-annotated ones, and will likely drown them out.", "Because these datasets are so different,", "I think it is critical to look at the details and not only at 
the averages.", "- PMI filtering needs to be controlled/ablated in the experiments.", "- While explaining equation 6, the authors say that the gradients for x and y are similar;", "this is not true,", "because k is a function of x (or y), and if one appears more often in general (not necessarily with c), then the gradients will be different as well."], "labels": ["fact", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "request", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "reference", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "fact", "fact"]} {"doc_id": "BJGq_QclM", "text": ["This paper leverages how deep Bayesian NNs, in the limit of infinite width, are Gaussian processes (GPs). ", "After characterizing the kernel function, this allows us to use the GP framework for prediction, model selection, uncertainty estimation, etc.", "- Pros of this work The paper provides a specific method to efficiently compute the covariance matrix of the equivalent GP and shows experimentally on CIFAR and MNIST the benefits of using the this GP as opposed to a finite-width non-Bayesian NN.", "The provided phase analysis and its relation to the depth of the network is also very interesting.", "Both are useful contributions as long as deep wide Bayesian NNs are concerned. ", "A different question is whether that regime is actually useful.", "- Cons of this work Although this work introduces a new GP covariance function inspired by deep wide NNs, ", "I am unconvinced of the usefulness of this regime for the cases in which deep learning is useful. ", "For instance, looking at the experiments, we can see that on MNIST-50k (the one with most data, and therefore, the one that best informs about the \"true\" underlying NN structure) the inferred depth is 1 for the GP and 2 for the NN, i.e., not deep. ", "Similarly for CIFAR, where only up to depth 3 is used. ", "None of these results beat state-of-the-art deep NNs.", "Also, the results about the phase structure show how increased depth makes the parameter regime in which these networks work more and more constrained. ", "In [1], it is argued that kernel machines with fixed kernels do not learn a hierarchical representation. ", "And such representation is generally regarded as essential for the success of deep learning. ", "My impression is that the present line of work will not be relevant for deep learning and will not beat state-of-the-art results ", "because of the lack of a structured prior. ", "In that sense, to me this work is more of a negative result informing that to be successful, deep Bayesian NNs should not be wide and should have more structure to avoid reaching the GP regime.", "- Other comments:In Fig. 5, use a consistent naming for the axes (bias and variances).", "In Fig. 1, I didn't find the meaning of the acronym NN with no specified width.", "Does the unit norm normalization used to construct the covariance disallow ARD input selection?", "[1] Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. The Curse of Dimensionality for Local Kernel Machines. 
2005."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "request", "reference"]} {"doc_id": "SkwCEXalM", "text": ["To speed up RL algorithms, the authors propose a simple method based on utilizing expert demonstrations. ", "The proposed method consists in explicitly learning a prediction function that maps each time-step into a state. ", "This function is learned from expert demonstrations. ", "The cost of visiting a state is then defined as the distance between that state and the predicted state according to the learned function. ", "This reward is then used in standard RL algorithms to learn to stick close to the expert's demonstrations. ", "An on-loop variante of this method consists of learning a function that maps each state into a next state according to the expert, instead of the off-loop function that maps time-steps into states.", "While the experiments clearly show the advantage of this method, ", "this is hardly surprising or novel. ", "The concept of encoding the demonstration explicitly in the form of a reward has been around for over a decade. ", "This is the most basic form of teaching by demonstration. ", "Previous works had used other models for generalizing demonstrations (GMMs, GPs, Kernel methods, neural nets etc..). ", "This paper uses a three layered fully connected auto-encoder (which is not that deep of a model, btw) for the same purpose. ", "The idea of using this model as a reward instead of directly cloning the demonstrations is pretty straightforward. ", "Other comments: - Most IRL methods would work just fine by defining rewards on states only and ignoring actions all together. ", "If you know the transition function, you can choose actions that lead to highly rewarding states, ", "so you don't need to know the expert's executed actions.", "- \"We assume that maximizing likelihood of next step prediction in equation 1 will be globally optimized in RL\". ", "Could you elaborate more on this assumption? ", "Your model finds rewards based on local state features, where a greedy (one-step planning) policy would reproduce the expert's demonstrations (if the system is deterministic). ", "It does not compare the global performance of the expert to alternative policies (as is typically done in IRL).", "- Related to the previous point: a reward function that makes every step of the expert optimal may not be always exist. ", "The expert may choose to go to terrible states with the hope of getting to a highly rewarding state in the future. ", "Therefore, the objective functions set in this paper may not be the right ones, unless your state description contains features related to future states so that you can incorporate future rewards in the current state (like in the reacher task, where a single image contains all the information about the problem). ", "What you need is actually features that can capture the value function (like in DQN) and not just the immediate reward (as is done in IRL methods). ", "- What if in two different trajectories, the expert chooses opposite actions for the same state appearing in both trajectories? ", "For example, there are two shortest paths to a goal, one starts with going left and another starts with going right. 
", "If you try to generate a state that minimizes the sum of distances to the two states (left and right ones), then you may choose to remain in the middle, which is suboptimal. ", "You wouldn't have this issue with regular IRL techniques, ", "because you can explain both behaviors with future rewards instead of trying to explain every action of the expert using only local state description."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "quote", "request", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "fact", "fact", "fact", "fact"]} {"doc_id": "BJfU-qDeG", "text": ["The authors provide an algorithm-agnostic active learning algorithm for multi-class classification.", "The core technique is to construct a coreset of points whose labels inform the labels of other points.", "The coreset construction requires one to construct a set of points which can cover the entire dataset.", "While this is NP-hard problem in general,", "the greedy algorithm is 2-approximate.", "The authors use a variant of the greedy algorithm along with bisection search to solve a series of feasibility problems to obtain a good cover of the dataset each time.", "This cover tells us which points are to be queried.", "The reason why choosing the cover is a good idea is", "because under suitable Lipschitz continuity assumption the generalization error can be controlled via an appropriate value of the covering radius in the data space.", "The authors use the coreset construction with a CNN to demonstrate an active learning algorithm for multi-class classification.", "The experimental results are convincing enough to show that it outperforms other active learning algorithms.", "However, I have a few major and minor comments.", "Major comments: 1. The proof of Lemma 1 is incomplete.", "We need the Lipschitz constant of the loss function.", "The loss function is a function of the CNN function and the true label.", "The proof of lemma 1 only establishes the Lipschitz constant of the CNN function.", "Some more extra work is needed to derive the lipschitz constant of the loss function from the CNN function.", "2. The statement of Prop 1 seems a bit confusing to me.", "the hypothsis says that the loss on the coreset = 0.", "But the equation in proposition 1 also includes the loss on the coreset.", "Why is this term included.", "Is this term not equal to 0?", "3. Some important works are missing.", "Especially works related to pool based active learning, and landmark results on labell complexity of agnostic active learning.", "UPAL: Unbiased Pool based active learning by Ganti & Gray. http://proceedings.mlr.press/v22/ganti12/ganti12.pdf", "Efficient active learning of half-spaces by Gonen et al. http://www.jmlr.org/papers/volume14/gonen13a/gonen13a.pdf", "A bound on the label complexity of agnostic active learning. http://www.machinelearning.org/proceedings/icml2007/papers/375.pdf", "4. 
The authors use L_2 loss as their objective function.", "This is a bit of a weird choice", "given that they are dealing with multi-class classification and the output layer is a sigmoid layer, making it a natural fit to work with something like a cross-entropy loss function.", "I guess the theoretical results do not extend to cross-entropy loss,", "but the authors do not mention these points anywhere in the paper.", "For example, the ladder network, which is one of the networks used by the authors is a network that uses cross-entropy for training.", "Minor-comment: 1. The feasibility program in (6) is an MILP.", "However, the way it is written it does not look like an MILP.", "It would have been great had the authors mentioned that u_j \\in {0,1}.", "2. The authors write on page 4, \"Moreover, zero training error can be enforced by converting average loss into maximal loss\".", "It is not clear to me what the authors mean here.", "For example, can I replace the average error in proposition 1, by maximal loss?", "Why can I do that?", "Why would that result in zero training error?", "On the whole this is interesting work and the results are very nice.", "But, the proof for Lemma 1 seems incomplete to me,", "and some choices (such as choice of loss function) are unjustified.", "Also, important references in active learning literature are missing."], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "fact", "fact", "request", "request", "evaluation", "evaluation", "reference", "reference", "reference", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "quote", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation"]} {"doc_id": "r11STCqxG", "text": ["This paper presents a generic unbiased low-rank stochastic approximation to full rank matrices that makes it possible to do online RNN training without the O(n^3) overhead of real-time recurrent learning (RTRL).", "This is an important and long-sought-after goal of connectionist learning", "and this paper presents a clear and concise description of why their method is a natural way of achieving that goal, along with experiments on classic toy RNN tasks with medium-range time dependencies for which other low-memory-overhead RNN training heuristics fail.", "My only major complaint with the paper is that it does not extend the method to large-scale problems on real data, for instance work from the last decade on sequence generation, speech recognition or any of the other RNN success stories that have led to their wide adoption", "(eg Graves 2013, Sutskever, Martens and Hinton 2011 or Graves, Mohamed and Hinton 2013).", "However, if the paper does achieve what it claims to achieve, I am sure that many people will soon try out UORO to see if the results are in any way comparable."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "reference", "evaluation"]} {"doc_id": "Bk2K_F9lz", "text": ["The paper presents an off-policy actor-critic method for learning a stochastic policy with entropy regularization. ", "It is a direct extension of maximum entropy reinforcement learning for Q-learning (recently called soft-Q learning), and named soft actor-critic (SAC). 
", "Empirically SAC is shown to outperform DDPG significantly in terms of stability and sample efficiency, and can solve relatively difficult tasks that previously only on-policy (or hybrid on-policy/off-policy) method such as TRPO/PPO can solve stably. ", "Besides entropy regularization, it also introduces multi-modal policy parameterization through mixture of Gaussians that enables diverse, on-policy exploration. ", "The main appeal of the paper is the strong empirical performance of this new off-policy method in continuous action benchmarks. ", "Several design choices could be the key, ", "so it is encouraged to provide more ablation studies on these, which would be highly valuable for the community. ", "In particular, - Amortization of Q and \\pi through fitting state value function", "- On-policy exploration vs OU process based off-policy exploration", "- Mixture vs non-mixture-based stochastic policy", "- SAC vs soft Q-learning", "Another valuable discussion to be had is the stability of off-policy algorithm comparing Q-learning versus actor-critic method.", "Pros: - Simple off-policy algorithm that achieves significantly better performance than existing off-policy baseline algorithms", "- It allows on-policy exploration in off-policy learning, partially thanks to entropy regularization that prevents variance from shrinking to 0. ", "It could be considered a major success of off-policy algorithm that removes heuristic exploration noise.", "Cons: - Method is relatively simple extension from existing work in maximum entropy reinforcement learning. ", "It is unclear what aspects lead to significant improvements in performance due to insufficient ablation studies. ", "Other question: - Above Eq. 7 it discusses that fitting a state value function wrt Q and \\pi is shown to improve the stability significantly. ", "Is this comparison with directly estimating state value using finite samples? ", "If so, is the primary instability due to variance of the estimate, which can be avoided by drawing a lot of samples or do full integration (still reasonably tractable for finite mixture model)? ", "Or, is the instability from elsewhere? ", "By having SGD-based fitting of state value function, it appears to simulate slowly changing target values (similar role as target networks). ", "If so, could a similar technique be used with DDPG and get more stable performance?"], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "request", "request", "request", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "request", "evaluation", "request"]} {"doc_id": "B1FEuWcez", "text": ["This paper describes an attempt of improving information flow in deep networks (but is used and tested here with seq2seq models although it is reality unrelated to seq2seq models per se). ", "Slightly different from Resnet the information flow is improved by not just adding the outputs from previous layers but instead concatenating the outputs from previous layers with the current outputs. ", "The authors claim better convergence speed and better results for a similar number of parameters although the differences seems to be in the noise. ", "Overall this is an OK technique but in my opinion not really novel enough to justify a whole paper about it ", "as it seems more like a relatively minor architecture tweak. 
", "The results seem to indicate that there were some problems with getting deeper networks to work for the baseline (why is in Table 3 baseline-6L worse than baseline-4L?) for which the reason could be a multitude of issues probably related to hyper-parameter tuning. ", "What is also missing is a an analysis of the negative consequences of this technique -- for example, doesn't the number of parameters increase with the depth of the network because of the concatenation? ", "Also, it would have been good to see more experiments with smaller baseline networks as well to match the smaller DenseNet networks in Table 1 and 2. ", "Finally, the writing of the paper could be improved a lot: ", "The basic idea is not well described (however, many times repeated) ", "and the grammar is often wrong ", "and also there are some typos."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "request", "request", "evaluation", "evaluation", "evaluation"]} {"doc_id": "HJzVc2sxf", "text": ["This paper proposes a new method to train residual networks in which one starts by training shallow ResNets, doubling the depth and warm starting from the previous smaller model in a certain way, and iterating.", "The authors relate this idea to a recent dynamical systems view of ResNets in which residual blocks are viewed as taking steps in an Euler discretization of a certain differential equation.", "This interpretation plays a role in the proposed training method by informing how the \u201cstep sizes\u201d in the Euler discretization should change when doubling the depth of the network.", "The punchline of the paper is that the authors are able to achieve similar performance as \u201cfull ResNet training\u201d but with significantly reduced training time.", "Overall, the proposed method is novel", "\u2014 even though this idea of going from shallow to deep is natural for residual networks,", "tying the idea to the dynamical systems perspective is elegant.", "Moreover the paper is clearly written.", "Experimental results are decent", "\u2014 there are clear speedups to be had based on the authors' experiments.", "However it is unclear if these gains in training speed are significant enough for people to flock to using this (more complicated) method of training.", "I only have a few small questions/comments: * A more naive way to do multi-level training would be to again iteratively double the depth, but perhaps not halve the step size.", "This might be a good baseline to compare against to demonstrate the value of the dynamical systems viewpoint.", "* One thing I\u2019m unclear on is how convergence was assessed\u2026", "my understanding is that the training proceeds for a fixed number of epochs (?)", "- but shouldn\u2019t this also depend on the depth in some way?", "* Would the speedups be more dramatic for a larger dataset like Imagenet?", "* Finally, not being very familiar with multigrid methods from the numerical methods literature", "\u2014 I would have liked to hear about whether there are deeper connections to these methods."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "request", "non-arg", "request"]} {"doc_id": "Syi9ojdgf", "text": ["This paper proposed a new parametrization scheme for weight matrices in neural network based on the Householder reflectors to solve the gradient vanishing and exploding 
problems in training. ", "The proposed method improved two previous papers:", "1) stronger expressive power than Mahammedi et al. (2017),", "2) faster gradient update than Vorontsov et al. (2017).", "The proposed parametrization scheme is natrual from numerical linear algebra point of view and authors did a good job in Section 3 in explaining the corresponding expressive power. ", "The experimental results also look promising. ", "It would be nice if the authors can analyze the spectral properties of the saddle points in linear RNN (nonlinear is better but it's too difficult I believe). ", "If the authors can show the strict saddle properties then as a corollary, (stochastic) gradient descent finds a global minimum. ", "Overall this is a strong paper ", "and I recommend to accept."], "labels": ["fact", "fact", "reference", "reference", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation"]} {"doc_id": "HyJlQlQWf", "text": ["The paper considers multi-task setting of machine learning.", "The first contribution of the paper is a novel PAC-Bayesian risk bound.", "This risk bound serves as an objective function for multi-task machine learning.", "A second contribution is an algorithm, called LAP, for minimizing a simplified version of this objective function.", "LAP algorithm uses several training tasks to learn a prior distribution P over hypothesis space.", "This prior distribution P is then used to find a posterior distribution Q that minimizes the same objective function over the test task.", "The third contribution is an empirical evaluation of LAP over toy dataset of two clusters and over MNIST.", "While the paper has the title of \"life-long learning\",", "the authors admit that all experiments are in multi-task setting, where the training is done over all tasks simultaneously.", "The novel risk bound and LAP algorithm can definitely be applied to life-long setting, where training tasks are available sequentially.", "But since there is no empirical evaluation in this setting,", "I suggest to adjust the title of the paper.", "The novel risk bound of the paper is an extension of the bound from [Pentina & Lampert, ICML 2014].", "The extension seems to be quite significant.", "Unlike the bound of [Pentina & Lampert, ICML 2014], the new bound allows to re-use many different PAC-Bayesian complexity terms that were published previously.", "I liked risk bound and optimization sections of the paper.", "But I was less convinced by the empirical experiments.", "Since the paper improves the risk bound of [Pentina & Lampert, ICML 2014],", "I expected to see an empirical comparison of LAP and optimization algorithm from the latter paper.", "To make such comparison fair, both optimization algorithms should use the same base algorithm, e.g. ridge regression, as in [Pentina & Lampert, ICML 2014].", "Also I suggest to use the datasets from the latter paper.", "The experiment with multi-task learning over MNIST dataset looks interesting,", "but it is still a toy experiment.", "This experiment will be more convincing with more sophisticated datasets (CIFAR-10, ImageNet) and architectures (e.g. 
Inception-V4, ResNet).", "Minor remarks: Section 6, line 4: \"Combing\" -> \"Combining\"", "Page 14, first equation: There should be \"=\" before the second expectation."], "labels": ["fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "evaluation", "evaluation", "request", "request", "request"]} {"doc_id": "SJk7H29xM", "text": ["This paper addresses the question of unsupervised clustering with high classification performance.", "They propose a deep variational autoencoder architecture with categorical latent variables at the deepest layer and propose to train it with modifications of the standard variational approach with reparameterization gradients.", "The model is tested on a medical imagining dataset where the task is to distinguish healthy from pathological lymphocytes from blood samples.", "I am not an expert on this particular dataset,", "but to my eye the results look impressive.", "They show high sensitivity and high specificity.", "This paper may be an important contribution to the medical imaging community.", "My primary concern with the paper is the lack of novelty and relatively little in the way of contributions to the ICLR community.", "The proposed model is a simple variant on the standard VAE models", "(see for example the Ladder VAE https://arxiv.org/abs/1602.02282 for deep models with multiple stochastic layers).", "This would be OK if a thorough evaluation on at least two other datasets showed similar improvements as the lymphocytes dataset.", "As it stands, it is difficulty for me to assess the value of this model.", "Minor questions / concerns: - The authors claim in the first paragraph of 3.2 that deterministic mappings lack expressiveness.", "Would be great to see the paper take this claim seriously and investigate it.", "- In equation (13) it isn't clear whether you use q_phi to be the discrete mass or the concrete density.", "The distinction is discussed in https://arxiv.org/abs/1611.00712", "- Would be nice to report the MCC in experimental results."], "labels": ["fact", "fact", "fact", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "request"]} {"doc_id": "SJ13MSaxf", "text": ["The authors demonstrate experimentally a problem with the way common latent space operations such as linear interpolation are performed for GANs and VAEs. ", "They propose a solution based on matching distributions using optimal transport. ", "Quite heavy machinery to solve a fairly simple problem, ", "but their approach is practical and effective experimentally ", "(though the gain over the simple SLERP heuristic is often marginal). ", "The problem they describe (and so the solution) deserves to be more widely known.", "Major comments: The paper is quite verbose, probably unnecessarily so. ", "Firstly, the authors devote over 2 pages to examples that distribution mismatches can arise in synthetic cases (section 2). ", "This point is well made by a single example (e.g. section 2.2) ", "and the interesting part is that this is also an issue in practice (experimental section). ", "Secondly, the authors spend a lot of space on the precise derivation of the optimal transport map for the uniform distribution. 
", "The fact that the optimal transport computation decomposes across dimensions for pointwise operations is very relevant, and the matching of CDFs, ", "but I think a lot of the mathematical detail could be relegated to an appendix, especially the detailed derivation of the particular CDFs.", "Minor comments: It seems worth highlighting that in practice, for the common case of a Gaussian, the proposed method for linear interpolation is just a very simple procedure that might be called \"projected linear interpolation\", where the generated vector is multiplied by a constant. ", "All the optimal transport theory is nice, ", "but it's helpful to know that this is simple to apply in practice.", "Might I suggest a very simple approach to fixing the distribution mismatch issue? ", "Train with a spherical uniform prior. ", "When interpolating, project the linear interpolation back to the sphere. ", "This matches distribution, and has the attractive property that the entire geodesic between two points lies in a region with typical probability density. ", "This would also work for vicinity sampling.", "In section 1, overfitting concerns seem like a strange way to motivate the desire for smoothness. ", "Overfitting is relatively easy to compensate for, ", "and investigating the latent space is interesting regardless.", "When discussing sampling from VAEs as opposed to GANs, it would be good to mention that one has to sample from p(x | z) not just p(z).", "Lots of math typos such as t - 1 should be 1 - t in (2), \"V times a times r\" instead of \"Var\" in (3) and \"s times i times n\" instead of \"sin\", etc, sqrt(1) * 2 instead of sqrt(12), inconsistent bolding of vectors. ", "Also strange use of blackboard bold Z to mean a vector of random variables instead of the integers.", "Could cite an existing source for the fact that most mass for a Gaussian is concentrated on a thin shell (section 2.2), ", "e.g. David MacKay Information Theory, Inference and Learning Algorithms.", "At the end of section 2.4, a plot of the final 1D-to-1D optimal transport function (for a few different values of t) for the uniform case would be incredibly helpful.", "Section 3 should be a subsection of section 2.", "For both SLERP and the proposed method, there's quite a sudden change around the midpoint of the interpolation in Figure 2. ", "It would be interesting to plot more points around the midpoint to see the transition in more detail. ", "(A small inkling that samples from the proposed approach might change fastest qualitatively near the midpoint of the interpolation perhaps maybe be seen in Figure 1, ", "since the angle is changing fastest there??)"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "request", "reference", "request", "request", "evaluation", "request", "evaluation", "evaluation"]} {"doc_id": "B143HDlWM", "text": ["The authors present a variant of the adversarial feature learning (AFL) approach by Edwards & Storkey. ", "AFL aims to find a data representation that allows to construct a predictive model for target variable Y, ", "and at the same time prevents to build a predictor for sensitive variable S. 
", "The key idea is to solve a minimax problem where the log-likelihood of a model predicting Y is maximized, and the log-likelihood of an adversarial model predicting S is minimized. ", "The authors suggest the use of multiple adversarial models, which can be interpreted as using an ensemble model instead of a single model.", "The way the log-likelihoods of the multiple adversarial models are aggregated does not yield a probability distribution as stated in Eq. 2. ", "While there is no requirement to have a distribution here ", "- a simple loss term is sufficient ", "- the scale of this term differs compared to calibrated log-likelihoods coming from a single adversary. ", "Hence, lambda in Eq. 3 may need to be chosen differently depending on the adversarial model. ", "Without tuning lambda for each method, the empirical experiments seem unfair. ", "This may also explain why, for example, the baseline method with one adversary effectively fails for Opp-L. ", "A better comparison would be to plot the performance of the predictor of S against the performance of Y for varying lambdas. ", "The area under this curve allows much better to compare the various methods.", "There are little theoretical contributions. ", "Basically, instead of a single adversarial model - e.g., a single-layer NN or a multi-layer NN - the authors propose to train multiple adversarial models on different views of the data. ", "An alternative interpretation is to use an ensemble learner where each learner is trained on a different (overlapping) feature set. ", "Though, there is no theoretical justification why ensemble learning is expected to better trade-off model capacity and robustness against an adversary. ", "Tuning the architecture of the single multi-layer NN adversary might be as good?", "In short, in the current experiments, the trade-off of the predictive performance and the effectiveness of obtaining anonymized representations effectively differs between the compared methods. ", "This renders the comparison unfair. ", "Given that there is also no theoretical argument why an ensemble approach is expected to perform better, ", "I recommend to reject the paper."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation"]} {"doc_id": "HyyQKdklf", "text": ["Summary: Based on ideas within the context of kernel theory, the authors consider post-training of NNs as an extra training step, which only optimizes the last layer of the network.", "This additional step makes sure that the embedding, or representation, of the data is used in the best possible way for the considered task", "(which is also reflected in the experiments).", "According to the authors, the contributions are the following: 1. Post-training step: keeping the rest of the NN frozen (after training), the method trains the last layer in order to \"make sure\" that the representation learned is used in the most efficient way.", "2. Highlighting connections with kernel techniques and RKHS optimization (like kernel ridge regression).", "3. 
Experimental results.", "Clarity: The paper is well-written, the main ideas well-clarified.", "Importance: While the majority of papers nowadays focuses on the representation part (i.e., how we get to \\Phi_{L-1}(x)),", "this paper assumes this is given and proposes how to optimize the weights in the final step of the algorithm.", "This by itself is not enough to boost the performance universally (e.g., if \\Phi_{L-1} is not well-trained, the problem is deeper than training the last layer);", "however, it proposes an additional step that can be used in most NN architectures.", "From that front (i.e., proposing to do something different than simply training a NN), I find the paper interesting, that might attract some attention at the conference.", "On the other hand, to my humble opinion, the experimental results do not show a significant gain in the performances of all networks (esp. Figure 3 and Table 1 are within the range of statistical error).", "In order to state something like this universally, either one needs to perform experiments with more than just MNIST/CIFAR datasets, or even more preferably, prove that the algorithm performs better.", "Originality: It would be great to have some more theory (if any) for the post-training step, or investigate more cases, rather than optimizing only the last layer.", "Comments: 1. I assume the authors focused in the last layer of the NN for simplicity,", "but is there a reason why one might want to focus only on the last layer?", "One reason is convexity in W of the problem (2).", "Any other?", "2. Have the authors considered (even in practice only) to include training of the last 2 layers of the NN?", "The authors state this question in the future direction,", "but it would make the paper more complete to consider it here."], "labels": ["fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "request", "evaluation", "request", "evaluation", "request", "non-arg", "fact", "request"]} {"doc_id": "r1k_ETYlM", "text": ["This article aims at understanding the role played by the different words in a sentence, taking into account their order in the sentence.", "In sentiment analysis for instance, this capacity is critical to model properly negation.", "As state-of-the-art approaches rely on LSTM,", "the authors want to understand which information comes from which gate.", "After a short remainder regarding LSTM, the authors propose a framework to disambiguate interactions between gates.", "In order to obtain an analytic formulation of the decomposition, the authors propose to linearize activation functions in the network.", "In the experiment section, authors compare themselves to a standard logistic regression (based on a bag of words representation).", "They also check the unigram sentiment scores (without context).", "The main issue consists in modeling the dynamics inside a sentence (when a negation or a 'used to be' reverses the sentiment).", "The proposed approach works fine on selected samples.", "The related work section is entirely focused on deep learning", "while the experiment section is dedicated to sentiment analysis.", "This section should be rebalanced.", "Even if the authors claim that their approach is general,", "they also show that it fits well the sentiment analysis task in particular.", "On top of that, a lot of fine-grained sentiment analysis tools has been developed outside deep-learning:", "the authors should refer to those 
works.", "Finally, authors should provide some quantitative analysis on sentiment classification:", "a lot of standard benchmarks are widely use in the literature", "and we need to see how the proposed method performs with respect to the state-of-the-art.", "Given the chosen tasks, this work should be compared to the beermind system:", "http://deepx.ucsd.edu/#/home/beermind", "and the associated publication", "http://arxiv.org/pdf/1511.03683.pdf"], "labels": ["fact", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "evaluation", "request", "request", "evaluation", "request", "request", "reference", "non-arg", "reference"]} {"doc_id": "ryv9d98lf", "text": ["This should be the first work which introduces in the causal structure into the GAN, to solve the label dependency problem. ", "The idea is interesting and insightful. ", "The proposed method is theoretically analyzed and experimentally tested. ", "Two minor concerns are 1) what is the relationship between the anti-labeler and and discriminator? ", "2) how the tune related weight of the different objective functions."], "labels": ["fact", "evaluation", "fact", "request", "request"]} {"doc_id": "Hk8Nwx9xf", "text": ["Strengths: * Very simple approach, amounting to coupled training of \"e\" identical copies of a chosen net architecture, whose predictions are fused during training. ", "This forces the different model instances to become more complementary.", "* Perhaps counterintuitively, experiments also show that coupled ensembling leads to individual nets that perform better than those produced by separate training.", "* The practical advantages of the proposed approach are twofold: 1. Given a fixed parameter budget, coupled ensembling leads to better accuracy than a single net or an ensemble of disjointly-trained nets.", "2. For the same accuracy, coupled ensembling yields significant parameter savings.", "Weaknesses: * Although results are very strong, ", "the proposed models do not outperform the state-of-the-art, except for the models reported in Table 4, ", "which however were obtained by *traditional* ensembling of coupled ensembles. ", "* Coupled ensembling requires joint training of all nets in the ensemble ", "and thus is limited by the size of the model that can be fit in memory. ", "Conversely, traditional ensembling involves separate training of the different instances ", "and this enables the learning of an arbitrary number of individual nets. ", "* I am surprised by the results in Table 2, ", "which suggest that the optimal number of nets in the ensemble is remarkably low (only 3!). ", "It'd be valuable to understand whether this kind of result holds for other network architectures or whether it is specific to this choice of net.", "* Strictly speaking it is correct to refer to the individual nets in the ensembles as \"branches\" and \"basic blocks.\" ", "Nevertheless, I find the use of these terms confusing in the context of the proposed approach, ", "since they are commonly used to denote concepts different from those represented here. ", "I would recommend refraining from using these terms here.", "Overall, the paper provides limited technical novelty. 
", "Yet, it reveals some interesting empirical findings about the benefits of coordinated training of models in an ensemble."], "labels": ["evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation"]} {"doc_id": "SJ3tICFlz", "text": ["The authors combine an ensemble of DNNs as model for the dynamics with TRPO. ", "The ensemble is used in two steps: First to collect imaginary roll-outs for TRPO and secondly to estimate convergence of the algorithm. ", "The experiments indicate superior performance over the baselines.", "The paper is well-written ", "and the experiments indicate good results. ", "However, idea of using ensembles in the context of (model-based) RL is not novel, ", "and it comes at the cost of time complexity. ", "Therefore, the method should utilize the advantage an ensemble provides to its full extent. ", "The main strength of an ensemble is to provide lower test error, but also some from of uncertainty estimate given by the spread of the predictions. ", "The authors mainly utilize the first, but to a lesser extent the second advantage (the imaginary roll-outs will utilize the spread to generate possible outcomes). ", "Ideally the exploration should also be guided by the uncertainty (such as VIME).", "Related, what where the arguments in favor of an ensemble compared to Bayesian neural networks (possibly even as simple as using MH-dropout)? ", "BNNs provide a stronger theoretical justification that the predictive uncertainty is meaningful.", "Can the authors comment on the time-complexity of the proposed methods compared to the baselines? ", "In Fig. 2 the x-axis is the time step of the real data. ", "But I assume it took a different amount of time for each method to reach step t. ", "The same argument can be made for Fig. 4. ", "It seems here that in snake the larger ensembles reach convergence the quickest, ", "but I expect this effect to be reversed when considering actual training time.", "In total I think this paper can provide a useful addition to the literature. ", "However, the proposed approach does not have strong novelty ", "and I am not fully convinced if the additional burden on time complexity outweighs the improved performance.", "Minor: In Sec. 2: \"Both of these approaches assume a fixed dataset of samples which are collected before the algorithm starts operating.\" ", "This is incorrect, ", "while these methods consider the domain of fixed datasets, ", "the algorithms themselves are not limited to this context."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "quote", "fact", "fact", "fact"]} {"doc_id": "BkL0g3a1f", "text": ["Summary: The paper focuses on the characterization of the landscape of deep neural networks; i.e., when and why local minima are global, what are the conditions for saddle critical points, etc. 
", "The paper covers a somewhat wide range of deep nets (from shallow with linear activation to deeper with non-linear activation); ", "it focuces only on feed forward neural networks.", "As the authors state, this paper provides a unifying perspective to the subject ", "(it justifies the results of others through this unifying theory, ", "but also provides new results; e.g., there are results that do not depend on assumptions on the target data matrix Y).", "Originality: The paper provides similar results to previous work, while removing some of the assumptions made in previous work. ", "In that sense, the originality of the results is weak, ", "but definitely there is some novelty in the methodology used to get to these results. ", "Thus, I would say original.", "Importance: The paper deals with the important problem of when and why training algorithms might get to global/local/saddle critical points. ", "While there are no direct connections with generalization properties, ", "characterizing the landscape of neural networks is an important topic to make further steps into better understanding of deep learning. ", "It will attract some attention at the conference.", "Clarity: The paper is well-written ", "- some parts need improvement, ", "but overall I'm satisfied with the current version.", "Comments: 1. If problem (4) is not considered at all in this paper (in its full generality that considers matrix completion and matrix sensing as special cases), then the authors could just start with the model in (5).", "2. Remark 1 has a nice example ", "- could this example be shown with Y not being the all-zeros vector?", "3. In section 5, the authors make a connection with the work of Ge et al. 2016. ", "They state that the problems in (10)-(11) constitute generalizations of the symmetric matrix completion case, considered in Ge et al. 2016. ", "However, in that work, the main difficulty of proving global optimality comes from the randomness of the sampling mask operator (which introduces the notion of incoherence and requires results in expectation). ", "It is not clear, and maybe it is an overstatement, that the results in section 5 generalize that work. 
", "If that is the case, could the authors describe this a bit further?"], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "fact", "fact", "fact", "evaluation", "request"]} {"doc_id": "ByL47G5lM", "text": ["The paper proposed a new regularization approach that simultaneously encourages the weight vectors (W) to be sparse and orthogonal to each other.", "The argument is that the sparsity helps to eliminate the irrelevant feature vectors by making the corresponding weights zero.", "Nearly orthogonal sparse vectors will have zeros at different indexes", "and hence, encourages the weight vectors to have small overlap in terms of indices of nonzero entries (called support).", "Small overlap in support of weight vectors, aids interpretability", "as each weight vector is associated with a unique subset of feature vectors.", "For example, in the topic model, small overlap encourages, each topic to have unique set of representation words.", "The proposed approach used L1 regularizer for enforcing sparsity in W.", "For enforcing orthogonality between different weight vectors (wi, wj), the log-determinant divergence (LDD) regularization term encourages the Gram Matrix G (Gij = wiTwj) to be close to an identity matrix I.", "The authors applied and tested the performance of proposed approach on Neural Network and Sparse Coding (SC) machine learning models.", "The authors validated the need for their proposed regularizer through experiments on 4 datasets (3 text and 1 images).", "Major * The novelty of the paper is not clear.", "Neither L1 no logdet() are novel regularizers (see the literature of Determinatal Point Process).", "With the presence of the auto-differentiator, one cannot claim the making derivative a novelty.", "* L1 is also encourages diversity although as explicit as logdet.", "This is also obvious from Fig 2.", "Perhaps the advantage of diversity is in interpretability", "but that is hard to quantify and the authors did not put enough effort to do that;", "we only have small anecdotal results in section 4.3.", "* The Table 1 is not convincing", "because one can argue, for example, gun (vec 1) and weapon (vec 4) are colinear.", "* In section 4.2, the authors experimented with SC on text dataset.", "The overlap score decreases as the strength of regularization increases.", "The authors didn\u2019t show the effect of increasing the regularization strength on the model accuracy and convergence time.", "This analysis is important to make sure, the decrease in overlap score is not coming at the expense of model accuracy and performance.", "* In section 4.4, increase in test set accuracy and difference between test and train set accuracy is used to validate the claim, that the proposed regularizer helps reducing over fitting.", "In Table-2, , the test accuracy increases between SC and LDD-L1 SC", "while the train accuracy remains almost the same.", "Also, the authors didn\u2019t do any cross validation to support their claim.", "The difference is numbers is too small to support the claim.", "* In section on LSTM for Language Modeling, the perplexity score of LDD-L1 regularization on PytorchLM received perplexity score of 1.2 lower than without regularization.", "Although, the author mentions it as a significant reduction,", "the lowest perplexity score in Table 3 is significantly lower than 
this result.", "It\u2019s not clear how 1.2 reduction in perplexity is significant and why the method should be preferred while much better models already exists.", "* Results of the best perplexity model, Neural Architecture Search + WT V2, with proposed regularization would also help, validating the generalizability claims of the new approach.", "* In CNN for Image Classification section, details of increase interpretability of the model, in terms of classification decision, is missing.", "* In Table-4, the proposed LDD-L1 WideResNet is not the best results.", "Results of adding the proposed regularization, to the best know method (Pyramid Sep Drop) would be interesting.", "* The proposed regularization claims to provide more interpretable representation and less overfit model.", "The given experiments are inadequate to validate the claims.", "* A more extensive experimentation is required to validate the applicability of the method.", "* In SC, aj are the linear coefficients or the coefficient vector of the j-th sample.", "If A \u2208 Rm\u00d7n then aj \u2208 Rm\u00d71 and j ranges between [1,n] as in equation 6.", "The notation in section 2.2, Sparse Coding section is misleading", "as j ranges between [1,m].", "* In Related works, the authors mention previous work done on interpreting the results of the machine learning models.", "Related works on enhancing interpretability and reducing overfitting by using regularization is missing."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "fact", "fact", "request", "fact", "evaluation", "request", "fact", "fact", "evaluation", "fact", "fact", "fact"]} {"doc_id": "SkF57lqez", "text": ["The authors propose a new regularization term modifying the VAE (Kingma et al 2013) objective to encourage learning disentangling representations.", "Specifically, the authors suggest to add penalization to ELBO in the form of -KL(q(z)||p(z)) , which encourages a more global criterion than the local ELBOs.", "In practice, the authors decide that the objective they want to optimize is unwieldy", "and resort to moment matching of covariances of q(z) and p(z) via gradient descent.", "The final objective uses a persistent estimate of the covariance matrix of q and upgrades it at each mini-batch to perform learning.", "The authors use this objective function to perform experiments measuring disentanglement", "and find minor benefits compared to other objectives in quantitative terms.", "Comments: 1. The originally proposed modification in Equation (4) appears to be rigorous", "and as far as I can tell still poses a lower bound to log(p(x)).", "The proof could use the result posed earlier: KL(q(z)||p(z)) is smaller than E_x KL(q(z|x)||p(z|x)).", "2. The proposed moment matching scheme performing decorrelation resembles approaches for variational PCA and especially independent component analysis.", "The relationship to these techniques is not discussed adequately.", "In addition, this paper could really benefit from an empirical figure of the marginal statistics of z under the different regularizers in order to establish what type of structure is being imposed here and what it results in.", "3. 
The resulting regularizer with the decorrelation terms could be studied as a modeling choice.", "In the probabilistic sense, regularizers can be seen as structural and prior assumptions on variables.", "As it stands, it is unnecessarily vague which assumptions this extra regularizer is making on variables.", "4. Why is using the objective in Equation (4) not tried and tested and compared to?", "It could be thought that subsampling would be enough to evaluate this extra KL term without any need for additional variational parameters \\psi.", "The reason for switching to the moment matching scheme seems not well motivated here without showing explicitly that Eq (4) has problems.", "5. The model seems to be making on minor progress in its stated goal, disentanglement.", "It would be more convincing to clarify the structural properties of this regularizer in a statistical sense more clearly given that experimentally it seems to only have a minor effect.", "6. Is there a relationship to NICE (Laurent Dinh et al)?", "7. The infogan is also an obvious point of reference and comparison here.", "8. The authors claim that there are no models which can combine GANs with inference in a satisfactory way,", "which is obviously not accurate nowadays given the progress on literature combining GANs and variational inference.", "All in all I find this paper interesting", "but would hope that a more careful technical justification and derivation of the model would be presented given that it seems to not be an empirically overwhelming change."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "request", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "non-arg", "request", "fact", "evaluation", "evaluation", "request"]} {"doc_id": "rJFaZDqgz", "text": ["This is a well-written paper with sound experiments. ", "However, the research outcome is not very surprising. ", "- Only macro-average F-scores are reported. ", "Please present micro-average scores as well.", "- The detailed procedure of relation extraction should be described. ", "How do you use entity type information? ", "(Probably, you did not use entity types.)", "- Table 3: The SotA score of EU-ATR target-disease (i.e. 84.6) should be in bold face.", "- Section 5.3: Your system scorers in Table 3 are not consistent with Table 2 scores. ", "- Page 8. \"Our approach outperforms ...\" ", "The improvement is clear only for SNPPhenA and EU-ADR durg-disease.", "Minor comments: - TreeLSTM --> Tree-LSTM", "- Page 7. connexion --> connection", "- Page 8. four EU-ADR subtasks --> three ...", " - I suggest to conduct transfer learning studies in the similar settings."], "labels": ["evaluation", "evaluation", "fact", "request", "request", "request", "evaluation", "request", "evaluation", "quote", "fact", "request", "request", "request", "request"]} {"doc_id": "HyLL47JGf", "text": ["Overall strength: In this paper, the authors proposed target-aware memory networks to model sentiment interactions between target aspects and the context words with attentions. ", "This work has a well-established motivation: ", "traditional attention for target-dependent sentiment classification cannot model the interaction between target term and context words when making predictions. ", "To solve this problem, the authors proposed five formulations in the final prediction layer. 
", "The illustration about the problem is clear, as well as the explanation for the formulations.", "Major concerns: 1.\tThis work brings some modifications to the prediction layer, ", "which is a bit trivial. ", "Although the effect has been shown, ", "the model is too specific to a narrow area, and is not general to be applied in a broad sense. ", "It could have more contribution if the authors model the interactions within the attention model itself, instead of a simple prediction layer, which is problem-dependent.", "2.\tThe experiments are insufficient to show the effectiveness. ", "It would be better to provide some statistics showing how the target-context interaction model outperforms the traditional ones in the special cases like the one shown in Table 4. ", "Only two examples are not convincing.", "3.\tIn section 3, the authors claimed that (5) models the target and context independently. ", "However, in section 4, in (7), the authors claimed the target vector v_t will affect the context shifting their representation to c\u2019_i. ", "This should also work for (5). ", "4.\tThere are too many typos in the paper, e.g., \\alpha is replaced by a, etc.", "Other concerns: 1.\tIt seems that one needs to train at least three embedding matrices: A, C, D which represent input embeddings, output embeddings, and interactive embeddings, respectively. ", "I wonder if this brings redundant parameters that do not guarantee convergence. ", "Why not use one matrix instead? ", "Did the authors try experiments with less embedding matrices?", "2.\tThere is another work that also considers the target-context interaction using interactive attention model. ", "Please refer to this paper \u201cInteractive Attention Networks for Aspect-Level Sentiment Classification\u201d. ", "A comparison is needed.", "3.\tIt is better to provide results in terms of accuracy for both datasets, ", "as previous methods usually use accuracy for comparison. ", "How\u2019s the score of the proposed model compared with the above paper as well as [Tang et al. 2016]?"], "labels": ["fact", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "request", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "request", "fact", "request", "request", "request", "fact", "request"]} {"doc_id": "H1EMeWfgz", "text": ["This paper is in some sense a \"position paper,\" ", "giving a framework for thinking about the loss functions implicitly used by the generator of GAN-type models. ", "It advocates thinking about the loss in a way similar to how it is considered in structured prediction. ", "It also proposes that approximating the dual formulation of various divergences with functions from a parametric class, as is typically done in GAN-type setups, is not only more tractable (computationally and in sample complexity) than the full nonparametric estimation, but also gives a better actual loss.", "Overall, I like the argument here, and think that it is a useful framework for thinking about these things. ", "My main concern is that the practical contribution on top of Liu et al. (2017) might be somewhat limited.", "A few small points: - f-divergences can actually be nonparametrically estimated purely from samples, e.g. with the k-nearest neighbor estimator of https://arxiv.org/abs/1411.2045, or (for certain f-divergences) the kernel density based estimator of https://arxiv.org/abs/1402.2966. 
", "These are unlikely to lead to a practical learning algorithm, ", "but could be mentioned in Table 1.", "- The discussion of MMD in the end of section 3.1 is a little off. ", "MMD is fundamentally defined by the kernel choice; ", "Dziugaite et al. (2015) only demonstrated that the Gaussian RBF kernel is a poor choice for MNIST modeling, ", "while the samples of Li et al. (2015) simply by using a mixture of Gaussian kernels were much better. ", "No reasonable fixed kernel is likely to yield good results on a harder image modeling problem, ", "but that is a slightly different message than the one this paragraph conveys.", "- It would be interesting to replicate the analysis of Danihelka et al. (2017) on the Thin-8 dataset. ", "This might help clarify which of the undesirable effects observed in the VAE model here are due to likelihood, and which due to other aspects of VAEs (like the use of the lower bound)."], "labels": ["evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation"]} {"doc_id": "S1_Zyk9xG", "text": ["Neal (1994) showed that a one hidden layer Bayesian neural network, under certain conditions, converges to a Gaussian process as the number of hidden units approaches infinity. ", "Neal (1994) and Williams (1997) derive the resulting kernel functions for such Gaussian processes when the neural networks have certain transfer functions.", "Similarly, the authors show an analogous result for deep neural networks with multiple hidden layers and an infinite number of hidden units per layer, and show the form of the resulting kernel functions. ", "For certain transfer functions, the authors perform a numerical integration to compute the resulting kernels. ", "They perform experiments on MNIST and CIFAR-10, doing classification by scaled regression. ", "Overall, the work is an interesting read, and a nice follow-up to Neal\u2019s earlier observations about 1 hidden layer neural networks. ", "It combines several insights into a nice narrative about infinite Bayesian deep networks. ", "However, the practical utility, significance, and novelty of this work -- in its current form -- are questionable, ", "and the related work sections, analysis, and experiments should be significantly extended. ", "In detail:(1) This paper misses some obvious connections and references, such as ", "* Krauth et. al (2017): \u201cExploring the capabilities and limitations of Gaussian process models\u201d for recursive kernels with GPs.", "* Hazzan & Jakkola (2015): \u201cSteps Toward Deep Kernel Methods from Infinite Neural Networks\u201d for GPs corresponding to NNs with more than one hidden layer.", "* The growing body of work on deep kernel learning, which \u201ccombines the inductive biases and representation learning abilities of deep neural networks with the non-parametric flexibility of Gaussian processes\u201d. ", "E.g.: (i) \u201cDeep Kernel Learning\u201d (AISTATS 2016); ", "(ii) \u201cStochastic Variational Deep Kernel Learning\u201d (NIPS 2016); ", "(iii) \u201cLearning Scalable Deep Kernels with Recurrent Structure\u201d (JMLR 2017). ", "These works should be discussed in the text.", "(2) Moreover, as the authors rightly point out, covariance functions of the form used in (4) have already been proposed. 
", "It seems the novelty here is mainly the empirical exploration (will return to this later), and numerical integration for various activation functions. ", "That is perfectly fine ", "-- and this work is still valuable. ", "However, the statement \u201crecently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework\u201d is incorrect. ", "For example, Hazzan & Jakkola (2015) in \u201cSteps Toward Deep Kernel Methods from Infinite Neural Networks\u201d consider GP constructions with more than one hidden layer. ", "Thus the novelty of this aspect of the paper is overstated. ", "See also comment [*] later on the presentation. ", "In any case, the derivation for computing the covariance function (4) of a multi-layer network is a very simple reapplication of the procedure in Neal (1994). ", "What is less trivial is estimating (4) for various activations, ", "and that seems to the major methodological contribution. ", "Also note that multidimensional CLT here is glossed over. ", "It\u2019s actually really unclear whether the final limit will converge to a multidimensional Gaussian with that kernel without stronger conditions. ", "This derivation should be treated more thoroughly and carefully.", "(3) Most importantly, in this derivation, we see that the kernels lose the interesting representations that come from depth in deep neural networks. ", "Indeed, Neal himself says that in the multi-output settings, all the outputs become uncorrelated. ", "Multi-layer representations are mostly interesting ", "because each layer shares hidden basis functions. ", "Here, the sharing is essentially meaningless, ", "because the variance of the weights in this derivation shrinks to zero. ", "In Neal\u2019s case, the method was explored for single output regression, ", "where the fact that we lose this sharing of basis functions may not be so restrictive. ", "However, these assumptions are very constraining for multi-output classification and also interesting multi-output regressions.", "[*]: Generally, in reading the abstract and introduction, we get the impression that this work somehow allows us to use really deep and infinitely wide neural networks as Gaussian processes, and even without the pain of training these networks. ", "\u201cDeep neural networks without training deep networks\u201d. ", "This is not an accurate portrayal. ", "The very title \u201cDeep neural networks as Gaussian processes\u201d is misleading, ", "since it\u2019s not really the deep neural networks that we know and love. ", "In fact, you lose valuable structure when you take these limits, ", "and what you get is very different than a standard deep neural network. ", "In this sense, the presentation should be re-worked.", "(4) Moreover, neural networks are mostly interesting because they learn the representation. ", "To do something similar with GPs, we would need to learn the kernel. ", "But here, essentially no kernel learning is happening. ", "The kernel is fixed. ", "(5) Given the above considerations, there is great importance in understanding the practical utility of the proposed approach through a detailed empirical evaluation. ", "In other words, how structured is this prior and does it really give us some of the interesting properties of deep neural networks, or is it mostly a cute mathematical trick? 
", "Unfortunately, the empirical evaluation is very preliminary, and provides no reassurance that this approach will have any practical relevance:", "(i) Directly performing regression on classification problems is very heuristic and unnecessary.", "(ii) Given the loss of dependence between neurons in this approach, it makes sense to first explore this method on single output regression, where we will likely get the best idea of its useful properties and advantages. ", "(iii) The results on CIFAR10 are very poor. ", "We don\u2019t need to see SOTA performance to get some useful insights in comparing for example parametric vs non-parametric, ", "but 40% more error than SOTA makes it very hard to say whether any of the observed patterns hold weight for more competitive architectural choices. ", "A few more minor comments:(i) How are you training a GP exactly on 50k training points? ", "Even storing a 50k x 50k matrix requires about 20GB of RAM. ", "Even with the best hardware, computing the marginal likelihood dozens of times to learn hyperparameters would be near impossible. ", "What are the runtimes?", "(ii) \"One benefit in using the GP is due to its Bayesian nature, so that predictions have uncertainty estimates (Equation (9)).\u201d ", "The main benefit of the GP is not the uncertainty in the predictions, but the marginal likelihood which is useful for kernel learning."], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "evaluation", "reference", "reference", "reference", "reference", "reference", "reference", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "quote", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "quote", "fact"]} {"doc_id": "rkvVN7kGf", "text": ["This paper describes how to use a set of sentences in the source language mapped to another set of sentences in the target sentences instead of using single sentence to sentence samples. ", "The paper claims superior results using the described method.", "Overall, there are a few problems with the paper. ", "1) The arguments for using clusters instead of single sentences are questionable. ", "The paper claims several times that MLE training for NMT faces over-training (or data sparsity) ", "-- while that can be true depending on the corpus and model used, there are well-known remedies for that, for example regularization via dropout (almost everybody uses that). ", "It is not clear why that is not used or at least compared to the method presented. ", "2) The writing of the paper is often unclear (and sometimes grammatically wrong, typos etc. but that aside), ", "there are some made up words/concepts (What is 'Golden Centroid Augmentation\" or \"Model Centroid Augmentation\"? ", "The reason for attention is not to better memorize input information, ", "it is to be able to attend to certain regions in the input. ", "The reason to use RL is to focus on optimizing directly for BLEU score or other metrics instead of likelihood ", "but not for improving on the train/test loss discrepancy. 
", "There are lots more examples of unclear statements in this paper ", "-- it should be heavily improved. ", "3) Section 3 and 4 are very hard/impossible to understand, ", "it is not clear how the formulas help the reader to better understand the concept in any way. ", "5) The results presented in this paper given the complexity of the method are just not great ", "-- for example, WMT en-de is 21.3 BLEU reported by you while much older papers report for example 24.67 BLEU (Google's Neural Machine Translation System) ", "-- why not first try to get to state-of-the-art with already published methods and then try to improve on top of that? . ", "6) Finally, what is missing most is simply why a much simpler method (just generate some data using a trained system and use that as additional training data, with details on how much etc.) -- is not directly compared to this very complicated looking method."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "request"]} {"doc_id": "BynNGX9eG", "text": ["This paper collects a cloze-style fill-in-the-missing-word dataset constructed manually by English teachers to test English proficiency.", "Experiments are given which are claimed to show that this dataset is difficult for machines relative to human performance.", "The dataset seems interesting", "but I find the empirical evaluations unconvincing.", "The models used to evaluate machine difficulty are basic language models.", "The problems are multiple choice with at most four choices per question.", "This allows multiple choice reading comprehension architectures to be used.", "A window of words around the blank could be used as the \"question\".", "A simple reading comprehension baseline is to encode the question (a window around the blank) and use the question vector to compute an attention over the passage.", "One can then compute a question-specific representation of the passage and score each candidate answer by the inner product of the question-specific sentence representation and the vector representation of the candidate answer.", "See \"A thorough examination of the CNN/Daily Mail reading comprehension task\" by Chen, Bolton and Manning."], "labels": ["fact", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "fact", "reference"]} {"doc_id": "BJovaI9gf", "text": ["This is an interesting paper that builds a parameterized network to select actions for a robot in a simulated environment, with the objective of quickly reaching an internal belief state that is predictive of the true state. ", "This is an interesting idea ", "and it works much better than I would have expected. ", "In more careful examination it is clear that the authors have done a good job of designing a network that is partly pre-specified and partly free, in a way that makes the learning effective. ", "In particular- the transition model is known and fixed (in the way it is used in the belief update process)", "- the belief state representation is known and fixed (in the way it is used to decide whether the agent should be rewarded)", "- the reward function is known and fixed (as above)", "- the mechanics of belief update", "But we learn - the observation model", "- the control policy", "I'm not sure that global localization is still an open problem with known models. 
", "Or, at least, it's not one of our worst.", "Early work by Cassandra, Kurien, et al used POMDP models and solvers for active localization with known transition and observation models. ", "It was computationally slow but effective.", "Similarly, although the online speed of your learned method is much better than for active Markov localization, the offline training cost is dramatically higher; ", "it's important to remember to be clear on this point.", "It is not obvious to me that it is sensible to take the cosine similarity between the feature representation of the observation and the feature representation of the state to get the entry in the likelihood map. ", "It would be good to make it clear this is the right measure.", "How is exploration done during the RL phase? ", "These domains are still not huge.", "Please explain in more detail what the memory images are doing.", "In general, the experiments seem to be well designed and well carried out, with several interesting extensions.", "I have one more major concern: it is not the job of a localizer to arrive at a belief state with high probability mass on the true state", "---it is the job of a localizer to have an accurate approximation of the true posterior under the prior and observations. ", "There are situations (in which, for example, the robot has gotten an unusual string of observations) in which it is correct for the robot to have more probability mass on a \"wrong\" state. ", "Or, it seems that this model may earn rewards for learning to make its beliefs overconfident. ", "It would be very interesting to see if you could find an objective that would actually cause the model to learn to compute the appropriate posterior.", "In the end, I have trouble making a recommendation:", "Con: I'm not convinced that an end-to-end approach to this problem is the best one", "Pro: It's actually a nice idea that seems to have worked out well", "Con: I remain concerned that the objective is not the right one", "My rating would really be something like 6.5 if that were possible."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg"]} {"doc_id": "S1skfxRxM", "text": ["SUMMARY The paper considers the problem of using cycle GANs to decipher text encrypted with historical ciphers.", "Also it presents some theory to address the problem that discriminating between the discrete data and continuous prediction is too simple.", "The model proposed is a variant of the cycle GAN in which in addition embeddings helping the Generator are learned for all the values of the discrete variables.", "The log loss of the GAN is replaced by a quadratic loss and a regularization of the Jacobian of the discriminator.", "Experiments show that the method is very effective.", "REVIEW The paper considers an interesting and fairly original problem", "and the overall discussion of ciphers is quite nice.", "Unfortunately, my understanding is that the theory proposed in section 2 does not correspond to the scheme used in the experiments", "(contrarily to what the conclusion suggest and contrarily to what the discussion of the end of section 3, which says that using embedding is assumed to have an equivalent effect to using the methodology considered in the theoretical part).", 
"Another important concern is with the proof: there seems to be an unmotivated additional assumption that appears in the middle of the proof of Proposition 1", "+ some steps need to be clarified (see comment 16 below).", "The experiments do not have any simple baseline, which is somewhat unfortunate.", "DETAILED COMMENTS: 1- The paper makes a few bold and debatable statements:", "line 9 of section 1 \"Such hand-crafted features have fallen out of favor (Goodfellow et al., 2016) as a result of their demonstrated inferiority to features learned directly from data in end-to-end learning frameworks such as neural networks\"", "This is certainly an overstatement", "and although it might be true for specific types of inputs it is not universally true,", "most deep architectures rely on a human-in-the-loop", "and there are number of areas where human crafted feature are arguably still relevant, if only to specify what is the input of a deep network:", "there are many domains where the notion of raw data does not make sense, and, when it does, it is usually associated with a sensing device that has been designed by a human and which implicitly imposes what the data is based on human expertise.", "2- In the last paragraph of the introduction, the paper says that previous work has only worked on vocabularies of 26 characters while the current paper tackles word level ciphers with 200 words.", "But, isn't this just a matter of scalability and only possible with very large amounts of text?", "Is it really because of an intrinsic limitation or lack of scalability of previous approaches or just because the authors of the corresponding papers did not care to present larger scale experiments?", "3- The discussion at the top of page 5 is difficult to follow.", "What do you mean when you say \"this motivates the benefits of having strong curvature globally, as opposed to linearly between etc\"", "Which curvature are we talking about?", "and what how does the \"as opposed to linearly\" mean?", "Should we understand \"as opposed to having curvature linearly interpolated between etc\" or \"as opposed to having a linear function\"?", "Please clarify.", "4- In the same paragraph: what does \"a region that has not seen the Jacobian norm applied to it\" mean?", "How is a norm applied to a region?", "I guess that what you mean is that the generator G might creates samples in a part of the space where the function F has not yet been learned and is essentially close to 0.", "Is this what you mean?", "5- I do not understand why the paper introduces WGAN", "since in the end it does not use them but uses a quadratic loss, introduced in the first display of section 4.3.", "6- The paper makes a theoretical contribution which supports replacing the sample y by a sample drawn from a region around y.", "But it seems that this is not used in the experiment", "and that the authors consider that the introduction of the embedding is a substitution for this.", "Indeed, in the last paragraph of section 3.1, the paper says \"we make the assumption that the training of the embedding vectors approximates random sampling similar to what is described in Proposition 1\".", "This does not make any sense to me", "because the embedding vectors map each y deterministically to a single point,", "and so the distribution on the corresponding vectors is still a fixed discrete distribution.", "This gives me this impression that the proposed theory does not match what is used in the experiments.", "(The last sentence of section 3.1, which is 
commenting on this and could perhaps clarify the situation is ill-formed with two verbs.)", "7- In the definitions: \"A discriminator is said to perform uninformative discrimination\" etc.", "-> It seems that the choice of the word uninformative would be misleading:", "an uninformative discrimination would be a discrimination that completely fails, while what the condition is saying is that it cannot perform perfect discrimination.", "I would thus suggest calling this \"imperfect discrimination\".", "8- It seems that the same embedding is used in X space and in Y space (from equations 6 and 7).", "Is there any reason for that?", "It would seem more natural to me to introduce two different embeddings", "since the objects are a priori different...", "Actually I don't understand how the embeddings can be the same in the Vigenere code case", "since time is taken into account on one side.", "9- On the 5th line after equation (7), the paper says \"the embeddings... are trained to minimize L_GAN and L_cyc, meaning... and are easy to discriminate\"", "-> This last part of the sentence seems wrong to me.", "The discriminator is trying to maximize L_GAN", "and so minimizing w.r.t. the embedding is precisely trying to prevent the discriminator from telling apart too easily the true elements from the estimated ones.", "In fact the regularization of the Jacobian, which prevents the discriminator from varying too quickly in space, is more likely to explain the fact that the discrimination is not too easy between the true and mapped embeddings.", "This might be connected to the discussion at the top of page 5.", "Since there are no experiments with alpha different from the default value of 10,", "this is difficult to assess.", "10- The Vigenere cipher is explained again at the end of section 4.2 when it has already been presented in section 1.1", "11- Concerning results in Table 2: I do not see why it would not be possible to compare the performance of the method with classical frequency analysis, at least for the character case.", "12- At the beginning of section 4.3, the text says that the log loss was replaced with the quadratic loss, but without giving any reason.", "Could you explain why?", "13- The only comparison of results with and without embeddings is presented in the curves of figure 3, for Brown-W with a vocabulary of 200 words.", "In that case it helps.", "Could the authors systematically report results for all cases?", "(I guess this might however be the only hard case...)", "14- It would be useful to have a brief reminder of the architecture of the neural network", "(right now the reader is just referred to Zhu et al., 2017):", "how many layers, how many convolution layers etc.", "The same comment applies to the way the position of the letter/word in the text is encoded in a feature that is provided as input to the neural network: it would be nice if the paper could provide a few details here and be more self-contained.", "(The fact that the engineering of the time feature can \"dramatically\" improve the performance of the network should be an argument to convince the authors that hand-crafted features have not fallen out of favor completely yet...)", "15- I disagree with the statement made in the conclusion that the proposed work \"empirically confirms [...] that the use of continuous relaxation of discrete variable facilitates [...] 
and prevents [...]\"", "because for me the proposed implementation does not use at all the theoretical idea of continuous relaxation proposed in the paper, unless there is a major point that I am missing.", "16- I have two issues with the proof in the appendix", "a) after the first display of the last page the paper makes an additional assumption which is not announced in the statement of the theorem, which is that two specific inequality hold...", "Unless I am mistaken this assumption is never proven (later or earlier).", "Given that this inequality is just \"the right inequality to get the proof go through\"", "and given that there are no explanation for why this assumption is reasonable, to me this invalidates the proof.", "The step of going from G(S_y) to S_(G(y)) seems delicate...", "b) If we accept these inequalities, the determinant of the Jacobian (the notation is not defined) of F at (x_bar) disappears from the equations, as if it could be assumed to be greater than one.", "If this is indeed the case, please provide a justification of this step.", "17- A way to address the issue of trivial discrimination in GANs with discrete data has been proposed in Luc, P., Couprie, C., Chintala, S., & Verbeek, J. (2016). Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408.", "The authors should probably reference this paper.", "18- Clarification of the Jacobian regularization: in equation (3), the Jacobian computed seems to be w.r.t D composed with F", "while in equation (8) it is only the Jacobian of D.", "Which equation is the correct one?", "TYPOS: Proposition 1: the if-then statement is broken into two sentences separated by a full point and a carriage return.", "sec. 4.3 line 10 we use a cycle loss *with a regularization coefficient* lambda=1 (a piece of the sentence is missing)", "sec. 4.3 lines 12-13 the learning rates given are the same at startup and after \"warming up\"...", "In the appendix: 3rd line of proof of prop 1: I don' understand \"countably infinite finite sequences of vectors lying in the vertices of the simplex\"", "-> what is countable infinite here?", "The vertices?"], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "quote", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "quote", "evaluation", "fact", "fact", "evaluation", "evaluation", "quote", "evaluation", "fact", "request", "fact", "request", "evaluation", "fact", "evaluation", "fact", "quote", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "request", "fact", "request", "fact", "evaluation", "request", "evaluation", "request", "fact", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "request", "fact", "fact", "request", "fact", "fact", "fact", "evaluation", "request", "request"]} {"doc_id": "H15HuyMlz", "text": ["This paper proposes an importance-weighted estimator of the MMD, in order to estimate the MMD between distributions based on samples biased according to a known scheme. ", "It then discusses how to estimate the scheme when it is unknown, and further proposes using it in either the MMD-based generative models of Y. 
Li et al. (2015) / Dziugaite et al. (2015), or in the MMD GAN of C.-L. Li et al. (2017).", "The estimator itself is natural (and relatively obvious), ", "though it has some drawbacks that aren't fully discussed (below).", "The application to GAN-type learning is reasonable, and topical. ", "The first, univariate, experiment shows that the scheme is at least plausible. ", "But the second experiment, involving a simple T ratio based on whether an MNIST digit is a 0 or a 1, doesn't even really work! ", "(The best model only gets the underrepresented class from 20% up to less than 40%, rather than the desired 50%, and the \"more realistic\" setting only to 33%.)", "It would be helpful to debug whether this is due to the classifier being incorrect, estimator inaccuracies, or what. ", "In particular, I would try using T based on a pretrained convnet independent of the autoencoder representation in the MMD GAN, to help diagnose where the failure mode comes from.", "Without at least a working should-be-easy example like this, and with the rest of the paper's technical contribution so small, I just don't think this paper is ready for ICLR.", "It's also worth noting that the equivalent algorithm for either vanilla GANs or Wasserstein GANs would be equally obvious.", "Estimator: In the discussion about (2): where does the 1/m bias come from? ", "This doesn't seem to be in Robert and Casella section 3.3.2, which is the part of the book that I assume you're referring to ", "(incidentally, you should specify that rather than just citing a 600-page textbook).", "Moreover, it is worth noting that Robert and Cassela emphasize that if E[1 / \\tilde T] is infinite, the importance sampling estimator can be quite bad (for example, the estimator may have infinite variance). ", "This happens when \\tilde T puts mass in a neighborhood around 0, i.e. when the thinned distribution doesn't have support at any place that P does. ", "In the biased-observations case, this is in some sense unsurprising: ", "if we don't see *any* data in a particular class of inputs, then our estimates can be quite bad ", "(since we know nothing about a group of inputs that might strongly affect the results). ", "In the modulating case, the equivalent situation is when F(x) lacks a mean, ", "which seems less likely. ", "Thus although this is probably not a huge problem for your case, ", "it's worth at least mentioning. ", "(See also the following relevant blog posts: ", "https://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/ ", "and https://xianblog.wordpress.com/2012/03/12/is-vs-self-normalised-is/ .)", "The paper might be improved by stating (and proving) a theorem with expressions for the rate of convergence of the estimator, and how they depend on T.", "Minor: Another piece of somewhat-related work is Xiong and Schneider, Learning from Point Sets with Observational Bias, UAI 2014.", "Sutherland et al. 2016 and 2017, often referenced in the same block of citations, are the same paper.", "On page 3, above (1): \"Since we have projected the distributons into an infinite-dimensional space, the distance between the two distributions is zero if and only if all their moments are the same.\" ", "An infinite-dimensional space isn't enough; ", "the kernel must further be characteristic, as you mention. ", "See e.g. Sriperumbuder et al. 
(AISTATS 2010) for more details.", "Figure 1(b) seems to be plotting only the first term of \\tilde T, without the + 0.5."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "evaluation", "evaluation", "request", "fact", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "reference", "reference", "request", "evaluation", "fact", "quote", "fact", "fact", "reference", "evaluation"]} {"doc_id": "rke8ggtxG", "text": ["This paper discusses using neural networks for super-resolution.", "The positive aspects of this work is that the use of two neural networks in tandem for this task may be interesting,", "and the authors attempt to discuss the network's behavior by drawing relations to successful sparsity-based super-resolution.", "Unfortunately I cannot see any novelty in the relationship the authors draw to LASSO style super-resolution and dictionary learning beyond what is already in the literature (see references below), including in one reference that the authors cite.", "In addition, there are a number of sloppy mistakes (e.g. Equation 10 as a clear copy-paste error) in the manuscript.", "Given that much of the main result seems to already be known,", "I feel that this work is not novel enough at this time.", "Some other minor points for the authors to consider for future iterations of this work: - The authors mention the computational burden of solving L1-regularized optimizations.", "A lat of work has been done to create fast, efficient solvers in many settings (e.g. homotopy, message passing etc.).", "Are these methods still insufficient in some applications?", "If so, which applications of interest are the authors considering?", "- In figure 1, it seems that under \"superresolution problem\": 'f' should be 'High res data' and 'g' should be 'Low res data' instead of what is there.", "I'm also not sure how this figure adds to the information already in the text.", "- In the results, the authors mention how some network features represented by certain neurons resemble the training data.", "This seems like over-training and not a good quality for generalization.", "The authors should clarify if, and why, this might be a good thing for their application.", "- Overall a heavy editing pass is needed to fix a number of typos throughout.", "References: [1] K. Gregor and Y. LeCun , \u201cLearning fast approximations of sparse coding,\u201d in Proc. Int. Conf. Mach. Learn., 2010, pp. 399\u2013406.", "[2] P. Sprechmann, P. Bronstein, and G. Sapiro, \u201cLearning efficient structured sparse models,\u201d in Proc. Int. Conf. Mach. Learn., 2012, pp. 615\u2013622.", "[3] M. Borgerding, P. Schniter, and S. Rangan, ``AMP-Inspired Deep Networks for Sparse Linear Inverse Problems [pdf] [arxiv],\" IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4293-4308, Aug. 2017.", "[4] V. Papyan*, Y. Romano* and M. Elad, Convolutional Neural Networks Analyzed via Convolutional Sparse Coding, accepted to Journal of Machine Learning Research, 2016."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "request", "request", "request", "evaluation", "fact", "evaluation", "request", "request", "reference", "reference", "reference", "reference"]} {"doc_id": "r1Ps9aPez", "text": ["This paper discusses a text-to-speech system which is based on a convolutional attentive seq2seq architecture. 
", "It covers experiments on a few datasets, testing the model's ability to handle increasing numbers of speakers.", "By and large, this is a \"system\" paper ", "- it mostly describes the successful application of many different existing ideas to an important problem (with some exceptions, e.g. the novel method of enforcing monotonic alignments during inference). ", "In this type of paper, I typically am most interested in hearing about *why* a particular design choice was made, what alternatives were tried, and how different ideas worked. ", "This paper is lacking in this regard ", "- I frequently was left looking for more insight into the particular system that was designed. ", "Beyond that, I think more detailed description of the system would be necessary in order to reimplement it suitably ", "(another important potential takeaway for a \"system\" paper). ", "Separately, I the thousands-of-speakers results are just not that impressive ", "- a MOS of 2 is not really useable in the real-world. ", "For that reason, I think it's a bit disingenuous to sell this system as \"2000-Speaker Neural Text-to-Speech\".", "For the above reasons, I'm giving the paper a \"marginally above\" rating. ", "If the authors provide improved insight, discussion of system specifics, and experiments, I'd be open to raising my review. ", "Below, I give some specific questions and suggestions that could be addressed in future drafts.", "- It might be worth giving a sentence or two defining the TTS problem ", "- the paper is written assuming background knowledge about the problem setting, including different possible input sources, what a vocoder is, etc. ", "The ICLR community at large may not have this domain-specific knowledge.", "- Why \"softsign\" and not tanh? ", "Seems like an unusual choice.", "- What do the \"c\" and \"2c\" in Figure 2a denote?", "- Why scale (h_k + h_e) by \\sqrt{0.5} when computing the attention value vectors?", "- \"An L1 loss is computed using the output spectrograms\" ", "I assume you mean the predicted and target spectrograms are compared via an L1 loss. ", "Why L1?", "- In Vaswani et al., it was shown that a learned positional encoding worked about as well as the sinusoidal position encodings despite being potentially more flexible/less \"hand-designed\" for machine translation. ", "Did you also try this for TTS? ", "Any insight?", "- Some questions about monotonic attention: Did you use the training-time \"soft\" monotonic attention algorithm from Raffel et al. during training and inference, or did you use the \"hard\" monotonic attention at inference time? ", "IIUC the \"soft\" algorithm doesn't actually force strict monotonicity. ", "You wrote \"monotonic attention results in the model frequently mumbling words\", ", "can you provide evidence/examples of this? ", "Why do you think this happens? ", "The monotonic attention approach seems more principled than post-hoc limiting softmax attention to be monotonic, ", "why do you think it didn't work as well?", "- I can't find an actual reference to what you mean by a \"wavenet vocoder\". ", "The original wavenet paper describes an autoregressive model for waveform generation. ", "In order to use it as a vocoder, you'd have to do conditioning in some way. ", "How? ", "What was the structure of the wavenet you used? ", "Why? ", "These details appear to be missing. 
", "All you write is the sentence (which seems to end without a period) ", "\"In the WaveNet vocoder, we use mel-scale spectrograms from the decoder to condition a Wavenet, which was trained separated\".", "- Can you provide examples of the mispronunciations etc. which were measured for Table 1? ", "Was the evaluation of each attention mechanism done blindly?", "- The 2.07 MOS figure produced for tacotron seems extremely low, ", "and seems to indicate that something went wrong or that insufficient care was taken to report this baseline. ", "How did you adapt tacotron (which as I understand is a single-speaker model) to the multi-speaker setting?", "- Table 3 begs the question of whether Deep Voice 3 can outperform Deep Voice 2 when using a wavenet vocoder on VCTK (or improve upon the poor 2.09 MOS score reported). ", "Why wasn't this experiment run?", "- The paragraph and appendix about deploying at scale is interesting and impressive, ", "but seems a bit out of place ", "- it probably makes more sense to include this information in a separate \"systems\" paper."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "non-arg", "request", "evaluation", "evaluation", "request", "evaluation", "request", "request", "quote", "evaluation", "request", "fact", "request", "request", "request", "fact", "quote", "request", "request", "evaluation", "request", "evaluation", "fact", "fact", "request", "request", "request", "fact", "fact", "quote", "request", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "request"]} {"doc_id": "rk4VD6Ymf", "text": ["This work proposes DIVE (distributional inclusion vector embeddings), an unsupervised method for hypernymy discovery that preserves the inclusion property of old fashioned sparse BOW representations (i.e., that hypernyms tend to have higher counts in more varies contexts than hyponyms). ", "The learned representations are evaluated on a large set of datasets and are shown to work well. ", "The work is very thorough, and almost reads as a systematic study in places. ", "Unfortunately, this makes it hard to follow sometimes, ", "and although the work is generally interesting, ", "I am left to wonder about its novelty and general impact on the field.", "The main proposition essentially comes down to a slight tweak in word2vec's SGNS method: ", "instead of shifting the PMI value of word co-occurrences by 1/k, we shift by #w/(k*|D|/|V|) and add a non-negativity constraint, i.e., we explicitly force Eq. 2 to be true in perfect reconstructions. ", "This makes sense and I am surprised nobody has done it before. ", "It feels like a minor tweak, ", "however, and any major improvement seems to be largely dependent on the scoring function, rather than on this slightly different objective. ", "Consequently, I feel like the paper is in two minds about what it wants to be: a systematic study of hypernymy detection methods, or an introduction of a novel algorithm for learning distributional inclusion embeddings.", "I like the main ideas of this work and I think it's great that the study is so thorough, ", "but I don't think it should be accepted in its current form. 
", "Main concerns:- Presentation: the experiments show that (1) DIVE outperforms GE, HyperScore and H-Feature baselines; ", "(2) which scoring function works best for DIVE; ", "(3) DIVE outperforms SBOW in many cases, but not always, though better on average; ", "and (4) we can use DIVE for WSD (only shown qualitatively). ", "The paper spans 13 pages, excluding appendices, ", "which is rather long. ", "I feel that (2), (3) and (4) are part of a systematic study/review paper, ", "while (1) could be interesting in and of itself, if it included a full comparison against alternative methods. ", "The way results are presented in the tables is confusing, ", "and it's unclear why only one baseline is included in each case.", "- Comparison: it appears that there are methods missing from the results tables. ", "On HyperLex, for example, results as high as 0.512 (Poincare embeddings), 0.540 (HyperVec) and 0.686 (LEAR) have been reported, ", "while the highest result in the paper is 34.5. ", "That's a big difference. ", "On Weeds' BLESS, it's 68.6 versus e.g. 0.75 for Kiela et al. (unsupervised, using images) and 0.850 for HyperVec. ", "If these results were omitted due to space, I think experiment (2) and (4) can safely be moved to the appendix. ", "Especially for something that hinges on being a review paper, such as this work, it is important to be complete.", "In short, I think the presentation doesn't help; ", "I am left to wonder what the main contribution is of the work; ", "and I think the comparison to previous work is incomplete.", "Questions:- Why use WaCkypedia? ", "It's old and, by now, small. ", "It would be interesting to try all scoring functions and types of model (PPMI, PPMI with shift, PPMI with inclusion shift) trained on the exact same corpus and same negatives, and showing that it works best there.", "- General question: Is Vendrov's test set only yes/no, e.g. if I set the threshold really low, I get 100% accuracy, or does it contain negatives? ", "Same for BLESS. ", "If so how valuable is this evaluation?", "- To what extent are the baselines tuned in the semi-supervised case? ", "If the numbers are from papers, that should be mentioned. ", "It would be better to use the same corpus, and the same amount of attention to tuning the results, for both cases.", "Minor/Typos:- Medicl", "- \"From the recent review (Santus et al) ... suggested by the review study (Vulic et al)\" this way of citing reviews isn't very pretty"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "request", "non-arg", "non-arg", "evaluation", "request", "request", "request", "request", "evaluation"]} {"doc_id": "SkNXYkbkz", "text": ["This paper introduces an algorithm for learning Wasserstein GANs for discrete distributions. ", "Taking the dual form of the discrete Wasserstein distance introduced by [Evans 1997] (which produces a constrained optimization problem) and using this as a basis of a GAN training algorithm. 
", "The key algorithmic distinction from conventional GAN approaches is that the critic takes pairs of real and simulated datapoints as inputs and returns a measure of which it thinks is the real datapoint, namely f(x_r)\u2013f(g(z)) where x_r is the data point and g(z) the generator output, rather than the critic corresponding to f directly. ", "An architecture is proposed for this framework that guarantees that the constraints from the formulation of [Evans 1997] as satisfied.", "The topic of this paper is timely and of clear interest to the ICLR community. ", "The underlying idea is interesting ", "and the architecture seems appropriate for the chosen target. ", "Further, the paper is relatively clear and easy to follow. ", "However, the experimental evaluation is not sufficiently strenuous and produces very underwhelming results. ", "Relatedly, I think the motivation behind using the discrete Wasserstein distance, though seemingly reasonable, needs more careful consideration. ", "Unfortunately, the serious shortfalls in the experiments mean that the paper, in my opinion, falls noticeably below the acceptance threshold for ICLR. ", "Having said that, my stance might change substantially if more impressive experimental results can be obtained; ", "without this, I am unconvinced the method actually behaves as intended. ", "Regretfully, this is probably beyond the scope of a revision during the rebuttal period ", "though as it will most likely require significant algorithmic changes.", "%%%% Shortcomings with Experiments %%%%% Put simply, I do not think that the experiments demonstrate that the approach works and actually suggest the opposite. ", "The so-called \u201cmode collapse\u201d issue is effectively a sugar-coated way of saying that the method has learned to return a single output rather than learning a generative model. ", "This is a huge issue and needs sorting before the paper can be seriously considered for publication. ", "The attempted at a fix at the end of the paper might be a step in the right direction but is not evaluated sufficiently ", "and the preliminary results provided are not particularly promising.", "Going past this issue, there are still a lot of problems with the experimental evaluation. ", "More numerous and more difficult problems need to be considered. ", "For example, an NLP problem would fit well with the motivation for the approach given in the intro. ", "It is also necessary to demonstrate that the GAN is doing something different to just memorizing previous examples. ", "For example, this is a problem where one can actually reasonably compare to just sampling from the data by checking that more unseen correct samples are generated then incorrect samples. ", "It is worrying that the convergence plots show a single line that ostensibly comes from the \u201cbest result\u201d. ", "This is not a reasonable scientific comparison method ", "and should be replaced by mean (or median) performance with uncertainty estimates (i.e. +/- some numbers of standard deviations or quantiles). ", "The small number of samples from the MNIST problem really doesn\u2019t convey anything meaningful ", "\u2013 even if this looked exceptional, a small number of unqualified samples is hardly a serious evaluation metric.", "More iterations for Figure 4a or needed to see if the WGAN keeps improving beyond on the DWGAN, noting that this has not converged at 100%. 
", "All four methods should be added to Figure 5.", "%%%% Is the Discrete Metric Always Better for Training the Generative Model? %%%%%", "Though I think it is probably true that the discrete metric should more beneficial from the point of the view of the critic, ", "I am not completely convinced it is always beneficial from the point of view of training the generator, which is at the end of the day what really matters. ", "Note that I am not trying to argue that discrete metric is worse, ", "just that you haven\u2019t done enough to convince me that it is always better. ", "If indeed it is always better, please tear apart my following argument, ", "which will hopefully provide stimulation for improving the motivation of the metric in the paper. ", "If it is not always better, the paper should be updated to outline some of the potential pathologies and the cases were you expect the approach to work well and when you do not.", "To demonstrate my argument, consider the training sample [0, 0, 1; 0, 1, 0] and the following four example generated outputs (1) [0.333, 0.333, 0.334; 0.333, 0.334, 0.333] (2) [0, 0, 1; 0, 1, 0] (3) [0, 0, 1; 0.334, 0.333, 0.333] (4) [0.333, 0.334, 0.333; 0.334, 0.333, 0.333] giving respective discrete distances to of 0, 0, 1, and 2; and continuous distances of 1.330668, 0, 0.667334, and 1.334668. ", "Now imagine we are training a GAN with the one example datapoint [0, 0, 1; 0, 1, 0]. ", "Even though (1) will lead to the target sample [0, 0, 1; 0, 1, 0] after passing through the argmax function and (3) will not, ", "(1) is also very close to (4) which has the maximum possible discrete distance. ", "Consequently, the generator for (1) is most likely not at a stable optimum and very close in the space of neural net weights to some very poor generators. ", "During training, we would thus like to guide our network towards generating (2), ", "as this is a stable solution, particularly given we are using stochastic optimization methods. ", "From this perspective, then (3) is perhaps a better generated output than (1), a fact conveyed by the continuous metric but not the discrete metric. ", "In other words, even though (1) is arguably a better final solution than (3), from the perspective of effective training, it may be favorable to use a target function that prefers (3) to (1) to better guide the training to a stable solution.", "Once we consider the fact that our aim is not to replicate a single datapoint but learn a generator that in some way interpolates between datapoints, ", "it becomes even less clear if the discrete metric is better. ", "For example, small changes to the input z for (1) are likely to lead to generating samples that are very different to anything seen in the training data (which is, in this case, a single point).", "Though I do not think this argument undermines the suggested approach, ", "I do think it highlights why the supplied motivation for the approach is insufficient and needs more explicitly linking back to the training procedure. ", "At the very least, I think the above argument shows why it is not immediately clear cut that the suggested approach will perform better ", "and so needs backing up with strong empirical evidence, ", "which unfortunately the paper does not currently have, and/or a more convincing argument for why the discrete metric is better.", "%%%% Other points %%%% - Tables 1, 2, and 3 are not worth spending more than a few lines on, let alone nearly a page. 
", "They should be substantially compressed or preferably just cut (particularly Tables 2 and 3). ", "The point they convey is obvious and actually somewhat tangential to the key questions.", "- It worries me that the method learns an h that explicitly takes y as input, not x. ", "This is a notational issue in the exposition, but also, more importantly, raises questions about whether the original linear programming problem is actually being solved ", "because the inputs from the generative method are not discrete variables.", "- It should be made clear that the Appendices are directly following the derivation of Evans 1997, rather than being a new derivation.", "- At times the paper is little sloppy at making distinguishing between y \u2208 [0, 1] and y \u2208 {0, 1}. ", "This makes it a bit confusing at times what is output from the generator, particularly when you talk about it being one-hot encoded. ", "As I understand it, the output is always the former, ", "while data lives in the latter ", "but this is not always clear. ", "I think you should also make a bigger deal of this in terms of the problem with previous methods that ignore that the critic might be able to distinguish based on the fact that the training and generated data points have an explicitly different type (discrete and continuous respectively).", "- On the fourth line of the abstract, both instances of GAN should be GANs.", "- What do you mean \u201cexplained in the sequel\u201d?", "- Does k need to be the same for each variable?", "%%%% References %%%% Lawrence C Evans. Partial differential equations and Monge-Kantorovich mass transfer. Current developments in mathematics, 1997(1):65\u2013126, 1997."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "request", "request", "request", "reference"]} {"doc_id": "ryCv5QFgz", "text": ["Motivated via Talor approximation of the Residual network on a local minima, this paper proposed a warp operator that can replace a block of a consecutive number of residual layers. ", "While having the same number of parameters as the original residual network, the new operator has the property that the computation can be parallelized. ", "As demonstrated in the paper, this improves the training time with multi-GPU parallelization, while maintaining similar performance on CIFAR-10 and CIFAR-100.", "One thing that is currently not very clear to me is about the rotational symmetry. ", "The paper mentioned rotated filters, ", "but continue to talk about the rotation in the sense of an orthogonal matrix applying to the weight matrix of a convolution layer. 
", "The rotation of the filters (as 2D images or images with depth) seem to be quite different from \"rotating\" a general N-dim vectors in an abstract Euclidean space. ", "It would be helpful to make the description here more explicit and clear."], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "evaluation", "request"]} {"doc_id": "H1aEoGAgG", "text": ["The authors provide a novel, interesting, and simple algorithm capable of training with limited memory. ", "The algorithm is well-motivated and clearly explained, ", "and empirical evidence suggests that the algorithm works well. ", "However, the paper needs additional examination in how the algorithm can deal with larger data inputs and outputs. ", "Second, the relationship to existing work needs to be explained better.", "Pro: The algorithm is clearly explained, well-motivated, and empirically supported.", "Con: The relationship to stochastic gradient markov chain monte carlo needs to be explained better. ", "In particular, the update form was first introduced in [1], ", "the annealing scheme was analyzed in [2], ", "and the reflection step was introduced in [3]. ", "These relationships need to be explained clearly.", "The evidence is presented on very small input data. ", "With something like natural images, the parameterization is much larger and with more data, the number of total parameters is much larger. ", "Is there any evidence that the proposed algorithm could continue performing comparatively as the total number of parameters in state-of-the-art networks increases? ", "This would require a smaller ratio of included parameters.", "[1] Welling, M. and Teh, Y.W., 2011. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11)(pp. 681-688).", "[2] Chen, C., Carlson, D., Gan, Z., Li, C. and Carin, L., 2016, May. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In Artificial Intelligence and Statistics(pp. 1051-1060).", "[3] Patterson, S. and Teh, Y.W., 2013. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems (pp. 3102-3110)."], "labels": ["evaluation", "evaluation", "evaluation", "request", "request", "evaluation", "request", "fact", "fact", "fact", "request", "evaluation", "evaluation", "request", "evaluation", "reference", "reference", "reference"]} {"doc_id": "Byk2NS9xf", "text": ["This paper proposes a method to generate adversarial examples for text classification problems. ", "They do this by iteratively replacing words in a sentence with words that are close in its embedding space and which cause a change in the predicted class of the text. ", "To preserve correct grammar, they only change words that don't significantly change the probability of the sentence under a language model.", "The approach seems incremental and very similar to existing work such as Papernot et. al. ", "The paper also states in the discussion in section 5.1 that they generate adversarial examples in state-of-the-art models, ", "however, they ignore some state of the art models entirely such as Miyato et. al.", "The experiments are solely missing comparisons to existing text adversarial generation approaches such as Papernot et. al and a comparison to adversarial training for text classification in Miyato et. al which might already mitigate this attack. 
", "The experimental section also fails to describe what kind of language model is used, ", "(what kind of trigram LM is used? ", "A traditional (non-neural) LM? ", "Does it use backoff?).", "Finally, algorithm 1 does not seem to enforce the semantic constraints in Eq. 4 despite it being mentioned in the text. ", "This can be seen in section 4.5 where the algorithm is described as choosing words that were far in word vector space. ", "The last sentence in section 6 is also unfounded.", "Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z.Berkay Celik, and Ananthram Swami Practical Black-Box Attacks against Machine Learning. Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security", "Takeru Miyato, Andrew M. Dai and Ian Goodfellow Adversarial Training Methods for Semi-Supervised Text Classification. International Conference on Learning Representation (ICLR), 2017"], "labels": ["fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact", "request", "request", "request", "fact", "fact", "fact", "reference", "reference"]} {"doc_id": "Byyu-H4-f", "text": ["The authors present an end to end training of a CNN architecture that combines CT image signal processing and image analysis. ", "This is an interesting paper. ", "Time will tell whether a disease specific signal processing will be the future of medical image analysis, ", "but - to the best of my knowledge - this is one of the first attempts to do this in CT image analysis, ", "a field that is of significance both to researchers dealing with image reconstruction (denoising, etc.) and image analysis (lesion detection). ", "As such I would be positive about the topic of the paper and the overall innovation it promises both in image acquisition and image processing, ", "although I would share the technical concerns pointed out by Reviewer2, ", "and the authors would need good answers to them before this study would be ready to be presented."], "labels": ["fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation"]} {"doc_id": "BkWlPOFlM", "text": ["The authors present confidence-based autodidactic returns, a Deep learning RL method to adjust the weights of an eligibility vector in TD(lambda)-like value estimation to favour more stable estimates of the state. ", "The key to being able to learn these confidence values is to not allow the error of the confidence estimates propagate back though the deep learning architecture.", "However, the method by which these confidence estimates are refined could be better described. ", "The authors describe these confidences variously as: \"some notion of confidence that the agent has in the value function estimate\" and \"weighing the returns based on a notion of confidence has been explored earlier (White & White, 2016; Thomas et al., 2015)\". ", "But the exact method is difficult to piece together from what is written. ", "I believe that the confidence estimates are considered to be part of the critic ", "and the w vector to be part of the theta_c parameters. ", "This would then be captured by the critic gradient for the CAR method that appears towards the end of page 5. ", "If so, this should be stated explicitly.", "There is another theoretical point that could be clearer. ", "The variation in an autodidactic update of a value function (Equation (4)) depends on a few things, ", "the in variation future value function estimates themselves being just one factor. 
", "Another two sources of variation are: the uncertainty over how likely each path is to be taken, and the uncertainty in immediate rewards accumulated as part of some n-step return. ", "In my opinion, the quality of the paper would be much improved by a brief discussion of this, ", "and some reflection on what aspects of these variation contribute to the confidence vectors and what isn't captured.", "Nonetheless, I believe that the paper represents an interesting and worthy submission to the conference. ", "I would strongly urge the authors to improve the method description in the camera read version though. ", "A few additional comments are as follows: \u2022 The plot in Figure 3 is the leading collection of results to demonstrate the dominance of the authors' adaptive weight approach (CAR) over the A3C (TD(0) estimates) and LRA3C (truncated TD(lambda) estimates) approaches. ", "However, the way the results are presented/plotted, namely the linear plot of the (shifted) relative performance of CAR (and LRA3C) versus A3C, visually inflates the importance of tasks on which CAR (and LRA3C) perform better than A3C, and diminishes the importance of those tasks on which A3C performs better. ", "It would be better kept as a relative value and plotted on a log-scale so that positive and negative improvements can be viewed on an equal setting.", " \u2022 On page 3, when Gt is first mentioned, Gt should really be described first, before the reader is told what it is often replaced with.", " \u2022 On page 3, where delta_t is defined (the j step return TD error, I think the middle term should be $gamma^j V(S_{t+j})$", " \u2022 On page 4 and 5, when describing the gradient for the actor and critic, it would be better if these were given their own terminology, but if not, then use of the word respectively in each case would help."], "labels": ["fact", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "fact", "fact", "fact", "request", "request", "evaluation", "request", "evaluation", "evaluation", "request", "request", "request", "request"]} {"doc_id": "S1MHyoFgf", "text": ["Thank you for your contribution to ICLR. ", "The paper covers a very interesting topic and presents some though-provoking ideas. ", "The paper introduces \"covariant compositional networks\" with the purpose of learning graph representations. ", "An example application also covered in the experimental section is graph classification. ", "Given a finite set S, a compositional network is simply a partially ordered set P where each element of P is a subset of S and where P contains all sets of cardinality 1 and the set S itself. ", "Unfortunately, the presentation of the approach is extremely verbose and introduces old concepts (e.g., partially ordered set) under new names. ", "The basic idea (which is not new) of this work is that we need to impose some sort of hierarchical order on the nodes of the graph so as to learn hierarchical feature representations. ", "Moreover, the hierarchical order of the nodes should be invariant to valid permutations of the nodes, that is, two isomorphic graphs should have the same hierarchical order on their nodes and the same feature representations. 
", "Since this is the case for graph embedding methods that collect feature representations from their neighbors in the graph (and where the feature aggregation functions are symmetric) ", "it makes sense that \"compositional networks\" generalize graph convolutional networks (and the more general message passing neural networks framework). ", "The most challenging problem, however, namely the problem of finding a concrete and suitable permutation invariant hierarchical decomposition of the nodes plus some aggregation/pooling functions to compute the feature representations is not addressed in sufficient detail. ", "The paper spends a lot of time on some theoretical definitions and (trivial) proofs ", "but then fails to make the connection to an approach that works in practice. ", "The description of the experiments and which compositional network is chosen and how it is chosen seems to be missing. ", "The only part hinting at the model that was actually used in the experiments is the second paragraph of the section 'Experimental Setup', consisting of one long sentence that is incomprehensible to me. ", "Instead of spending a lot of effort on the definitions and (somewhat trivial) propositions in the first half of the paper, the authors should spend much more time on detailing the experiments and the actual model that they used. ", "In an effort to make the framework as general as possible, you ended up making the paper highly verbose and difficult to follow. ", "Please address the following points or clarify in your rebuttal if I misunderstood something:- what precisely is the novel contribution of your work ", "(it cannot be \"compositional networks\" and the propositions concerning those ", "because these are just old concepts under new names)?", "- explain precisely (and/or more directly/less convoluted) how your model used in the experiments looks like; ", "why do you think it is better than the other methods?", "- given that compositional network is a very general concept (partially ordered set imposed on subsets of the graph vertices), ", "what is the principled set of steps one has to follow to arrive at such a compositional network tailored to a particular graph collection? ", "isn't (or shouldn't) that be the contribution of this work? ", "Am I missing something?", "In general, you should write the paper much more to the point and leave out unnecessary math (or move to an appendix). ", "The paper is currently highly inaccessible."], "labels": ["non-arg", "evaluation", "fact", "fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "request", "request", "fact", "request", "fact", "non-arg", "request", "evaluation"]} {"doc_id": "SJppHuogG", "text": ["re. Introduction, page 2: Briefly explain here how SAB is different from regular Attention?", "Good paper. ", "There's not that much discussion of the proposed SAB compared to regular Attention, ", "perhaps that could be expanded. ", "Also, I suggest summarizing the experimental findings in the Conclusion."], "labels": ["request", "evaluation", "evaluation", "request", "request"]} {"doc_id": "BJBU32dgz", "text": ["The paper \u2018Deep learning for Physical Process: incorporating prior physical knowledge\u2019 proposes to question the use of data-intensive strategies such as deep learning in solving physical inverse problems that are traditionally solved through assimilation strategies. 
", "They notably show how physical priors on a given phenomenon can be incorporated in the learning process and propose an application on the problem of estimating sea surface temperature directly from a given collection of satellite images.", "All in all the paper is very clear and interesting. ", "The results obtained on the considered problem are clearly of great interest, especially when compared to state-of-the-art assimilation strategies such as the one of B\u00e9r\u00e9ziat. ", "While the learning architecture is not original in itself, ", "it is shown that a proper physical regularization greatly improves the performance. ", "For these reasons I believe the paper has sufficient merits to be published at ICLR. ", "That being said, I believe that some discussions could strengthen the paper:", " - Most classical variational assimilation schemes are stochastic in nature, notably by incorporating uncertainties in the observation or physical evolution models. ", "It is still unclear how those uncertainties can be integrated in the model;", " - Assimilation methods are usually independent of the type of data at hand. ", "It is not clear how the model learnt on one particular type of data transpose to other data sequences. ", "Notably, the question of transfer and generalization is of high relevance here. ", "Does the learnt model performs well on other dataset (for instance, acquired on a different region or at a distant time). ", "I believe this type of issue has to be examinated for this type of approach to be widely use in inverse physical problems."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "request"]} {"doc_id": "HyZBE0IlM", "text": ["This paper studies how the variance of the discriminator affect the gradient signal provided to the generator and therefore how it might limit its ability to learn the true data distribution.", "The approach suggested in this paper models the output of the discriminator using a mixture of two Gaussians (one for \u201cfake\u201d and the other for \u201cnot fake\u201d). ", "This seems like a rather crude approximation as the distribution of each \u201cclass\u201d is likely to be multimodal. ", "Can the authors comment on this? ", "Could they extend their approach to use a mixture of multimodal distributions?", "The paper mentions that fixing the means of the distribution can be \u201cproblematic during optimization as the discriminator\u2019s goal is to maximize the difference between these two means.\u201c. ", "This relates to my previous comment where the distribution might not be unimodal. ", "In this case, shifting the mean doesn\u2019t seem to be a good solution and might just yield to oscillations between different modes. ", "Can you please comment on this?", "Mode collapse: Can you comment on the behavior of your approach w.r.t. to mode collapse?", "Implementation details: How is the mean of the two Gaussians initialized? ", "Relation to instance noise and regularization techniques: Instance noise is a common trick being used to train GANs, ", "see e.g. http://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/", "This also relates to some regularization techniques, e.g. Roth et al., 2017 that provides a regularizer that amounts to convolving the densities with white Gaussian noise. 
", "Can you please elaborate on the potential advantages of the proposed solution over these existing techniques?", "Comparison to existing baselines: Given that the paper addresses the stability problem, I would expect some empirical comparison to at least one or two of the stability methods cited in the introduction, e.g. Gulrajani et al., 2017 or Roth et al., 2017.", "Relation to Kernel MMD: Can the authors elaborate on how their method relates to approaches that replace the discriminator with MMD nets. e.g.", "- Training generative neural networks via Maximum Mean Discrepancy optimization, Dziugaite et al", "- Generative models and model criticism via optimized maximum mean discrepancy, Sutherland et al", "More explicitly, the variance in these methods can be controlled via the bandwidth of the kernel ", "and I therefore wonder what would one use a simple mixture of Gaussians instead?"], "labels": ["fact", "fact", "evaluation", "non-arg", "request", "fact", "fact", "evaluation", "non-arg", "request", "request", "fact", "reference", "evaluation", "request", "request", "request", "reference", "reference", "fact", "evaluation"]} {"doc_id": "r1IUXROxz", "text": ["The claimed results of \"combining transformations\" in the context of RC was done in the works of Herbert Jaeger on conceptors [1], ", "which also should be cited here.", "The argument of biological plausibility is not justified. ", "The authors use an echo-state neural network with standard tanh activations, which is as far away from real neuronal signal processing than ordinary RNNs used in the field, with the difference that the recurrent weights are not trained. ", "If the authors want to make the case of biological plausibility, they should use spiking neural networks.", "The experiment on MNIST seems artificial, in particular transforming the image into a time-series and thereby imposing an artificial temporal structure. ", "The assumption that column_i is obtained by information of column_{i-k},..,column_{i-1} is not true for images. ", "To make a point, the authors should use a datasets with related sets of time-series data, e.g EEG or NLP data.", "In total this paper does not have enough novelty for acceptance ", "and the experiments are not well chosen for this kind of work. ", "Also the authors overstate the claim of biological plausibility ", "(just because we don't train the recurrent weights does not make a method biologically plausible).", "[1] H. Jaeger (2014): Controlling Recurrent Neural Networks by Conceptors. Jacobs University technical report Nr 31 (195 pages)"], "labels": ["fact", "request", "evaluation", "fact", "request", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "reference"]} {"doc_id": "ByHD_eqxf", "text": ["Let me first note that I am not very familiar with the literature on program generation, molecule design or compiler theory, ", "which this paper draws heavily from, ", "so my review is an educated guess. ", "This paper proposes to include additional constraints into a VAE which generates discrete sequences, namely constraints enforcing both semantic and syntactic validity. ", "This is an extension to the Grammar VAE of Kusner et. al, which includes syntactic constraints but not semantic ones.", "These semantic constraints are formalized in the form of an attribute grammar, which is provided in addition to the context-free grammar.", "The authors evaluate their methods on two tasks, program generation and molecule generation. 
", "Their method makes use of additional prior knowledge of semantics, ", "which seems task-specific and limits the generality of their model. ", "They report that their method outperforms the Character VAE (CVAE) and Grammar VAE (GVAE) of Kusner et. al. ", "However, it isn't clear whether the comparison is appropriate: ", "the authors report in the appendix that they use the kekulised version of the Zinc dataset of Kusner et. al, ", "whereas Kusner et. al do not make any mention of this. ", "The baselines they compare against for CVAE and GVAE in Table 1 are taken directly from Kusner et. al though. ", "Can the authors clarify whether the different methods they compare in Table 1 are all run on the same dataset format?", "Typos: - Page 5: \"while in sampling procedure\" -> \"while in the sampling procedure\"", "- Page 6: \"a deep convolution neural networks\" -> \"a deep convolutional neural network\"", "- Page 6: \"KL-divergence that proposed in\" -> \"KL-divergence that was proposed in\" ", "- Page 6: \"since in training time\" -> \"since at training time\"", "- Page 6: \"can effectively computed\" -> \"can effectively be computed\"", "- Page 7: \"reset for training\" -> \"rest for training\""], "labels": ["non-arg", "evaluation", "non-arg", "fact", "fact", "fact", "fact", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "fact", "request", "request", "request", "request", "request", "request", "request"]} {"doc_id": "Hk1CjNZyM", "text": ["This paper investigates the effect of replacing identity skip connections with trainable convolutional skip connections in ResNet. ", "The authors find that in their experiments, performance improves. ", "Therefore, the power of skip connections is due to their linearity rather than due to the fact that they represent the identity.", "Overall, the paper has a clear and simple message and is very readable. ", "The paper contains a good amount of experiments, ", "but in my opinion not quite enough to conclude that identity skip connections are inherently worse. ", "The question is then: how non-trivial is it that tandem networks work? ", "For someone who understands and has worked with ResNet and similar architectures, this is not a surprise. ", "Therefore, the paper is somewhat marginal but, I think, still worth accepting.", "Why did you choose a single learning rate for all architectures and datasets instead of choosing the optimal one for each archtitecture and dataset? ", "Was it a question of computational resources? ", "Using custom step sizes would strenghten your experimental results significantly. ", "In the absence of this, I would still ask that you create an appendix where you specify exactly how hyperparameters were chosen.", "Other comments:- \"and that it\u2019s easier for a layer to learn from a starting point of keeping things the same (the identity map) than from the zero map\" I don't understand this comment. ", "Networks without skip connections are not initialized to the zero map but have nonzero, usually Gaussian, weights.", "- in section 2, reason (ii), you seem to imply that it is a good thing if a network behaves as an ensemble of shallower networks. ", "In general, this is a bad thing. ", "Therefore, the fact that ResNet with tandom networks is an ensemble of shallower networks is a reason for why it might perform badly, not well. ", "I would suggest removing reason (ii).", "- in section 3, reason (iii), you state that removing nonlinearities from the skip path can improve performance. 
", "However, using tandom blocks instead of identity skip connections does not change the number of nonlinearity layers. ", "Therefore, I do not see how reason (iii) applies to tandem networks.", "- \"The best blocks in each challenge were competitive with the best published results for their numbers of parameters; see Table 2 for the breakdown.\" ", "What are the best published results? ", "I do not see them in table 2."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "non-arg", "request", "request", "evaluation", "fact", "fact", "evaluation", "fact", "request", "fact", "fact", "evaluation", "quote", "request", "fact"]} {"doc_id": "rJ54aZ9gG", "text": ["This paper presents a very thorough empirical exploration of the qualities and limitations of very simple word-embedding based models. ", "Average and/or max pooling over word embeddings (which are initialized from pretrained embeddings) is used to obtain a fixed-length representation for natural language sequences, which is then fed through a single layer MLP classifier. ", "In many of the 9 evaluation tasks, this approach is found to match or outperform single-layer CNNs or RNNs.", "The varied findings are very clearly presented and helpfully summarized, ", "and for each task setting the authors perform an insightful analysis.", "My only criticism would be the fact that the study is limited to English, ", "even though the conclusions are explicitly scoped in light of this. ", "Moreover, I wonder how well the findings would hold in a setting with a more severe OOV problem than is perhaps present in the studied datasets.", "Besides concluding from the presented results that these SWEMs should be considered a strong baseline in future work, ", "one might also conclude that we need more challenging datasets!", "Minor things:- It wasn't entirely clear how the text matching tasks are encoded. ", "Are the two sequences combined into a single sequence before applying the model, or something else? ", "I might have missed this detail.", "- Given the two ways of using the Glove embeddings for initialization (direct update vs mapping them with an MLP into the task space), it would be helpful to know which one ended up being used (i.e. optimal) in each setting.", "- Something went wrong with the font size for the remainder of the text near Figure 1."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "fact"]} {"doc_id": "SkugmHtgf", "text": ["This paper proposes an optimization problem whose Lagrangian duals contain many existing objective functions for generative models. ", "Using this framework, the paper tries to generalize the optimization problems by defining computationally-tractable family which can be expressed in terms of existing objective functions. ", "The paper has interesting elements ", "and the results are original. ", "The main issue is that the significance is unclear. ", "The writing in Section 3 is unclear for me, ", "which further made it challenging to understand the consequences of the theorems presented in that section. ", "Here is a big-picture question that I would like to know answer for. ", "Do the results of sec 3 help us identify a more useful/computationally tractable model than exiting approaches? 
", "Clarification on this will help me evaluate the significance of the paper.", "I have three main clarification points. ", "First, what is the importance of T1, T2, and T3 classes defined in Def. 7, i.e., why are these classes useful in solving some problems? ", "Second, is the opposite relationship in Theorem 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI is equivalent to a likelihood-based computable-objective of KL info-encoding family? ", "Is the same true for other theorems?", "Third, the objective of section 3 is to show that \"only some choices of lambda lead to a dual with a tractable equivalent form\". ", "Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly imply the main claim of the paper.", "Some small comments:- Eq. 4. It might help to define MI to remind readers.", "- After Eq. 7, please add a proof (may be in the Appendix). ", "It is not that straightforward to see this. ", "Also, I suppose you are saying Eq. 3 but with f from Eq. 4.", "- Line after Eq. 8, D_i is \"one\" of the following... ", "Is it always the same D_i for all i or it could be different? ", "Make this more clear to avoid confusion.", "- Last line in Para after Eq. 15, \"This neutrality corresponds to the observations made in..\" ", "It might be useful to add a line explaining that particular \"observation\"", "- Def. 7, the names did not make much sense to me. ", "You can add a line explaining why this name is chosen.", "- Def. 8, the last equation is unclear. ", "Does the first equivalence impy the next one? ", "- Writing in Sec. 3.3 can be improved. ", "e.g., \"all linear operations on log prob.\" is very unclear, ", "\"stated computational constraints\" which constraints?"], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "non-arg", "request", "evaluation", "non-arg", "request", "request", "request", "fact", "request", "request", "request", "evaluation", "evaluation", "fact", "request", "request", "quote", "request", "evaluation", "request", "evaluation", "request", "request", "evaluation", "request"]} {"doc_id": "S1sHPAWgz", "text": ["Overall: I really enjoyed reading this paper and think the question is super important.", "I have some reservations about the execution of the experiments as well as some of the conclusions drawn.", "For this reason I am currently a weak reject", "(weak because I believe the question is very interesting).", "However, I believe that many of my criticisms can be assuaged during the rebuttal period.", "Paper Summary: For RL to play video games, it has to play many many many many times.", "In fact, many more times than a human", "where prior knowledge lets us learn quite fast in new (but related) environments.", "The authors study, using experiments, what aspects of human priors are the important parts.", "The authors\u2019 Main Claim appears to be: \u201cWhile common wisdom might suggest that prior knowledge about game semantics such as ladders are to be climbed, jumping on spikes is dangerous or the agent must fetch the key before reaching the door are crucial to human performance, we find that instead more general and high-level priors such as the world is composed of objects, object like entities are used as subgoals for exploration, and things that look the same, act the same are more critical.\u201d", "Overall, I find this interesting.", "However, I am not completely convinced by some of the experimental demonstrations.", 
"Issue 0: The experiments seem underpowered / not that well analyzed.", "There are only 30 participants per condition", "and so it\u2019s hard to tell whether the large differences in conditions are due to noise and what a stable ranking of conditions actually looks like.", "I would recommend that the authors triple the sample size and be more clear about reporting the outcomes in each of the conditions.", "It\u2019s not clear what the error bars in figure 1 represent,", "are they standard deviations of the mean?", "Are they standard deviations of the data?", "Are they confidence intervals for the mean effect?", "Did you collect any extra data about participants?", "One potentially helpful example is asking how familiar participants are with platformer video games.", "This would give at least some proxy to study the importance of priors about \u201chow video games are generally constructed\u201d rather than priors like \u201cobjects are special\u201d.", "Issue 1: What do you mean by \u201cobjects\u201d?", "The authors interpret the fact that performance falls so much between conditions b and c to mean that human priors about \u201cobjects are special\u201d are very important.", "However, an alternative explanation is that people explore things which look \u201cdifferent\u201d (ie. Orange when everything else is black).", "The problem here comes from an unclear definition of what the authors mean by an \u201cobject\u201d", "so in revision I would like authors to clarify what precisely they mean by a prior about \u201cthe world is composed of objects\u201d and how this particular experiment differentiates \u201cobject\u201d from a more general prior about \u201cvideo games have clearly defined goals, there are 4 clearly defined boxes here, let me try touching them.\u201d", "This is important", "because a clear definition will give us an idea for how to actually build this prior into AI systems.", "Issue 2: Are the results here really about \u201chigh level\u201d priors?", "There are two ways to interpret the authors\u2019 main claim:", "the strong version would maintain that semantic priors aren\u2019t important at all.", "There is no real evidence here for the strong version of the claim.", "A real test would be to reverse some of the expected game semantics and see if people perform just as well as in the \u201cmasked semantics\u201d condition.", "For example, suppose we had exactly the same game and N different types of objects in various places of the game where N-1 of them caused death but 1 of them opened the door (but it wasn\u2019t the object that looked like a key).", "My hypothesis would be that performance would fall drastically as semantic priors would quickly lead people in that direction.", "Thus, we could consider a weaker version of the claim: semantic priors are important but even in the absence of explicit semantic cues (note, this is different from having the wrong semantic cues as above) people can do a good job on the game.", "This is much more supported by the data,", "but still I think very particular to this situation.", "Imagine a slight twist on the game: There is a sword (with a lock on it), a key, a slime and the door (and maybe some spikes).", "The player must do things in exactly this order: first the player must get the key, then they must touch the sword, then they must kill the slime, then they go to the door.", "Here without semantic priors I would hypothesize that human performance would fall quite far (whereas with semantics people would be able to figure 
it out quite well).", "Thus, I think the authors\u2019 claim needs to be qualified quite a bit.", "It\u2019s also important to take into account how much work general priors about video game playing (games have goals, up jumps, there is basic physics) are doing here", "(the authors do this when they discuss versions of the game with different physics)."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "request", "request", "request", "request", "evaluation", "request", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact"]} {"doc_id": "H1g6cQsxM", "text": ["This paper tackles the overfitting problem when training neural networks based on regularization technique. ", "More precisely, the authors propose new regularization terms that are related to the underlying virtual geometrical transformations (shift, rotation and scale) of the input data (signal, image and video). ", "By formalizing the geometrical transformation process of a given image, the authors deduce constraints on the objective function which depend on the magnitude of the applied transformation. ", "The proposed method is compared to three methods: one baseline and two methods of the literature (AT and VAT). ", "The comparison is done on three datasets (synthetic data, MNIST and CIFAR10) in terms of test errors (for classification problems) and running time.", "The paper is well formalized ", "and the idea is interesting. ", "The regularization approach is novel compared to the methods of the literature. ", "Main concerns: 1)\tThe experimental validation of the proposed approach is not consistent:", "The description of the baseline method is not detailed in the paper. ", "A priori, the baseline should naturally be the method without your regularization terms.", "But, this seems to be contrary with what you displayed in Figure 3. ", "Indeed, in Figure 3, there is three different graphs for the baseline method (i.e., one for each regularization term). ", "It seems that the baseline method depends on the different kinds of regularization term, ", "why? ", "Same question for AT and VAT methods. ", "In practice, what is the magnitude of the perturbations? ", "Please, explain the axis of all the figures. ", "Please, explain how do you mix your different regularization terms in your method that you call VMT-all? ", "All the following points are related to the experiment for which you presented the results in Table 2: ", "Please, provide the results of all your methods on the synthetic dataset (only VMT-shift is provided). ", "What is VMF? ", "Do you mean VMT? ", "For the evaluations, it would be more rigorous to re-implement also the state-of-the-art methods for which you only give the results that they report in their paper. ", "Especially, because you re-implemented AT with L-2 constraint, ", "so, it seems straightforward to re-implement also AT with L-infinite constraint. ", "Same remark for the dropout regularization technique, ", "which is easy to re-implement on the dense layers of your neural networks, within the Tensorflow framework. 
", "As you mentioned, your main contribution is related to running time, ", "thus, you should give the running time in all experiments. ", "2)\tThe method seems to be a tradeoff between accuracy and running time:", "The VAT method performs better than all your methods in all the datasets. ", "The baseline method is faster than all the methods (Table 3). ", "This being said, the proposed method should be clearly presented in the paper as a tradeoff between accuracy and running time. ", "3)\tThe positioning of the proposed approach is not so clear: ", "As mentioned above, your method is a tradeoff between accuracy and running time. ", "But you also mentioned (top of page 2) that the contribution of your paper is also related to the interpretability in terms of \u2018\u2019Human perception\u2019\u2019. ", "Indeed, you clearly mentioned that the methods of the literature lacks interpretability. ", "You also mentioned that your method is more \u2018\u2019geometrically\u2019\u2019 interpretable than methods of the literature. ", "The link between interpretability in terms of \u201chuman perception\u201d and \u201cgeometry\u201d is not obvious. ", "Anyway, the interpretability point is not sufficiently demonstrated, or at least, discussed in the paper. ", "4)\tMany typos in the paper : ", "Section 1: \u201cfarward-backward\u201d", "Section 2.1: \u201cwe define the movement field V of as a n+1\u2026\u201d", "Section 2.2: \u201clable\u201d - \u201cthe another\u201d - \u201cof how it are generated\u201d ", "\u2013 Sentence \u201cSince V is normalized.\u201d seems incomplete\u2026 ", "- \\mathcal{L} not defined ", "- Please, precise the simplifications like \\mathcal{L}_{\\theta} to \\mathcal{L} ", "Section 3: \u201cDISCUSSTION\u201d", "Section 4.1: \u201cnegtive\u201d", "Figure 2: \u201cnegetive\u201d", "Table 2: \u201cVMF\u201d", "Section 4.2: \u201cTab 2.3\u201d does not exist ", "Section 4.3: \u201cconsists 9 convolutional\u201d \u2013 \u201cnerual networks\u201d\u2026", "Please, always use the \\eqref latex command to refer to equations.", "There is many others typos in the paper, ", "so, please proofread the paper\u2026"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "request", "request", "request", "request", "fact", "request", "request", "request", "request", "fact", "request", "request", "evaluation", "fact", "request", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "fact", "request", "request", "evaluation", "request"]} {"doc_id": "SkpfobogG", "text": ["The authors tackle the problem of estimating risk in a survival analysis setting with competing risks.", "They propose directly optimizing the time-dependent discrimination index using a siamese survival network.", "Experiments on several real-world dataset reveal modest gains in comparison with the state of the art.", "- The authors should clearly highlight what is their main technical contribution.", "For example, Eqs. 
1-6 appear to be background material ", "since the time-dependent discrimination index is taken from the literature, as the authors point out earlier.", "However, it is unclear from the writing.", "- One of the main motivations of the authors is to propose a model that is specially designed to avoid the nonidentifiability issue in a scenario with competing risks.", "It is unclear why the authors' solution is able to solve such an issue, especially given the modest reported gains in comparison with several competitive baselines.", "In other words, the authors oversell their own work, especially in comparison with the state of the art.", "- The authors use off-the-shelf siamese networks for their setting", "and thus it is questionable whether there is any novelty there.", "The application/setting may be novel,", "but not the architecture of choice.", "- From Eq. 4 to Eq. 5, the authors argue that the denominator does not depend on the model parameters and can be ignored.", "However, afterwards the objective does combine time-dependent discrimination indices of several competing risks, with different denominator values.", "This could be problematic if the risks are unbalanced.", "- The competitive gain of the authors' method in comparison with other competing methods is minor.", "- The authors introduce F(t, D | x) as cumulative incidence function (CDF) at the beginning of section 2,", "however, afterwards they use R^m(t, x), which they define as the risk of the subject experiencing event m before t.", "Is the latter a proxy for the former?", "How are they related?"], "labels": ["fact", "fact", "evaluation", "request", "fact", "fact", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "request", "request"]} {"doc_id": "rynGrnpeM", "text": ["This paper addresses multiple issues arising from the fact that commonly reported best model performance numbers are a single sample from a performance distribution. ", "These problems are very real, ", "and they deserve significant attention from the ML community. ", "However, I feel that the proposed solution may actually compound the issues highlighted.", "Firstly, the proposed metric requires calculation of multiple test set experiments for every evaluation. ", "In the paper up to 100 experiments were used. ", "This may be reasonable in scenarios where the test set is hidden, and individual test numbers are never revealed. ", "It also may be reasonable if we cynically assume that researchers are already running many test-set evaluations. ", "But I am very opposed to any suggestion that we should relax the maxim that the test set should be used only once, or as close to once as is possible. ", "Even the idea of researchers knowing their test set variance makes me very uneasy.", "Secondly, this paper tries to account for variation in results due to different degrees of hyper-parameter tuning. ", "This is certainly an admirable aim, ", "since different research groups have access to very different types of resources. ", "However, the suggested approach relies on randomly picking hyper-parameters from \"a range that we previously found to work reasonably well\". ", "This randomization does not account for the many experiments that were required to find this range. 
", "And the randomization is also not extended to parameters controlling the model architecture ", "(I suspect that a number of experiments went into picking the 32 layers in the ResNet used by this paper). ", "Without a solid and consistent basis for these hyper-parameter perturbations, I worry that this approach will fail to normalize the effect of experiment numbers while also giving researchers an excuse to avoid reporting their experimental process.", "I think this is a nice idea ", "and the metric does merge the stability and low variance of mean score with the aspirations of best score. ", "The metric may be very useful at development time in helping researchers build a reasonable expectation of test time performance in cases where the dev and test sets are strongly correlated. ", "However, for the reasons outlined above, I don't think the proposed approach solves the problems that it addresses. ", "Ultimately, the decision about this paper is a subjective one. ", "Are we willing to increase the risk of inadvertent hyper-parameter tuning on the test set for the sake of a more stable metric?"], "labels": ["fact", "evaluation", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation"]} {"doc_id": "HkCuu2YxG", "text": ["This paper misses the point of what VAEs (or GANs, in general) are used for.", "The idea of using VAEs is not to encode and decode images (or in general any input), but to recover the generating process that created those images so we have an unlimited source of samples.", "The use of these techniques for compressing is still unclear", "and their quality today is too low.", "So the attack that the authors are proposing does not make sense", "and my take is that we should see significant changes before they can make sense.", "But let\u2019s assume that at some point they can be used as the authors propose.", "In which one person encodes an image, send the latent variable to a friend, but a foe intercepts it on the way and tampers with it so the receiver recovers the wrong image without knowing.", "Now if the sender believes the sample can be tampered with, if the sender codes z with his private key would not make the attack useless?", "I think this will make the first attack useless.", "The other two attacks require that the foe is inserted in the middle of the training of the VAE.", "This is even less doable,", "because the encoder and decoder are not train remotely.", "They are train of the same machine or cluster in a controlled manner by the person that would use the system.", "Once it is train it will give away the decoder and keep the encoder for sending information."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "fact"]} {"doc_id": "rkBmo9ryM", "text": ["In their paper \"CausalGAN: Learning Causal implicit Generative Models with adv. training\" the authors address the following issue: Given a causal structure between \"labels\" of an image (e.g. gender, mustache, smiling, etc.), one tries to learn a causal model between these variables and the image itself from observational data. ", "Here, the image is considered to be an effect of all the labels. 
", "Such a causal model allows us to not only sample from conditional observational distributions, but also from intervention distributions. ", "These tasks are clearly different, ", "as nicely shown by the authors' example of \"do(mustache = 1)\" versus \"given mustache = 1\" (a sample from the latter distribution contains only men). ", "The paper does not aim at learning causal structure from data ", "(as clearly stated by the authors). ", "The example images look convincing to me.", "I like the idea of this paper. ", "IMO, it is a very nice, clean, and useful approach of combining causality and the expressive power of neural networks. ", "The paper has the potential of conveying the message of causality into the ICLR community and thereby trigger other ideas in that area. ", "For me, it is not easy to judge the novelty of the approach, ", "but the authors list related works, ", "none of which seems to solve the same task. ", "The presentation of the paper, however, should be improved significantly before publication. ", "(In fact, because of the presentation of the paper, I was hesitating whether I should suggest acceptance.) ", "Below, I give some examples (and suggest improvements), ", "but there are many others. ", "There is a risk that in its current state the paper will not generate much impact,", "and that would be a pity. ", "I would therefore like to ask the authors to put a lot of effort into improving the presentation of the paper. ", "- I believe that I understand the authors' intention of the caption of Fig. 1, ", "but \"samples outside the dataset\" is a misleading formulation. ", "Any reasonable model does more than just reproducing the data points. ", "I find the argumentation the authors give in Figure 6 much sharper. ", "Even better: add the expression \"P(male = 1 | mustache = 1) = 1\". ", "Then, the difference is crystal clear.", "- The difference between Figures 1, 4, and 6 could be clarified. ", "- The list of \"prior work on learning causal graphs\" seems a bit random. ", "I would add Spirtes et al 2000, Heckermann et al 1999, Peters et al 2016, and Chickering et al 2002. ", "- Male -> Bald does not make much sense causally ", "(it should be Gender -> Baldness)... ", "Aha, now I understand: ", "The authors seem to switch between \"Gender\" and \"Male\" being random variables. ", "Make this consistent, please. ", "- There are many typos and comma mistakes. ", "- I would introduce the do-notation much earlier. ", "The paragraph on p. 2 is now written without do-notation ", "(\"intervening Mustache = 1 would not change the distribution\"). ", "But this way, the statements are at least very confusing ", "(which one is \"the distribution\"?).", "- I would get rid of the concept of CiGM. ", "To me, it seems that this is a causal model with a neural network (NN) modeling the functions that appear in the SCM. ", "This means, it's \"just\" using NNs as a model class. ", "Instead, one could just say that one wants to learn a causal model and the proposed procedure is called CausalGAN? ", "(This would also clarify the paper's contribution.)", "- many realizations = one sample (not samples), I think. ", "- Fig 1: which model is used to generate the conditional sample? ", "- The notation changes between E and N and Z for the noises. ", "I believe that N is supposed to be the noise in the SCM, ", "but then maybe it should not be called E at the beginning. ", "- I believe Prop 1 (as it is stated) is wrong. 
", "For a reference, see Peters, Janzing, Scholkopf: Elements of Causal Inference: Foundations and Learning Algorithms (available as pdf), Definition 6.32. ", "One requires the strict positivity of the densities (to properly define conditionals). ", "Also, I believe the Z should be a vector, not a set. ", "- Below eq. (1), I am not sure what the V in P_V refers to.", "- The concept of data probability density function seems weird to me. ", "Either it is referring to the fitted model, then it's a bad name, ", "or it's an empirical distribution, then there is no pdf, but a pmf.", "- Many subscripts are used without explanation. ", "r -> real? ", "g -> generating? ", "G -> generating? ", "Sometimes, no subscripts are used (e.g., Fig 4 or figures in Sec. 8.13)", "- I would get rid of Theorem 1 and explain it in words for the following reasons. ", "(1) What is an \"informal\" theorem? ", "(2) It refers to equations appearing much later. ", "(3) It is stated again later as Theorem 2. ", "- Also: the name P_g does not appear anywhere else in the theorem, I think. ", "- Furthermore, I would reformulate the theorem. ", "The main point is that the intervention distributions are correct ", "(this fact seems to be there, ", "but is \"hidden\" in the CIGN notation in the corollary).", "- Re. the formulation in Thm 2: is it clear that there is a unique global optimum ", "(my intuition would say there could be several), ", "thus: better write \"_a_ global minimum\"?", "- Fig. 3 was not very clear to me. ", "I suggest to put more information into its caption. ", "- In particular, why is the dataset not used for the causal controller? ", "I thought, that it should model the joint (empirical) distribution over the labels, ", "and this is part of the dataset. ", "Am I missing sth?", "- IMO, the structure of the paper can be improved. ", "Currently, Section 3 is called \"Background\" ", "which does not say much. ", "Section 4 contains CIGMs, Section 5 Causal GANs, 5.1. Causal Controller, 5.2. CausalGAN, 5.2.1. Architecture (which the causal controller is part of) etc. ", "An alternative could be: Sec 1: Introduction Sec 1.1: Related Work Sec 2: Causal Models Sec 2.1: Causal Models using Generative Models (old: CIGM) Sec 3: Causal GANs Sec 3.1: Architecture (including controller) Sec 3.2: loss functions ... Sec 4: Empricial Results (old: Sec. 6: Results)", "- \"Causal Graph 1\" is not a proper reference ", "(it's Fig 23 I guess). ", "Also, it is quite important for the paper, ", "I think it should be in the main part. ", "- There are different references to the \"Appendix\", \"Suppl. Material\", or \"Sec. 8\" ", "-- please be consistent ", "(and try to avoid ambiguity by being more specific ", "-- the appendix contains ~20 pages). ", "Have I missed the reference to the proof of Thm 2?", "- 8.1. contains copy-paste from the main text.", "- \"proposition from Goodfellow\" -> please be more precise", "- What is Fig 8 used for? ", "Is it not sufficient to have and discuss Fig 23? ", "- IMO, Section 5.3. should be rewritten (also, maybe include another reference for BEGAN).", "- There is a reference to Lemma 15. ", "However, I have not found that lemma.", "- I think it's quite interesting that the framework seems to also allow answering counterfactual questions for realizations that have been sampled from the model, see Fig 16. ", "This is the case since for the generated realizations, the noise values are known. 
", "The authors may think about including a comment on that issue.", "- Since this paper's main proposal is a methodological one, ", "I would make the publication conditional on the fact that code is released."], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "request", "evaluation", "non-arg", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "request", "evaluation", "request", "non-arg", "fact", "request", "evaluation", "request", "fact", "quote", "evaluation", "evaluation", "request", "fact", "fact", "request", "evaluation", "evaluation", "request", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "request", "fact", "request", "request", "evaluation", "fact", "fact", "request", "fact", "fact", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "request", "evaluation", "fact", "non-arg", "evaluation", "fact", "evaluation", "fact", "request", "evaluation", "evaluation", "evaluation", "request", "fact", "request", "request", "fact", "non-arg", "fact", "request", "request", "request", "request", "fact", "fact", "evaluation", "fact", "request", "evaluation", "request"]} {"doc_id": "r1gHCrFlM", "text": ["This paper studies off-policy learning in the bandit setting. ", "It develops a new learning objective where the empirical risk is regularized by the squared Chi-2 divergence between the new and old policy. ", "This objective is motivated by a bound on the empirical risk, where this divergence appears. ", "The authors propose to solve this objective by using generative adversarial networks for variational divergence minimization (f-GAN). ", "The algorithm is then evaluated on settings derived from supervised learning tasks and compared to other algorithms.", "I find the paper well written and clear. ", "I like that the proposed method is both supported by theory and empirical results. ", "Minor point: I do not really agree with the discussion on the impact of the stochasticity of the logging policy in section 5.6. ", "Based on Figure 5 a and b, it seems that the learned policy is performing equally well no matter how stochastic the logging policy is. ", "So I find it a bit misleading to suggest that the learned policy are not being improved when the logging policy is more deterministic. ", "Rather, the gap reduces between the two policies ", "because the logging policy gets better. ", "In order to better showcase this mechanism, perhaps you could try using a logging policy that does not favor the best action.", "quality and clarity: ++ code made available", "+ well written and clear", "- The proof of theorem 2 is not in the paper nor appendix ", "(the authors say it is similar to another work).", "originality + good extension of the work by Swaminathan & Joachims (2015a): derivation of an alternative objective and use of a deep networks. 
", "This paper leverages a set of diverse results", "significance - The proposed method can only be applied if propensity scores were recorded when the data was generated.", "- no test on a real setting", "++ The proposed method is supported both by theoretical insights and empirical experiments.", "+ empirical improvement with respect to previous methods", "details/typos: 3.1, p3: R^(h) has an indexed parenthesis", "5.2; and we more details", "5.3: so that results more comparable"], "labels": ["fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "request", "fact", "evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "fact", "evaluation", "fact", "fact", "fact", "fact"]} {"doc_id": "ByC55Gcxz", "text": ["Clearly presented paper, including a number of reasonable techniques to improve LSTM-LMs.", "The proposed techniques are heuristic,", "but are reasonable and appear to yield improvements in perplexity.", "Some specific comments follow.", "re. \"ASGD\" for Averaged SGD: ASGD usually stands for Asynchronous SGD,", "have the authors considered an alternative acronym?", "AvSGD?", "re. Optimization criterion on page 2, note that SGD is usually taken to minimizing expected loss, not just empirical loss (Bottou thesis 1991).", "Is there any theoretical analysis of convergence for Averaged SGD?", "re. paragraph starting with \"To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps\":", "the explanation is a bit unclear.", "What is the \"base sequence length\" exactly?", "Also, re. the motivation above this paragraph, I'm not sure what \"elements\" really refers to, though I can guess.", "What is the number of training tokens of the datasets used, PTB and WT2?", "Can the authors provide more explanation for what \"neural cache models\" are, and how they relate to \"pointer models\"?", "Why do the sections \"Pointer models\", \"Ablation analysis\", and \"AWD-QRNN\" come after the Experiments section?"], "labels": ["evaluation", "fact", "evaluation", "non-arg", "evaluation", "request", "request", "fact", "request", "quote", "evaluation", "request", "evaluation", "request", "request", "request"]} {"doc_id": "SJBZut5gM", "text": ["This paper examines ways of producing word embeddings for rare words on demand. ", "The key real-world use case is for domain specific terms, ", "but here the techniques are demonstrated on rarer words in standard data sets. ", "The strength of this paper is that it both gives a more systematic framework for and builds on existing ideas (character-based models, using dictionary definitions) to implement them as part of a model trained on the end task.", "The contribution is clear but not huge. ", "In general, for the scope of the paper, it seems like what is here could fairly easily have been made into a short paper for other conferences that have that category. ", "The basic method easily fits within 3 pages, ", "and while the presentation of the experiments would need to be much briefer, ", "this seems quite possible. ", "More things could have been considered. ", "Some appear in the paper, ", "and there are some fairly natural other ones such as mining some use contexts of a word (such as just from Google snippets) rather than only using textual definitions from wordnet. 
", "The contributions are showing that existing work using character-level models and definitions can be improved by optimizing representation learning in the context of the final task, and the idea of adding a learned linear transformation matrix inside the mean pooling model (p.3). ", "However, it is not made very clear why this matrix is needed or what the qualitative effect of its addition is.", "The paper is clearly written. ", "A paper that should be referred to is the (short) paper of ", "Dhingra et al. (2017): A Comparative Study of Word Embeddings for Reading Comprehension https://arxiv.org/pdf/1703.00993.pdf . ", "While it in no way covers the same ground as this paper ", "it is relevant as follows: ", "This paper assumes a baseline that is also described in that paper of using a fixed vocab and mapping other words to UNK. ", "However, they point out that at least for matching tasks like QA and NLI that one can do better by assigning random vectors on the fly to unknown words. ", "That method could also be considered as a possible approach to compare against here.", "Other comments: - The paper suggests a couple of times including at the end of the 2nd Intro paragraph that you can't really expect spelling models to perform well in representing the semantics of arbitrary words (which are not morphological derivations, etc.). ", "While this argument has intuitive appeal, ", "it seems to fly in the face of the fact that actually spelling models, including in this paper, seem to do surprisingly well at learning such arbitrary semantics.", " - p.2: You use pretrained GloVe vectors that you do not update. ", "My impression is that people have had mixed results, sometimes better, sometimes worse with updating pretrained vectors or not. ", "Did you try it both ways?", " - fn. 1: Perhaps slightly exaggerates the point being made, ", "since people usually also get good results with the GloVe or word2vec model trained on \"only\" 6 billion words \u2013 2 orders of magnitude less data.", " - p.4. When no definition is available, is making e_d(w) a zero vector worse than or about the same as using a trained UNK vector?", " - Table 1: The baseline seems reasonable (near enough to the quality of the original Salesforce model from 2016 (66 F1) but well below current best single models of around 76-78 F1. ", "The difference between D1 and D3 does well illustrate that better definition learning is done with backprop from end objective. ", "This model shows the rather strong performance of spelling models \u2013 at least on this task ", "\u2013 which he again benefit from training in the context of the end objective. ", " - Fig 2: It's weird that only the +dict (left) model learns to connect \"In\" and \"where\". ", "The point made in the text between \"Where\" and \"overseas\" is perfectly reasonable, ", "but it is a mystery why the base model on the right doesn't learn to associate the common words \"where\" and \"in\" both commonly expressing a location.", " - Table 2: These results are interestingly different. ", "Dict is much more useful than spelling here. ", "I guess that is because of the nature of NLI, ", "but it isn't 100% clear why NLI benefits so much more than QA from definitional knowledge.", " - p.7: I was slightly surprised by how small vocabs (3k and 5k words) are said to be optimal for NLI (and similar remarks hold for SQuAD). 
", "My impression is that most papers on NLI use much larger vocabs, no?", " - Fig 3: This could really be drawn considerably better: ", "make the dots bigger and their colors more distinct.", " - Table 3: The differences here are quite small and perhaps the least compelling, but the same trends hold."], "labels": ["fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "reference", "evaluation", "evaluation", "fact", "fact", "request", "fact", "evaluation", "evaluation", "fact", "evaluation", "non-arg", "evaluation", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation"]} {"doc_id": "SyVv9AjWG", "text": ["The authors present a neural embedding technique using a hyperbolic space.", "The idea of embedding data into a space that is not Euclidean is not new.", "There have been attempts to project onto (hyper)spheres.", "Also, the proposal bears some resemblance with what is done in t-SNE, where an (exponential) distortion of distances is induced. ", "Discussing this potential similarity would certainly broaden the readership of the paper.", "The organisation of the paper might be improved, with a clearer red line and fewer digressions.", "The call to the very small appendix via eq. 17 is an example.", "The position of Table in the paper is odd as well.", "The order of examples in Fig.5 differs from the order in the list.", "The experiments are well illustrative but rather small sized.", "The qualitative assessment is always interesting ", "and it is completed with some label prediction task.", "Due the geometrical consideretations developed in the paper, other quality criteria like e.g. how well neighbourhoods are preserved in the embeddings would give some more insights.", "All in all the idea developed in the paper sounds interesting ", "but the paper organisation seems a bit loose ", "and additional aspects should be investigated."], "labels": ["fact", "fact", "fact", "evaluation", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "request"]} {"doc_id": "Hk-OLVKeM", "text": ["The authors propose to speed up RL techniques, such as DQN, by utilizing expert demonstrations. ", "The expert demonstrations are sequences of consecutive states that do not include actions, which is closer to a real setting of imitation learning. ", "The goal of this process is to extract a function that maps any given state to a subgoal. ", "Subgoals are then used to learn different Q-value functions, one per subgoal. ", "To learn the function that maps states into subgoals, the authors propose a surrogate reward model that corresponds to the angle between: the difference between two consecutive states (which captures velocity or direction) and a given subgoal. ", "A von Mises- Fisher distribution policy is then assumed to be used by the expert to generate actions that guide the agent toward the subgoal. 
", "Finally, the mapping function state->subgoal is learned by performing a gradient descent on the expected total cost (based on the surrogate reward function, which also has free parameters that need to be learned).", "Finally, the authors use the DQN platform to learn a Q-value function using the learned surrogate reward function that guides the agent to specific subgoals, depending on the situation.", "The paper is overall well-written, ", "and the proposed idea seems interesting. ", "However, there are rather little explanations provided to argue for the different modeling choices made, and the intuition behind them. ", "From my understanding, the idea of subgoal learning boils down to a non-parametric (or kernel) regression where each state is mapped to a subgoal based on its closeness to different states in the expert's demonstration. ", "It is not clear how this method would generalize to new situations. ", "There is also the issue of keeping tracking of a large number of demonstration states in memory. ", "This technique reminds me of some common methods in learning from demonstrations, such as those using GPs or GMMs, ", "but the novelty of this technique is the fact that the subgoal mapping function is learned in an IRL fashion, by tacking into account the sum of surrogate rewards in the expert's demonstration. ", "The architecture of the action value estimator does not seem novel, ", "it's basically just an extension of DQN with an extra parameter (subgoal g).", "The empirical evaluation seems rather mixed. ", "Figure 3 shows that the proposed method learns faster than DQN, ", "but Table I shows that the improvement is not statistically significant, except in two games, DefendCenter and PredictPosition. ", "Are these the results after all agents had converged? ", "Overall, this is a good paper, ", "but focusing on only a single game (Doom) is a weakness that needs to be addressed ", "because one cannot tell if the choices were tailored to make the method work well for this game. 
", "Since the paper does not provide significant theoretical or algorithmic contribution, ", "at least more realistic and diverse experiments should be performed."], "labels": ["fact", "fact", "fact", "fact", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "fact", "non-arg", "evaluation", "request", "fact", "evaluation", "request"]} {"doc_id": "SycLo2FxG", "text": ["Summary The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al).", "The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors,", "b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors,", "c) based on these insights, the paper proposes a variant of MALM using a Laplace approximation (with additional approximations for the covariance matrix.", "The paper finally provides an evaluation on the mini ImageNet problem without significantly improving on the MAML results on the same task.", "Pro: - The topic is timely and of relevance to the ICLR community continuing a current trend in building meta-learning system for few-shot learning.", "- Provides valuable insight into the MAML objective and its relation to probabilistic models", "Con: - The paper is generally well-written", "but I find (as a non-meta-learner expert) that certain fundamental aspects could have been explained better or in more detail (see below for details).", "- The toy example is quite difficult to interpret the first time around", "and does not provide any empirical insight into the converge of the proposed method (compared to e.g. MAML)", "- I do not think the empirical results provide enough evidence that it is a useful/robust method.", "Especially it does not provide insight into which types of problems (small/large, linear/ non-linear) the method is applicable to.", "Detailed comments/questions: - The use of Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point-of-view.", "It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization", "as the truncation does not result in the approximation being created at a mode.", "Could the authors perhaps comment on: a) whether it is even meaningful to talk about the approximations as probabilistic distribution during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence?", "b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate)", "- Sec 4.1, p5: Last equation: Perhaps useful to explain the term $log(\\phi_j^* | \\theta)$ and why it is not in subroutine 4 .", "Should $\\phi^*$ be $\\hat \\phi$ ?", "- Sec 4.2: \u201cA straightforward\u2026\u201d: I think it would improve readability to refer back to the to the previous equation (i.e. 
H) such that it is clear what is meant by \u201cstraightforward\u201d.", "- Sec 4.2: Several ideas are being discussed in Sec 4.2", "and it is not entirely clear to me what has actually been adopted here;", "perhaps consider formalizing the actual computations in Subroutine 4 \u2013 and provide a clearer argument (preferably proof) that this leads to consistent and robust estimator of \\theta.", "- It is not clear from the text or experiment how the learning parameters are set.", "- Sec 5.1: It took some effort to understand exactly what was going on in the example and particular figure 5.1;", "e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5,", "the blue points are not defined in the caption,", "the terminology e.g. \u201cpre-update density\u201d is new at this point.", "I think it would benefit the readability to provide the reader with a bit more guidance.", "- Sec 5.1: While the qualitative example is useful (with a bit more text),", "I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML and possibly compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case)", "- Sec 5.2: The abstract clams increased performance over MAML", "but the empirical results do not seem to be significantly better than MAML ?", "I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance.", "- Sec 5.2: The authors have left out \u201cMishral et al\u201d from the comparison due to the model being significantly larger than others.", "Could the authors provide insight into why they did not use the ResNet structure from the tcml paper in their L-MLMA scheme ?", "- Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate the MAML as a HB.", "Given the advancement in gradient based inference for HB the last couple of years (e.g. variational, nested laplace , expectation propagation etc) for explicit models,", "could the authors perhaps indicate why they believe their approach of looking directly to the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area ?", "Minor: - Sec 4.1 \u201c\u2026each integral in the sum in (2)\u2026\u201d eq 2 is a product"], "labels": ["evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "request", "request", "request", "request", "fact", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "fact", "request", "evaluation", "request", "fact", "evaluation", "evaluation", "fact", "request", "fact", "fact", "request", "request"]} {"doc_id": "SkxHS_vlz", "text": ["Summary of paper: The authors present a novel attack for generating adversarial examples, deemed OptMargin, ", "in which the authors attack an ensemble of classifiers created by classifying at random L2 small perturbations. ", "They compare this optimization method with two baselines in MNIST and CIFAR, and provide an analysis of the decision boundaries by their adversarial examples, the baselines and non-altered examples. ", "Review summary: I think this paper is interesting. 
", "The novelty of the attack is a bit dim, ", "since it seems it's just the straightforward attack against the region cls defense. ", "The authors fail to include the most standard baseline attack, namely FSGM. ", "The authors also miss the most standard defense, training with adversarial examples. ", "As well, the considered attacks are in L2 norm, ", "and the distortion is measured in L2, ", "while the defenses measure distortion in L_\\infty (see detailed comments for the significance of this if considering white-box defenses). ", "The provided analysis is insightful, ", "though the authors mostly fail to explain how this analysis could provide further work with means to create new defenses or attacks.", "If the authors add FSGM to the batch of experiments (especially section 4.1) and address some of the objections I will consider updating my score.", "A more detailed review follows.", "Detailed comments: - I think the novelty of the attack is not very strong. ", "The authors essentially develop an attack targeted to the region cls defense. ", "Designing an attack for a specific defense is very well established in the literature, ", "and the fact that the attack fools this specific defense is not surprising.", "- I think the authors should make a claim on whether their proposed attack works only for defenses that are agnostic to the attack (such as PGD or region based), or for defenses that know this is a likely attack (see the following comment as well). ", "If the authors want to make the second claim, training the network with adversarial examples coming from OptMargin is missing.", "- The attacks are all based in L2, ", "in the sense that the look for they measure perturbation in an L2 sense (as the paper evaluation does), ", "while the defenses are all L_\\infty based ", "(since the region classifier method samples from a hypercube, and PGD uses an L_\\infty perturbation limit). ", "This is very problematic if the authors want to make claims about their attack being effective under defenses that know OptMargin is a possible attack.", "- The simplest most standard baseline of all (FSGM) is missing. ", "This is important to compare properly with previous work.", "- The fact that the attack OptMargin is based in L2 perturbations makes it very susceptible to a defense that backprops through the attack. ", "This and / or the defense of training to adversarial examples is an important experiment to assessing the limitations of the attack. ", "- I think the authors rush to conclude that \"a small ball around a given input distance can be misleading\". ", "Wether balls are in L2 or L_\\infty, or another norm makes a big difference in defense and attacks, ", "given that they are only equivalent to a multiplicative factor of sqrt(d) where d is the dimension of the space, and we are dealing with very high dimensional problems. ", "I find the analysis made by the authors to be very simplistic.", "- The analysis of section 4.1 is interesting, it was insightful and to the best of my knowledge novel. ", "Again I would ask the authors to make these plots for FSGM. ", "Since FSGM is known to be robust to small random perturbations, ", "I would be surprised that for a majority of random directions, the adversarial examples are brought back to the original class.", "- I think a bit more analysis is needed in section 4.2. ", "Do the authors think that this distinguishability can lead to a defense that uses these statistics? ", "If so, how?", "- I think the analysis of section 5 is fairly trivial. 
", "Distinguishability in high dimensions is an easy problem (as any GAN experiment confirms, see for example Arjovsky & Bottou, ICLR 2017), ", "so it's not surprising or particularly insightful that one can train a classifier to easily recognize the boundaries.", "- Will the authors release code to reproduce all their experiments and methods?", "Minor comments: - The justification of why OptStrong is missing from Table2 (last three sentences of 3.3) ", "should be summarized in the caption of table 2 (even just pointing to the text), ", "otherwise a first reader will mistake this for the omission of a baseline.", "- I think it's important to state in table 1 what is the amount of distortion noticeable by a human."], "labels": ["evaluation", "fact", "fact", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "request", "fact", "evaluation", "request", "request", "request", "evaluation", "evaluation", "evaluation", "non-arg", "evaluation", "request", "evaluation", "request"]} {"doc_id": "SkzzOIcez", "text": ["The authors propose a loss that is based on a RBF loss for metric learning and incorporates additional per exemplar weights in the index for classification. ", "Significant improvements over softmax are shown on several datasets.", "IMHO, this could be a worthwhile paper, ", "but the framing of the paper into existing literature is lacking ", "and thus it appears as if the authors are re-inventing the wheel (NCA loss) under a different name (RBF solver).", "The specific problems are: - The authors completely miss the connection to NCA loss (https://papers.nips.cc/paper/2566-neighbourhood-components-analysis.pdf) ", "and thus appear to be re-inventing the wheel.", " - The proposed metric learning scenario is exactly as proposed in the NCA loss works, ", "while the classification approach adds an interesting twist by learning per exemplar weights. ", "I haven't encountered this before and it could make an interesting proposal. ", "Of course the benefit of this should be evaluated in ablation studies", "( Tab 3 shows one experiment with marginal improvements).", "- The authors' use of 'solver' seems uncommon and confusing. ", "What is proposed is a loss in addition to building a weighted index in the case of classification.", "- In the metric learning comparison with softmax (end of page 9) the authors mentions that a Gaussian standard deviation for softmax is learned. ", "It appears as if the authors use the softmax logits as embedding ", "whereas the more common approach is to use the bottleneck layer. ", "This is also indicated by the discussion at the end of page 10 where the authors mention that softmax is restricted to axis aligned embeddings. ", "All softmax metric learning experiments should be carried out on appropriately sized bottleneck layers.", "- Some of the motivations of what the various methods learn seem flawed, ", "e.g. triplet loss CAN learn multiple modes per class and there is nothing in the Softmax loss that encourages the classes to fill a large region of the space.", "- Why don't the authors compare on ImageNet?", "Some positive points: - The authors mention in Sec 3.3 that updating the RBF centres is not required. 
", "This is a crucial point that should be made a centerpiece of this work, ", "as there are many metric learning works that struggle with this. ", "Additional experiments that can investigate this point would greatly contribute to a well rounded paper.", "- The numbers reported in Tab 1 show very significant improvements", "If the paper was re-framed and builds on top of the already existing NCA loss, there could be valuable contributions in this paper. ", "The experimental comparisons are lacking in some respect, ", "as the comparison with Softmax as a metric learning method seems uncommon, i.e. using the logits instead of the bottleneck layer. ", "I encourage the authors to extend the paper and flesh out some of the experiments and then submit it again."], "labels": ["fact", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "evaluation", "fact", "fact", "fact", "fact", "fact", "request", "evaluation", "fact", "request", "fact", "request", "evaluation", "request", "evaluation", "request", "evaluation", "evaluation", "request"]} {"doc_id": "HkxWKlkgM", "text": ["Although GAN recently has attracted so many attentions, ", "the theory of GAN is very poor. ", "This paper tried to make a new insight of GAN from theories ", "and I think their approach is a good first step to build theories for GAN. ", "However, I believe this paper is not enough to be accepted. ", "The main reason is that the main theorem (Theorem 4.1) is too restrictive.", "1.\tThere is no theoretical result for failed conditions. ", "2.\tTo obtain the theorem, they assume the optimal discriminator. ", "However, most of failed scenarios come from the discriminator dynamics as in Figure 2. ", "3.\tThe authors could make more interesting results using the current ingredients. ", "For instance, I would like to check the conditions on eta and T to guarantee d_TV(G_mu*, G_hat{mu})<= delta_1 when |mu*_1 \u2013 mu*_2| >= delta_2 and |hat{mu}_1 \u2013 hat{mu}_2| >= delta_3. ", "In Theorem 4.1, the authors use the same delta for delta_1, delta_2, delta_3. ", "So, it is not clear which initial condition or target performance makes the eta and T."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "fact", "fact", "request", "request", "fact", "evaluation"]} {"doc_id": "Hkns5PSlM", "text": ["[Summary] The paper is overall well written ", "and the literature review fairly up to date.", "The main issue is the lack of novelty.", "The proposed method is just a straightforward dimensionality reduction based on convolutional and max pooling layers.", "Using CNNs to handle variable length time series is hardly novel.", "In addition, as always with metric learning, why learning the metric if you can just learn the classifier?", "If the metric is not used in some compelling application, I am not convinced.", "[Detailed comments and suggestions] * Since \"assumptions\" is the only subsection in Section 2, ", "I would use \\texbf{Assumptions.} rather than \\subsection{Assumptions}.", "* Same remark for Section 4.1 \"Complexity analysis\".", "* Some missing relevant citations:", "Learning the Metric for Aligning Temporal Sequences. Damien Garreau, R\u00e9mi Lajugie, Sylvain Arlot, Francis Bach. In Proc. of NIPS 2014.", "Deep Convolutional Neural Networks On Multichannel Time Series For Human Activity Recognition. Jian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, Shonali Krishnaswamy. In Proc. 
of IJCAI 2015.", "Time Series Classification Using Multi-Channels Deep Convolutional Neural Networks Yi ZhengQi LiuEnhong ChenYong GeJ. Leon Zhao. In Proc. of International Conference on Web-Age Information Management.", "Soft-DTW: a Differentiable Loss Function for Time-Series. Marco Cuturi, Mathieu Blondel. In Proc. of ICML 2017."], "labels": ["evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "request", "request", "evaluation", "reference", "reference", "reference", "reference"]} {"doc_id": "SJzxff6xM", "text": ["This paper proposes the use of an ensemble of regression SVM models to predict the performance curve of deep neural networks. ", "This can be used to determine which model should be trained (further). ", "The authors compare their method, named Sequential Regression Models (SRM) in the paper, to previously proposed methods such as BNN, LCE and LastSeenValue and claim that their method has higher accuracy and less time complexity than the others. ", "They also use SRM in combination with a neural network meta-modeling method and a hyperparameter optimization one and show that it can decrease the running time in these approaches to find the optimized parameters.", "Pros: The paper is proposing a simple yet effective method to predict accuracy. ", "Using SVM for regression in order to do accuracy curve prediction was for me an obvious approach, ", "I was surprised to see that no one has attempted this before. ", "Using features sur as time-series (TS), Architecture Parameters (AP) and Hyperparameters (HP) is appropriate, ", "and the study of the effect of these features on the performance has some value. ", "Joining SRM with MetaQNN is interesting ", "as the method is a computation hog that can benefit from such refinement. ", "The overall structure of the paper is appropriate. ", "The literature review seems to cover and categorize well the field.", "Cons: I found the paper difficult to read. ", "In particular, the SRM method, which is the core of the paper, is not described properly, ", "I am not able to make sense of the description provided in Sec. 3.1. ", "The paper is not talking about the weaknesses of the method at all. ", "The practicability of the method can be controversial, ", "the number of attempts require to build the (meta-)training set of runs can be huge and lead to something that would be much more costful that letting the runs going on for more iterations. ", "Questions: 1. The approach of sequential regression SVM is not explained properly. ", "Nothing was given about the combination weights of the method. ", "How is the ensemble of (1-T) training models trained to predict the f(T)?", "2. SRM needs to gather training samples which are 100 accuracy curves for T-1 epochs. ", "This is the big challenge of SRM ", "because training different variations of a deep neural networks to T-1 epochs can be a very time consuming process. ", "Therefore, SRM has huge preparing training dataset time complexity that is not mentioned in the paper. ", "The other methods use only the first epochs of considered deep neural network to guess about its curve shape for epoch T. ", "These methods are time consuming in prediction time. ", "The authors compare only the prediction time of SRM with them ", "which is really fast. ", "By the way still, SRM is interesting method if it can be trained once and then be used for different datasets without retraining. ", "Authors should show these results for SRM. ", "3. 
Discussing the robustness of SRM for different depths is interesting ", "and I suggest preparing more results to show the robustness of SRM to violations of different hyperparameters. ", "4. There is no report of results on huge datasets like the full ImageNet, ", "which takes a lot of time for deep training and for which we need automatic early-stopping algorithms to tune the hyperparameters of our model.", "5. In Table 2 and Figure 3 the results are reported as a percentage of the learning curve used. ", "To be more informative, they should be reported by number of epochs, in addition to (or instead of) the percentage.", "6. In section 4, the authors talk about estimating the model uncertainty at the stopping point and propose a way to estimate it. ", "But we cannot find any experimental results related to the effectiveness of the proposed method and the assumptions made.", "There are also some typos. ", "In section 3.3, part Ablation Study on Features Sets, line 5, the sentence should be \u201cAP are more important than HP\u201d."], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "request", "fact", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "request", "fact", "evaluation", "fact", "request", "fact", "fact", "request", "request"]} {"doc_id": "B1chwjFlz", "text": ["The authors propose a scheme to generate questions based on some answer sentences, topics and question types. ", "Topics are extracted from questions using similar words in question-answer pairs. ", "It is similar to what we find in some Q&A systems (like lexical answer types in Watson). ", "A sequence classifier is also used to tag the presence of topic words. ", "Question types correspond mostly to salient question words. ", "LSTMs are used to encode the various inputs and generate the questions. ", "The paper is well written and easy to follow. ", "I would expect more explanation of why the sentence classification and labeling results presented in Table 2 are so low. ", "Experimental results on question generation are convincing and clearly indicate that the approach is effective at generating relevant and well-structured short questions. ", "The main weakness of the paper is the selected set of question types, which seems to be a fuzzy combination of answer types and question types (e.g. yes/no). ", "Some question types can be highly ambiguous; ", "for instance \u201cWhat\u201d might lead to a definition, a quantity, some named entities... ", "Hence I suggest you revise your qt set.
", "I would also suggest, for your next experiments, that you try to generate questions leading to answers with list of values."], "labels": ["fact", "fact", "evaluation", "fact", "fact", "fact", "evaluation", "request", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request"]} {"doc_id": "HJ_BtypgM", "text": ["This paper offers a very promising approach to the processing of the type of sequences we find in dialogues, somewhat in between RNNs which have problem modeling memory, and memory networks whose explicit modeling of the memory is too rigid.", "To achieve that, the starting point seems to be a strength GRU that has the ability to dynamically add memory banks to the original dialogue and question sentence representations, thanks to the use of imperative DNN programming. ", "The use of the reparametrization trick to enable global differentiability is reminiscent of an ICLR'17 paper \"Learning graphical state transitions\". ", "Compared to the latter, the current paper seems to offer a more tractable architecture and optimization problem that does not require strong supervision and should be much faster to train.", "Unfortunately, this is the best understanding I got from this paper, ", "as it seems to be in such a preliminary stage that the exact operations of the SGRU are not parsable. ", "Maybe the authors have been taken off guard by the new review process where one can no longer improve the manuscript during this 2017 review ", "(something that had enabled a few paper to pass the 2016 review).", "After a nice introduction, ", "everything seems to fall apart in section 4, as if the authors did not have time to finish their write-up. ", "- N is both the number of sentences and number of word per sentence, ", "which does not make sense.", "- i iterates over both the sentences and the words. ", "The critical SGRU algorithm is impossible to parse", "- The hidden vector sigma, which is usually noted h in the GRU notation, is not even defined", "- The critical reset gate operation in Eq.(6) is not even explained, and modified in a way I do not understand compared to standard GRU.", "- What is t? ", "From algorithm 1 in Appendix A, it seems to correspond to looping over both sentences and words.", "- The most novel and critical operation of this SGRU, to process the entities of the memory bank, is not even explained. ", "All we get at the end of section 4.2 is \" After these steps are finished, all entities are passed through the strength modified GRU (4.1) to recompute question relevance.\"", "The algorithm in Appendix A does not help much. ", "With PyTorch being so readable, I wish some source code had been made available.", "Experiments reporting also contains unacceptable omissions and errors:", "- The definition of 'failed task', essential for understanding, is not stated (more than 5% error)", "- Reported numbers of failed tasks are erroneous: ", "it should be 1 for DMN+ and 3 for MemN2N.", "Page 3: dynanet -> dynet"], "labels": ["evaluation", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "fact", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "request", "evaluation", "fact", "fact", "request", "request"]} {"doc_id": "ry6zwoief", "text": ["Summary The authors argue that ensemble prediction takes too much computation time and resource, especially in the case of deep neural networks. 
", "They then address the problem by proposing an adaptive prediction approach. ", "The approach is based on the observation that it is most important for ensemble approaches to focus on the \"uncertain\" examples. ", "The proposed approach thus conducts early-stopping prediction when the confidence (certainty) of the prediction is high enough, where the confidence is based on the confidence intervals of (multi-class) labels based on the student-t distribution. ", "Experiments on vision datasets demonstrate that the proposed approach is effective in reducing computation resources while maintaining sufficient accuracy.", "Comments * The experiments are limited in the scope of (image) multi-class classification. ", "It is not clear whether the proposed approach is effective for other classification tasks, or even more sophisticated tasks like multi-label classification or sequence tagging.", "* The idea appears elegant but rather straightforward. ", "One important baseline that is easy but not discussed is to set a static threshold on pairwise comparison (p_max - p_secondmax). ", "Would this baseline be competitive with the proposed approach? ", "Such a comparison is able to demonstrate the benefits of using confidence interval.", "* The overall improvement in computation time seems to be within a constant scale, ", "which can be easily achieved by doing ensemble prediction in parallel ", "(note that the proposed approach would require predicting sequentially). ", "So are there real applications that can benefit from the improvement?", "* typo: p4, line19, neural \"netowkrs\" -> neural \"networks\""], "labels": ["fact", "fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "evaluation", "request", "evaluation", "fact", "evaluation", "fact", "request", "request"]} {"doc_id": "SJNicmVeM", "text": ["The paper presents Erdos-Selfridge-Spencer games as environments for investigating deep reinforcement learning algorithms. ", "The proposed games are interesting and clearly challenging, ", "but I am not sure what they tell us about the algorithms chosen to test them. ", "There are some clarity issues with the justification and evaluation which undermine the message the authors are trying to make.", "In particular, I have the following concerns: \u2022 these games have optimal policies that are expressible as a linear model, meaning that if the architecture or updating of the learning algorithm is such that there is a bias towards exploring these parts of policy space, then they will perform better than more general algorithms. ", "What does this tell us about the relative merits of each approach? ", "The authors could do more to formally motivate these games as \"difficult\" for any deep learning architecture if possible.", " \u2022 the authors compare linear models with non-linear models at some point for attacker policies, ", "but it is unclear whether these linear models are able to express the optimal policy. ", "In fact, there is a level of non-determinism in how the attacker policies are encoded which means that an optimal policy cannot be (even up to soft-max) expressed by the agent ", "(as I read things the number of pieces chosen in level l is always chosen uniformly randomly).", " \u2022 As the authors state, this paper is an empirical evaluation, ", "and the theorems presented are derived from earlier work. 
There is possibly too much focus on the proofs of these theorems.", " \u2022 There are a number of ambiguities and errors which place difficulties on the interpretation (and potential replication) of the experiments. ", "As this is an empirical study, ", "this is the yardstick by which the paper should be judged. ", "In particular, this relates to: \u25e6 The architecture of each of the tested Deep RL methods.", " \u25e6 What is done to select appropriate tuning parameters of the tested Deep RL methods, if anything.", " \u25e6 It is unclear whether 'incorrect actions' in the supervised learning evaluations refer to non-optimal actions, or simply actions that do not preserve the dominance of the defender, e.g. both partitions may have potential >0.5", " \u25e6 Fig 4. right looks like a reward signal, ", "but is labelled Proportion correct. ", "The text is not clear enough to be sure which it is.", " \u25e6 Fig 4. left and right have 4 methods: rl rewards, rl correct actions, sup rewards, and sup correct actions. ", "The specifics of how these methods are constructed are unclear from the paper.", " \u25e6 What parts of the evaluation explore how well these methods are able to represent the states (feature/representation learning) ", "and what parts are evaluating the propagation of sparse rewards (the reinforcement learning core)? ", "The authors could be clearer and more targeted with respect to this question.", "There is value in this work, ", "but in its current state I do not think it is ready for publication.", "# Detailed notes [p4, end of sec 3] The authors say that the difficulty of the games can be varied with \"continuous changes in potential\", ", "but the potential is derived from the discrete initial game state, ", "so these values are not continuously varying (even though it is possible to adjust them by non-integer amounts).", "[p4, sec 4.1] \"strategy unevenly partitions the occupied levels...with the proportional difference between the two sets being sampled randomly\"", "What is meant by this? ", "The proportional difference between the two sets is discussed as if it is a continuous property, ", "but must be chosen from the discrete set of all available partitions. ", "If a partition is chosen uniformly at random from all possible sets A, B (and the potential proportion calculated) then I don't know why it would be written in this way. ", "That suggests that proportions that are closer to 1:1 are chosen more often than \"extreme\" partitions, ", "but how? ", "This feels a little under-justified.", "\"very different states A, B (uneven potential, disjoint occupied levels)\"", "Are these states really \"very different\", or at least for the reasons indicated? ", "Later on (Theorem 3) we see how an optimal partition is generated. ", "This chooses a partition where one part contains all pieces in layer (l+1) and above and one part with all pieces in layer (l-1) and below, with layer l being distributed between the two parts. ", "The first part will typically have a slightly lower potential than the other ", "and all layers other than layer l will be disjoint.", "[p6, Fig 4] The right plot y-limits vary between -1 and 1 ", "so it cannot represent a proportion of correct actions. ", "Also, in the text the authors say:", " >> The results, shown in Figure 4 are surprising. Reinforcement learning >> is better at playing the game, but does worse at predicting optimal moves.", "I am not sure which plot shows the playing of the game. ", "Is this the right-hand plot?
In which case are we looking at rewards? ", "In fact, I am a little confused as to what is being shown here. ", "Is \"sup rewards\" a supervised learning method trained on rewards, or evaluated on rewards, or both? ", "And how is this done? ", "The text is just not clear enough.", "[p7 Fig 6 and text] Here the authors are comparing how well agents select the optimal actions as compared to how close they are to the end of the game. ", "This relates to the \"surprising\" fact that \"Reinforcement learning is better at playing the game, but does worse at predicting optimal moves.\" ", "I think an important point here is how many training/test examples there are in each bin. ", "If there are more in the range 3-7 moves from the end of the game than there are outside this range, then the supervised learner will", "[p8 proof of theorem 3] \"\u03c6(A_{l+1}) < 0.5 and \u03c6(A_l) > 0.5.\"", "Is it true that both these inequalities are strict?", "\"Since A_l only contains pieces from levels K to l + 1\"", "In fact this should read from levels K to l.", "\"we can move k < m \u2212 n pieces from A_{l+1} to A_l\"", "Do the authors mean that we can define a partition A, B where A = A_{l+1} plus some (but not all) elements in level l (A_{l}\\setminus A_{l+1})?", "\"...such that the potential of the new set equals 0.5\"", "It will equal exactly 0.5 as suggested, ", "but the authors could make it more precise as to why (there is a value n+k < l (maybe <=l) such that (n+k)*2^{-(K-l+1)}=0.5 (guaranteed)). ", "They should also indicate why this then justifies their proof (namely that phi(S0)-0.5 >= 0.5).", "[p8 parameterising action space] A comment: this doesn't give as much control as the authors suggest. ", "Perhaps the agent should also choose the proportion of elements in layer l to set A. ", "For instance, if there are a large number of elements in l, and/or phi(A_{l+1}) is very close to 0.5 (or phi(A_l) is very close to 0.5), then this doesn't give the attacker the opportunity to fine-tune the policy to select very good partitions. ", "It is unclear what level of control agents can be expected to have under various conditions (K and starting states).", "[p9 Fig 8] As the defender's score is functionally determined by the attacker's score, ", "it doesn't help to include this on the plot.
It just distracts from the signal."], "labels": ["fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "request", "fact", "evaluation", "fact", "fact", "fact", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "evaluation", "evaluation", "fact", "evaluation", "fact", "evaluation", "request", "request", "request", "evaluation", "evaluation", "fact", "fact", "fact", "quote", "request", "evaluation", "fact", "evaluation", "evaluation", "request", "evaluation", "quote", "evaluation", "non-arg", "request", "evaluation", "request", "request", "evaluation", "request", "request", "request", "evaluation", "fact", "evaluation", "evaluation", "fact", "request", "evaluation", "evaluation", "fact", "quote", "request", "quote", "request", "quote", "fact", "request", "request", "evaluation", "request", "evaluation", "evaluation", "fact", "request", "evaluation"]} {"doc_id": "r14eWGtez", "text": ["The authors propose a technique to compress LSTMs in RNNs by using a group Lasso regularizer which results in structured sparsity, by eliminating individual hidden layer inputs at a particular layer.", "The authors present experiments on unidirectional and bidirectional LSTM models which demonstrate the effectiveness of this method.", "The proposed techniques are evaluated on two models: a fairly large LSTM with ~66.0M parameters, as well as a more compact LSTM with ~2.7M parameters, which can be sped up significantly through compression.", "Overall this is a clearly written paper that is easy to follow, with experiments that are well motivated.", "To the best of my knowledge most previous papers in the area of RNN compression focus on pruning or compression of the node outputs/connections, but do not focus as much on reducing the computation/parameters within an RNN cell.", "I only have a few minor comments/suggestions which are listed below:", "1. It is interesting that the model structure where the number of parameters is reduced to the number of ISSs chosen from the proposed procedure does not attain the same performance as when training with a larger number of nodes, with the group lasso regularizer.", "It would be interesting to conduct experiments for a range of \\lambda values, i.e., to allow for different degrees of compression, and then examine whether the model trained from scratch with the \u201coptimal\u201d structure achieves performance closer to the ISS-based strategy; for example, for smaller amounts of compression, this might be the case.", "2. In the experiment, the authors use a weaker dropout when training with ISS.", "Could the authors also report performance for the baseline model if trained with the same dropout (but without the group LASSO regularizer)?", "3. The colors in the figures: especially the blue vs. green contrast is really hard to see.", "It might be nicer to use lighter colors, which are more distinct.", "4. The authors mention that the thresholding operation to zero out weights based on the hyperparameter \\tau is applied \u201cafter each iteration\u201d.", "What is an iteration in this context?", "An epoch, a few mini-batch updates, per mini-batch?", "Could the authors please clarify?", "5.
Clarification about the hyperparameter \\tau used for sparsification: Is \\tau determined purely based on the converged weight values in the model when trained without the group LASSO constraint?", "It would be interesting to plot a histogram of weight values in the baseline model, and perhaps also after the group LASSO regularized training.", "6. Is the same value of \\lambda used for all groups in the model?", "It would be interesting to consider the effect of using stronger sparsification in the earlier layers, for example.", "7. Section 4.2: Please explain what the exact match (EM) and F1 metrics used to measure performance of the BIDAF model are, in the text.", "Minor Typographical/Grammatical errors: - Sec 1: \u201c... in LSTMs meanwhile maintains the dimension consistency.\u201d \u2192 \u201c... in LSTMs while maintaining the dimension consistency.\u201d", "- Sec 1: \u201c... is public available\u201d \u2192 \u201cis publicly available\u201d", "- Sec 2: Please rephrase: \u201cAfter learning those structures, compact LSTM units remain original structural schematic but have the sizes reduced.\u201d", "- Sec 4.1: \u201cThe exactly same training scheme of the baseline ...\u201d \u2192 \u201cThe same training scheme as the baseline ...\u201d"], "labels": ["fact", "fact", "fact", "evaluation", "evaluation", "evaluation", "evaluation", "request", "fact", "request", "evaluation", "request", "fact", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request", "request"]}