id
stringlengths 11
20
| paper_text
stringlengths 29
163k
| review
stringlengths 666
24.3k
|
---|---|---|
iclr_2018_B1EVwkqTW | While deep neural networks have shown outstanding results in a wide range of applications, learning from a very limited number of examples is still a challenging task. Despite the difficulties of the few-shot learning, metric-learning techniques showed the potential of the neural networks for this task. While these methods perform well, they don't provide satisfactory results. In this work, the idea of metric-learning is extended with Support Vector Machines (SVM) working mechanism, which is well known for generalization capabilities on a small dataset. Furthermore, this paper presents an end-to-end learning framework for training adaptive kernel SVMs, which eliminates the problem of choosing a correct kernel and good features for SVMs. Next, the one-shot learning problem is redefined for audio signals. Then the model was tested on vision task (using Omniglot dataset) and speech task (using TIMIT dataset) as well. Actually, the algorithm using Omniglot dataset improved accuracy from 98.1% to 98.5% on the one-shot classification task and from 98.9% to 99.3% on the few-shot classification task. | Make SVM great again with Siamese kernel for few-shot learning
** PAPER SUMMARY **
The author proposes to combine siamase networks with an SVM for pair classification. The proposed approach is evaluated on few shot learning tasks, on omniglot and timit.
** REVIEW SUMMARY **
The paper is readable but it could be more fluent. It lacks a few references and important technical aspects are not discussed. It contains a few errors. Empirical contribution seems inflated on omniglot as the authors omit other papers reporting better results. Overall, the contribution is modest at best.
** DETAILED REVIEW **
On mistakes, it is wrong to say that an SVM is a parameterless classifier. It is wrong to cite (Boser et al 92) for the soft-margin SVM. I think slack variables come from (Cortes et al 95). "consistent" has a specific definition in machine learning https://en.wikipedia.org/wiki/Consistent_estimator , you must use a different word in 3.2. You mention that a non linear SVM need a similarity measure, it actually need a positive definite kernel which has a specific definition, https://en.wikipedia.org/wiki/Positive-definite_kernel .
On incompleteness, it is not obvious how the classifier is used at test time. Could you explain how classes are predicted given a test problem? The setup of the experiments on TIMIT is extremely unclear. What are the class you are interested in? How many classes and examples does the testing problems have?
On clarity, I do not understand why you talk again about non-linear SVM in the last paragraph of 3.2. since you mention at the end of page 4 that you will only rely on linear SVMs for computational reasons. You need to mention explicitely somewhere that (w,\theta) are optimized jointly. The sentence "this paper investigates only the one versus rest approach" is confusing, as you have only two classes from the SVM perspective i.e. pairs (x1,x2) where both examples come from the same class and pairs (x1,x2) where they come from different class. So you use a binary SVM, not one versus rest. You need to find a better justification for using L2-SVM than "L2-SVM loss variant is considered to be the best by the author of the paper", did you try classical SVM and found them performing worse? Also could you motivate your choice for L1 norm as opposed to L2 in Eq 3?
On empirical evaluation, I already mentioned that it impossible to understand what the classification problem on TIMIT is. I suspect it might be speaker identification. So I will focus on the omniglot experiments.
Few-Shot Learning Through an Information Retrieval Lens, Eleni Triantafillou, Richard Zemel, Raquel Urtasun, NIPS 2017 [arxiv July'17]
and the reference therein give a few more recent baselines than your table. Some of the results are better than your approach. I am not sure why you do not evaluate on mini-imagenet as well as most work on few shot learning generally do. This dataset offers a clearer experimental setup than your TIMIT setting and has abundant published baseline results. Also, most work typically use omniglot as a proof of concept and consider mini-imagenet as a more challenging set. |
iclr_2018_SJTB5GZCb | The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. We present a simple two-phase learning procedure for fixed point recurrent networks that addresses both these issues. In our model, neurons perform leaky integration and synaptic weights are updated through a local mechanism. Our learning method extends the framework of Equilibrium Propagation to general dynamics, relaxing the requirement of an energy function. As a consequence of this generalization, the algorithm does not compute the true gradient of the objective function, but rather approximates it at a precision which is proven to be directly related to the degree of symmetry of the feedforward and feedback weights. We show experimentally that the intrinsic properties of the system lead to alignment of the feedforward and feedback weights, and that our algorithm optimizes the objective function. | The manuscript discusses a learning algorithm that is based on the equilibrium propagation method, which can be applied to networks with asymmetric connections. This extension is interesting, but the results seem to be incomplete and missing necessary additional analyses. Therefore, I do not recommend acceptance of the manuscript in its current form. The main issues are:
1) The theoretical result is incomplete since it fails to show that the algorithm converges to a meaningful learning result. Also the experimental results do not sufficiently justify the claims.
2) The paper further makes statements about the performance and biological plausibility of the proposed algorithm that do not hold without additional justification.
3) The paper does not sufficiently discuss and compare the relevant neuroscience literature and related work.
Details to major points:
1) The presentation of the theoretical results is misleading. Theorem 1 shows that the proposed neuron dynamics has a fixed point that coincides with a local minimum of the objective function if the weights are symmetric. However, this was already clear from the original equilibrium propagation paper. The interesting question is whether the proposed algorithm automatically converges to the condition of symmetric weights, which is left unanswered. In Figure 3 experimental evidence is provided, but the results are not convincing given that the weight alignment only improves by ~1° throughout learning (compared to >45° in Lillicrap et al., 2017). It is even unclear to me if this effect is statistically significant. How many trials did the authors average over here? The authors should provide standard statistical significance measures for this plot. Since no complete theoretical guarantees are provided, a much broader experimental study would be necessary to justify the claims made in the paper.
2) Throughout the paper it is claimed that the proposed learning algorithm is biologically plausible. However, this argument is also not sufficiently justified. Most importantly, it is unclear how the proposed algorithm would behave in a biologically realistic recurrent networks and it is unclear how the different learning phases should be realized in the brain.
Neural networks in the brain are abundantly recurrent. Even in the layered structure of the neocortex one finds dense lateral connectivity between neurons on each layer. It is not clear to me how the proposed algorithm could be applied to such networks. In a recurrent network, rolled-out over time, information would need to be passed forward and backwards in time. The proposed algorithm does not seem to provide a solution to this temporal credit assignment problem. Also in the experiments the algorithm is applied only to feedforward architectures. What would happen if recurrent networks were used to learn temporal tasks like TIMIT? Please discuss.
In the discussion on page 8 the authors further argue that the learning phases of the proposed algorithm could be implemented in the cortex through theta waves that modulate long-term plasticity. To support this theory the authors cite the results from Orr et al., 2001, where hippocampal place cells in behaving rats were studied. To my knowledge there is no consensus on the precise nature of this modulation of plasticity. E.g. in Wyble et al. 2003, it was observed that application of learning protocols at different phases of theta waves actually leads to a sign change in learning, i.e. long term potentiation was modulated to depression. It seems to me that the algorithm is not compatible with these other experimental findings, since gradients only point in the correct direction towards the final phase and any non-zero learning rate in other phases would therefore perturb learning. Did the authors try non-optimal learning rate schedules in the experiments (including sign change etc.) to test the robustness of the proposed algorithm? Also to my knowledge, the modulatory effect of theta rhythms has so far only been described in the CA1 region of rodent hippocampus which is a very specialized region of the brain (see Hanslmayr et al., 2016, for a review and a modern hypothesis on the role of theta rhythms in the brain).
Furthermore, the discussion of the possible implementation of the learning algorithm in analog hardware on page 8 is missing an explanation of how the different learning phases of the algorithm are controlled on the chip. One of the advantages of analog hardware is that it does not require global clocking, unlike classical digital hardware, which is expensive in wiring and energy requirement. It seems to me that this advantage would disappear if the algorithm was brought to an analog chip, since global information about the learning phase has to be communicated to each synapse. Is there an alternative to a global wiring scheme to convey this information throughout the whole chip? Please discuss this in more depth.
3) The authors apply the learning algorithm only to the MNIST dataset, which is a relatively simple task. Similar results were also achieved using random feedback alignment (Lillicrap et al., 2017). Also, the evolutionary strategies method (Salimans et al., 2017), was recently used for learning deep networks and applied to complex reinforcement learning problems and could likewise also be applied to simple classification tasks. Both these methods are arguably as simple and biologically plausible as the proposed algorithm. It would be good to try other standard benchmark tasks and report and compare the performance there. Furthermore, the paper is missing a broader related work section that discusses approaches for biologically plausible learning rules for deep neural architectures.
Minor points:
The proposed algorithm uses different learning rates that shrink exponentially with the layer number. Have the authors explored whether the algorithm works for really deep architectures with several tens of layers? It seems to me that the used learning rate heuristic may hinder scalability of equilibrium propagation.
On page 5 the authors write: "However we observe experimentally that the dynamics almost always converges." This needs to be quantified. Did the authors find that the algorithm is very sensitive to initial conditions?
References:
Bradley P. Wyble, Vikas Goyal, Christina A. Rossi, and Michael E. Hasselmo. Stimulation in Hippocampal Region CA1 in Behaving Rats Yields Long-Term Potentiation when Delivered to the Peak of Theta and Long-Term Depression when Delivered to the Trough James M. Hyman. Journal of Neuroscience. 2003.
Simon Hanslmayr, Bernhard P. Staresina, and Howard Bowman. Oscillations and Episodic Memory: Addressing the Synchronization/Desynchronization Conundrum. Trends in Neurosciences. 2016.
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. Arxiv. 2017. |
iclr_2018_SJIA6ZWC- | Machine learning models are usually tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of both weights and hyperparameters. Our method trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our method converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters. | *Summary*
The paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters to the weights of the model we wish to tune.
The paper gives a theoretical justification of its approach, and then describes several variants of its core algorithm which mix the training of the hyper-networks together with the optimization of the hyper-parameters themselves. Finally, experiments based on MNIST illustrate the properties of the proposed approach.
While the core idea may appear as appealing, the paper suffers from several flaws (as further detailed afterwards):
-Insufficient related work
-Correctness/rigor of Theorem 2.1
-Clarity of the paper (e.g., Sec. 2.4)
-Experiments look somewhat artificial
-How scalable is the proposed approach in the perspective of tuning models way larger/more complex than those treated in the experiments?
*Detailed comments*
-"...and training the model to completion." and "This is wasteful, since it trains the model from scratch each time..." (and similar statement in Sec. 2.1): Those statements are quite debatable. There are lines of work, e.g., in Bayesian optimization, to model early stopping/learning curves (e.g., Domhan2014, Klein2017 and references therein) and where training procedures are explicitly resumed (e.g., Swersky2014, Li2016). The paper should reformulate its statements in the light of this literature.
-"Uncertainty could conceivably be incorporated into the hypernet...". This seems indeed an important point, but it does not appear as clear how to proceed (e.g., uncertainty on w_phi(lambda) which later needs to propagated to L_val); could the authors perhaps further elaborate?
-I am concerned about the rigor/correctness of Theorem 2.1; for instance, how is the continuity of the best-response exploited? Also, throughout the paper, the argmin is defined as if it was a singleton while in practice it is rather a set-valued mapping (except if there is a unique minimizer for L_train(., lambda), which is unlikely to be the case given the nature of the considered neural-net model). In the same vein, Jensen's inequality states that Expectation[g(X)] >= g(Expectation[X]) for some convex function g and random variable X; how does it precisely translate into the paper's setting (convexity, which function g, etc.)?
-Specify in Alg. 1 that "hyperopt" refers to a generic hyper-parameter procedure.
-More details should be provided to better understand Sec. 2.4. At the moment, it is difficult to figure out (and potentially reproduce) the model which is proposed.
-The training procedure in Sec. 4.2 seems quite ad hoc; how sensitive was the overall performance with respect to the optimization strategy? For instance, in 4.2 and 4.3, different optimization parameters are chosen.
-typo: "weight decay is applied the..." --> "weight decay is applied to the..."
-"a standard Bayesian optimization implementation from sklearn": Could more details be provided? (there does not seem to be implementation there http://scikit-learn.org/stable/model_selection.html to the best of my knowledge)
-The experimental set up looks a bit far-fetched and unrealistic: first scalar, than diagonal and finally matrix-weighted regularization schemes. While the first two may be used in practice, the third scheme is not used in practice to the best of my knowledge.
-typo: "fit a hypernet same dataset." --> "fit a hypernet on the same dataset."
-(Franceschi2017) could be added to the related work section.
*References*
(Domhan2014) Domhan, T.; Springenberg, T. & Hutter, F. Extrapolating learning curves of deep neural networks ICML 2014 AutoML Workshop, 2014
(Franceschi2017) Franceschi, L.; Donini, M.; Frasconi, P. & Pontil, M. Forward and Reverse Gradient-Based Hyperparameter Optimization preprint arXiv:1703.01785, 2017
(Klein2017) Klein, A.; Falkner, S.; Springenberg, J. T. & Hutter, F. Learning curve prediction with Bayesian neural networks International Conference on Learning Representations (ICLR), 2017, 17
(Li2016) Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A. & Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization preprint arXiv:1603.06560, 2016
(Swersky2014) Swersky, K.; Snoek, J. & Adams, R. P. Freeze-Thaw Bayesian Optimization preprint arXiv:1406.3896, 2014
*********
Update post rebuttal
*********
I acknowledge the fact that I read the rebuttal of the authors, whom I thank for their detailed answers.
My minor concerns have been clarified. Regarding the correctness of the proof, I am still unsure about the applicability of Jensen inequality; provided it is true, then it is important to see that the results seem to hold only for particular hyperparameters, namely regularization parameters (as explained in the new updated proof). This limitation should be exposed transparently upfront in the paper/abstract.
Together with the new experiments and comparisons, I have therefore updated my rating from 5 to 6. |
iclr_2018_Hyg0vbWC- | Published as a conference paper at ICLR 2018 GENERATING WIKIPEDIA BY SUMMARIZING LONG SEQUENCES
We show that generating English Wikipedia articles can be approached as a multidocument summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoderdecoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations. | The main significance of this paper is to propose the task of generating the lead section of Wikipedia articles by viewing it as a multi-document summarization problem. Linked articles as well as the results of an external web search query are used as input documents, from which the Wikipedia lead section must be generated. Further preprocessing of the input articles is required, using simple heuristics to extract the most relevant sections to feed to a neural abstractive summarizer. A number of variants of attention mechanisms are compared, including the transofer-decoder, and a variant with memory-compressed attention in order to handle longer sequences. The outputs are evaluated by ROUGE-L and test perplexity. There is also a A-B testing setup by human evaluators to show that ROUGE-L rankings correspond to human preferences of systems, at least for large ROUGE differences.
This paper is quite original and clearly written. The main strength is in the task setup with the dataset and the proposed input sources for generating Wikipedia articles. The main weakness is that I would have liked to see more analysis and comparisons in the evaluation.
Evaluation:
Currently, only neural abstractive methods are compared. I would have liked to see the ROUGE performance of some current unsupervised multi-document extractive summarization methods, as well as some simple multi-document selection algorithms such as SumBasic. Do redundancy cues which work for multi-document news summarization still work for this task?
Extractiveness analysis:
I would also have liked to see more analysis of how extractive the Wikipedia articles actually are, as well as how extractive the system outputs are. Does higher extractiveness correspond to higher or lower system ROUGE scores? This would help us understand the difficulty of the problem, and how much abstractive methods could be expected to help.
A further analysis which would be nice to do (though I have less clear ideas how to do it), would be to have some way to figure out which article types or which section types are amenable to this setup, and which are not.
I have some concern that extraction could do very well if you happen to find a related article in another website which contains encyclopedia-like or definition-like entries (e.g., Baidu, Wiktionary) which is not caught by clone detection. In this case, the problem could become less interesting, as no real analysis is required to do well here.
Overall, I quite like this line of work, but I think the paper would be a lot stronger and more convincing with some additional work.
----
After reading the authors' response and the updated submission, I am satisfied that my concerns above have been adequately addressed in the new version of the paper. This is a very nice contribution. |
iclr_2018_ByQZjx-0- | We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS . Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS. | Summary:
The paper presents a method for learning certain aspects of a neural network architecture, specifically the number of output maps in certain connections and the existence of skip connections. The method is relatively efficient, since it searches in a space of similar architectures, and uses weights sharing between the tested models to avoid optimization of each model from scratch. Results are presented for image classification on Cifar 10 and for language modeling.
Page 3: “for each channel, we only predict C/S binary masks” -> this seems to be a mistake. Probably “for each operation, we only predict C/S binary masks” is the right wording
Page 4: Stabilizing Stochastic Skip Connections: it seems that the suggested configuration does not enable an identity path, which was found very beneficial in (He. et al., 2016). Identity path does not exist since layers are concatenated and go through 1*1 conv, which does not enable plain identity unless learned by the 1*1 conv.
Page 5:
- The last paragraph in section 4.2 is not clear to me. What does a compilation failure mean in this context and why does it occur? And: if each layer is connected to all its previous layers by skip connections, what remains to be learned w.r.t the model structure? Isn’t the pattern of skip connection the thing we would like to learn?
- Some details of the policy LSTM network are also not clear to me:
o How is the integer mask (output of the B channel steps) encoded? Using 1-hot encoding over 2^{C/S} output neurons? Or maybe C/S output neurons, used for sampling the C/S bits of the mask? this should be reported in some detail.
o How is the mask converted to an input embedding for the next step? Is it by linear multiplication with a matrix? Something more complicated? And are there different matrices used/trained for each mask embedding (one for 1*1 conv, one for 3*3 conv, etc..)?
o What is the motivation for using equation 5 for the sampling of skip connection flags? What is the motivation for averaging the winning anchors as the average embedding for the next stage (to let it ‘know’ who is connected to the previous?). Is anchor j also added to the average?
o How is equation 5 normalized? That is: the probability is stated to be proportional to an exponent of an inner product, but it is not clear what the constant is and how sampling is done.
Page 6:
- Section 4.4: what is the fixed policy used for generating models in the stage of training the shared W parameters? (this is answered at page 7
Experiments:
- The accuracy figures obtained are impressive, but I’m not convinced the ENAS learning is the important ingredient in obtaining them (rather than a very good baseline)
- Specifically, in the Cifar -10 example it does not seem that the networks chooses the number of maps in a way which is diverse or different from layer to layer. Therefore we do not have any evidence that the LSTM controller has learnt any interesting rule regarding block type, or relation between block type width and layer index. All we see is that the model does not chose too many maps, thus avoid significant overfit. The relevant baseline here is a model with 64 or 96 maps on each block, each layer.Such a model is likely to do as well as the ENAS model, and can be obtained easily with slight parameter tuning of a single parameter.
- Similarly, I’m not convinced the skip connection pattern found for Cifar-10 is superior to standard denseNet or Resnet pattern. The found configuration was not compared to these baselines. So again we do not have evidence showing the merit of keeping and tuning many parameters with the RINFORCE
- The experiments with Penn Treebank are described with too less details: for example, what exactly is the task considered (in terms on input-output mapping), what is the dataset size, etc..
- Also, for the Penn treebank experiments no baseline is given, so it is not possible to understand if the structure learning here is beneficial. Comparison of the results to an architecture with all skip connections, and with a single skip connection per layer is required to estimate if useful structure is being learnt.
Overall:
- Pro: the method gives high accuracy results
- Cons:
o It is not clear if the ENAS search is responsible to the results, or just the strong baseline. The advantage of ENAS over plain hyper parameter choosing was not sufficiently established.
o The controller was not presented in a clear enough manner. Many of its details stay obscure.
o The method does not seem to be general. It seems to be limited to choosing a specific set of parameters in very specific scenario (scenario which enable parameter sharing between model. The conditions for this to happen seems to be rather strict, and where not elaborated).
After revision:
The controller is now better presented.
However, the main points were not changed:
- ENAS seems to be limited to a specific architecture and search space, in which probably the search is already exhausted. For example for the image processing network, it is determining the number of skip connections and structure of a single layer as a combination of several function types. We already know the answers to these search problems (denser skip connection pattern works better, more functions types in a layer in parallel do better, the number of maps should be adjusted to the complexity and data size to avoid overfit). ENAS does not reveal a new surprising architectures, and it seems that instead of searching in the large space it suggests, one can just tune a 1-2 parameters (for the image network, it is the number of maps in a layer).
- Results comparing ENAS results to the simple baseline of just tuning 1-2 hyper parameters were not shown. I hence believe the strong empirical results of ENAS are a property of the search space (the architecture used) and not of the search algorithm. |
iclr_2018_HyyP33gAZ | Published as a conference paper at ICLR 2018 ACTIVATION MAXIMIZATION GENERATIVE ADVER- SARIAL NETS
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it can reflect the true sample quality. We thus propose a new metric, called AM Score, to provide more accurate estimation on the sample quality. Our proposed model also outperforms the baseline methods in the new metric. | This paper is a thorough investigation of various “class aware” GAN architectures. It purposes a variety of modifications on existing approaches and additionally provides extensive analysis of the commonly used Inception Score evaluation metric.
The paper starts by introducing and analyzing two previous class aware GANs - a variant of the Improved GAN architecture used for semi-supervised results (named Label GAN in this work) and AC-GAN, which augments the standard discriminator with an auxiliary classifier to classify both real and generated samples as specific classes.
The paper then discusses the differences between these two approaches and analyzes the loss functions and their corresponding gradients. Label GAN’s loss encourages the generator to assign all probability mass cumulatively across the k-different label classes while the discriminator tries to assign all probability mass to the k+1th output corresponding to a “generated” class. The paper views the generators loss as a form of implicit class target loss.
This analysis motivates the paper’s proposed extension, called Activation Maximization. It corresponds to a variant of Label GAN where the generator is encouraged to maximize the probability of a specific class for every sample instead of just the cumulative probability assigned to label classes. The proposed approach performs strongly according to inception score on CIFAR-10 and includes additional experiments on Tiny Imagenet to further increase confidence in the results.
A discussion throughout the paper involves dealing with the issue of mode collapse - a problem plaguing standard GAN variants. In particular the paper discusses how variants of class conditioning effect this problem. The paper presents a useful experimental finding - dynamic labeling, where targets are assigned based on whatever the discriminator thinks is the most likely label, helps prevent mode collapse compared to the predefined assignment approach used in AC-GAN / standard class conditioning.
I am unclear how exactly predefined vs dynamic labeling is applied in the case of the Label GAN results in Table 1. The definition of dynamic labeling is specific to the generator as I interpreted it. But Label GAN includes no class specific loss for the generator. I assume it refers to the form of generator - whether it is class conditional or not - even though it would have no explicit loss for the class conditional version. It would be nice if the authors could clarify the details of this setup.
The paper additionally performs a thorough investigation of the inception score and proposes a new metric the AM score. Through analysis of the behavior of the inception score has been lacking so this is an important contribution as well.
As a reader, I found this paper to be thorough, honest, and thoughtful. It is a strong contribution to the “class aware” GAN literature. |
iclr_2018_HyI6s40a- | Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples. | Summary:
The paper presents an unsupervised method for detecting adversarial examples of neural networks. The method includes two independent components: an ‘input defender’ which tried to inspect the input, and a ‘latent defender’ trying to inspect a hidden representation. Both are based on the claim that adversarial examples lie outside a certain sub-space occupied by the natural image examples, and modeling this sub-space hence enables their detection. The input defender is based on sparse coding, and the latent defender on modeling the latent activity as a mixture of Gaussians. Experiments are presented on MInst, Cifar10, and ImageNet.
- Introduction: The motivation for detecting adversarial examples is not stated clearly enough. How can such examples be used by a malicious agent to cause damage to a system? Sketching some such scenarios would help the reader understand why the issue is practically important. I was not convinced it is.
Page 4:
- Step 3 of the algorithm is not clear:
o How exactly does HDDA model the data (formally) and how does it estimate the parameters? In the current version, the paper does not explain the HDDA formalism and learning algorithm, which is a main building block in the proposed system (as it provides the density score used for adversarial examples detection). Hence the paper cannot be read as a standalone document. I went on to read the relevant HDDA paper, but it is also not clear which of the model variants presented there is used in this paper.
o What is the relation between the model learned at stage 2 (the centers c^i) and the model learnt by HDDA? Are they completely different models? Or are the C^I used when learning the HDDA model (and how)?
If these are separate models, how are they used in conjunction to give a final density score? If I understand correctly, only the HDDA model is used to get the final score, and the C^i are only used to make the \phy(x) representation more class-seperable. Is that right?
- Figure 4, b and c: it is not clear what the (x,y,z) measurements plotted in these 3D drawings are (what are the axis).
Page 5:
- Section 2: the risk analysis is done in a standard Bayesian way and leads to a ratio of PDFs in equation 5. However, this form is not appropriate for the case presented at this paper, since the method presented only models one of these PDFs (Specifically p(x | W1) - there is not generative model of p(x|W2)).
- The authors claim in the last sentence of the section that p(x|W2) is equivalent to 1-p(x|W1), but this is not true: these are two continuous densities, they do not sum to 1, and a model of p(x|W2) is not available (as far as I understand the method)
Page 6:
- How is equation 7) optimized?
- Which patchs are extracted from images, for training and at inference time? Are these patchs a dense coverage of the image? Sparsely sampled? Densely sampled with overlaps?
- Its not clear enough what exactly is the ‘PSNR’ value which is used for the adversarial example detection, and what exactly is ‘profile the PSNR of legitimate samples within each class’. A formal definition of PSNR and’profiling’ is missing (does profiling simply mean finding a threshold for filtering?)
Page 7:
- Figure 7 is not very informative. Given the ROC curves in figure 8 and table 1 it is redundant.
Page 8:
- The results in general indicate that the method is much better than chance, but it is not clear if it is practical, because the false alarm rates for high detection are quite high. For example on ImageNet, 14.2% of the innocent images are mistakenly rejected as malicious to get 90% detection rate. I do not think this working point is useful for a real application
- Given the high flares alarm rate, it is surprising that experiments with multiple checkpoints are not presented (specifically as this case of multiple checkpoints is discussed explicitly in previous sections of the paper). Experiments with multiple checkpoints are clear required to complete the picture regarding the empirical performance of this method
- The experiments show that essentially, the latent defenders are stronger than the input defender in most cases. However, an ablation study of the latent defender is missing: Specifcially, it is not clear a) how much does stage 2 (model refinement with clusters) contribute to the accuracy (how does the model do without it? And 3) how important is the HDDA and the specific variant used (which is not clear) important: is it important to model the Gaussians using a sub-space? Of which dimension?
Overall:
Pros:
- A nice idea with some novelty, based on a non-trivial observation
- The experimental results how the idea holds some promise
Cons
- The method is not presented clearly enough: the main component modeling the network activity is not explained (the HDDA module used)
- The results presented show that the method is probably not suitable for a practical application yet (high false alarm rate for good detection rate)
- Experimental results are partial: results are not presented for multiple defenders, no ablation experiments
After revision:
Some of my comments were addressed, and some were not.
Specifically, results were presented for multiple defenders and some ablation experiments were highlihgted
Things not addressed:
- The risk analysis is still not relevant. The authors removed a clearly flawed sentence, but the analysis still assumes that two densities (of 'good' and 'bad' examples) are modeled, while in the work presented only one of them is. Hence this analysis does not add anything to the paper- it states a general case which does not fit the current scenario and its relation to the work is not clear. It would have been better to omit it and use the space to describe HDDA and the specific variant used in this work, as this is the main tool doing the distinction.
I believe the paper should be accepted. |
iclr_2018_Sy-dQG-Rb | Published as a conference paper at ICLR 2018 NEURAL SPEED READING VIA SKIM-RNN
Inspired by the principles of speed reading, we introduce Skim-RNN, a recurrent neural network (RNN) that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. Skim-RNN gives computational advantage over an RNN that always updates the entire hidden state. Skim-RNN uses the same input and output interfaces as a standard RNN and can be easily used instead of RNNs in existing models. In our experiments, we show that Skim-RNN can achieve significantly reduced computational cost without losing accuracy compared to standard RNNs across five different natural language tasks. In addition, we demonstrate that the trade-off between accuracy and speed of Skim-RNN can be dynamically controlled during inference time in a stable manner. Our analysis also shows that Skim-RNN running on a single CPU offers lower latency compared to standard RNNs on GPUs. | Summary: The paper proposes a learnable skimming mechanism for RNN. The model decides whether to send the word to a larger heavy-weight RNN or a light-weight RNN. The heavy-weight and the light-weight RNN each controls a portion of the hidden state. The paper finds that with the proposed skimming method, they achieve a significant reduction in terms of FLOPS. Although it doesn’t contribute to much speedup on modern GPU hardware, there is a good speedup on CPU, and it is more power efficient.
Contribution:
- The paper proposes to use a small RNN to read unimportant text. Unlike (Yu et al., 2017), which skips the text, here the model decides between small and large RNN.
Pros:
- Models that dynamically decide the amount of computation make intuitive sense and are of general interests.
- The paper presents solid experimentation on various text classification and question answering datasets.
- The proposed method has shown reasonable reduction in FLOPS and CPU speedup with no significant accuracy degradation (increase in accuracy in some tasks).
- The paper is well written, and the presentation is good.
Cons:
- Each model component is not novel. The authors propose to use Gumbel softmax, but does compare other gradient estimators. It would be good to use REINFORCE to do a fair comparison with (Yu et al., 2017 ) to see the benefit of using small RNN.
- The authors report that training from scratch results in unstable skim rate, while Half pretrain seems to always work better than fully pretrained ones. This makes the success of training a bit adhoc, as one need to actively tune the number of pretraining steps.
- Although there is difference from (Yu et al., 2017), the contribution of this paper is still incremental.
Questions:
- Although it is out of the scope for this paper to achieve GPU level speedup, I am curious to know some numbers on GPU speedup.
- One recommended task would probably be text summarization, in which the attended text can contribute to the output of the summary.
Conclusion:
- Based on the comments above, I recommend Accept |
iclr_2018_HkPCrEZ0Z | Model-free deep reinforcement learning algorithms are able to successfully solve a wide range of continuous control tasks, but typically require many on-policy samples to achieve good performance. Model-based RL algorithms are sampleefficient on the other hand, while learning accurate global models of complex dynamic environments has turned out to be tricky in practice, which leads to the unsatisfactory performance of the learned policies. In this work, we combine the sample-efficiency of model-based algorithms and the accuracy of model-free algorithms. We leverage multi-step neural network based predictive models by embedding real trajectories into imaginary rollouts of the model, and use the imaginary cumulative rewards as control variates for model-free algorithms. In this way, we achieved the strengths of both sides and derived an estimator which is not only sample-efficient, but also unbiased and of very low variance. We present our evaluation on the MuJoCo and OpenAI Gym benchmarks. | This paper presents a model-based approach to variance reduction in policy gradient methods. The basic idea is to use a multi-step dynamics model as a "baseline" (more properly a control variate, as the terminology in the paper uses, but I think baselines are more familiar to the RL community) to reduce the variance of a policy gradient estimator, while remaining unbiased. The authors also discuss how to best learn the type of multi-step dynamics that are well-suited to this problem (essentially, using off-policy data via importance weighting), and they demonstrate the effectiveness of the approach on four continuous control tasks.
This paper presents a nice idea, and I'm sure that with some polish it will become a very nice conference submission. But right now (at least as of the version I'm reviewing), the paper reads as being half-finished. Several terms are introduced without being properly defined, and one of the key formalisms presented in the paper (the idea of "embedding" an "imaginary trajectory" remains completely opaque to me. Further, the paper seems to simply leave out some portions: the introduction claims that one of the contributions is "we show that techniques such as latent space trajectory embedding and dynamic unfolding can significantly boost the performance of the model based control variates," but I see literally no section that hints at anything like this (no mention of "dynamic unfolding" or "latent space trajectory embedding" ever occurs later in the paper).
In a bit more detail, the key idea of the paper, at least to the extent that I understood it, was that the authors are able to introduce a model-based variance-reduction baseline into the policy gradient term. But because (unlike traditional baselines) introducing it alone would affect the actual estimate, they actually just add and subtract this term, and separate out the two terms in the policy gradient: the new policy gradient like term will be much smaller, and the other term can be computed with less variance using model-based methods and the reparameterization trick. But beyond this, and despite fairly reasonable familiarity with the subject, I simply don't understand other elements that the paper is talking about.
The paper frequently refers to "embedding" "imaginary trajectories" into the dynamics model, and I still have no idea what this is actually referring to (the definition at the start of section 4 is completely opaque to me). I also don't really understand why something like this would be needed given the understanding above, but it's likely I'm just missing something here. But I also feel that in this case, it borders on being an issue with the paper itself, as I think this idea needs to be described much more clearly if it is central to the underlying paper.
Finally, although I do think the extent of the algorithm that I could follow is interesting, the second issue with the paper is that the results are fairly weak as they stand currently. The improvement over TRPO is quite minor in most of the evaluated domains (other than possibly in the swimmer task), even with substantial added complexity to the approach. And the experiments are described with very little detail or discussion about the experimental setup.
Nor are either of these issues simply due to space constraints: the paper is 2 pages under the soft ICLR limit, with no appendix. Not that there is anything wrong with short papers, but in this case both the clarity of presentation and details are lacking. My honest impression is simply that this is still work in progress and that the write up was done rather hastily. I think it will eventually become a good paper, but it is not ready yet. |
iclr_2018_ryZElGZ0Z | The ability of an agent to discover its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear: such as playing a game to win, or driving safely. Several studies have demonstrated that learning extramural sub-tasks and auxiliary predictions can improve (1) single human-specified task learning, (2) transfer of learning, (3) and the agent's learned representation of the world. In all these examples, the agent was instructed what to learn about. We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent's representation of the world. Specifically, our system maintains a large collection of predictions, continually pruning and replacing predictions. We highlight the importance of considering stability rather than convergence for such a system, and develop an adaptive, regularized algorithm towards that aim. We provide several experiments in computational micro-worlds demonstrating that this simple approach can be effective for discovering useful predictions autonomously. | I really enjoyed reading this paper and stopped a few time to write down new ideas it brought up. Well written and very clear, but somewhat lacking in the experimental or theoretical results.
The formulation of AdaGain is very reminiscent of the SGA algorithm in Kushner & Yin (2003), and more generally gradient descent optimization of the learning rate is not new. The authors argue for the focus on stability over convergence, which is an interesting focus, but still I found the lack of connection with related work in this section a strange.
How would a simple RNN work for the experimental problems? The first experiment demonstrates that the regularization is using fewer features than without, which one could argue does not need to be compared with other methods to be useful. Especially when combined with Figure 5, I am convinced the regularization is doing a good job of pruning the least important GVFs. However, the results in Figure 3 have no context for us to judge the results within. Is this effective or terrible? Fast or slow? It is really hard to judge from these results. We can say that more GVFs are better, and that the compositional GVFs add to the ability to lower RMSE. But I do not think this is enough to really judge the method beyond a preliminary "looks promising".
The compositional GVFs also left me wondering: What keeps a GVF from being pruned that is depended upon by a compositional GVF? This was not obvious to me.
Also, I think comparing GVFs and AdaGain-R with an RNN approach highlights the more general question. Is it generally true that GVFs setup like this can learn to represent any value function that an RNN could have? There's an obvious benefit to this approach which is that you do not need BPTT, fantastic, but why not highlight this? The network being used is essentially a recurrent neural net, the authors restrict it and train it, not with backprop, but with TD, which is very interesting. But, I think there is not quite enough here.
Pros:
Well written, very interesting approach and ideas
Conceptually simple, should be easy to reproduce results
Cons:
AdaGain never gets analyzed or evaluated except for the evaluations of AdaGain-R.
No experimental context, we need a non-trivial baseline to compare with |
iclr_2018_BkrSv0lA- | LOSS-AWARE WEIGHT QUANTIZATION OF DEEP NET- WORKS
The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network. | This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights).
The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose. The experiments are very clearly presented and solidly designed.
In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods.
Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm. However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ($b_j^t \in Q^{n_l}$). This may not be relevant for the practical results, but to be accurate, it can't be simply stated that the algorithm converges, without a more careful analysis. |
iclr_2018_B1Gi6LeRZ | Published as a conference paper at ICLR 2018 LEARNING FROM BETWEEN-CLASS EXAMPLES FOR DEEP SOUND RECOGNITION
Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher's criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance surpasses the human level 1 . | This manuscript proposes a method to improve the performance of a generic learning method by generating "in between class" (BC) training samples. The manuscript motivates the necessity of such technique and presents the basic intuition. The authors show how the so-called BC learning helps training different deep architectures for the sound recognition task.
My first remark regards the presentation of the technique. The authors argue that it is not a data augmentation technique, but rather a learning method. I strongly disagree with this statement, not only because the technique deals exactly with augmenting data, but also because it can be used in combination to any learning method (including non-deep learning methodologies). Naturally, the literature review deals with data augmentation technique, which supports my point of view.
In this regard, I would have expected comparison with other state-of-the-art data augmentation techniques. The usefulness of the BC technique is proven to a certain extent (see paragraph below) but there is not comparison with state-of-the-art. In other words, the authors do not compare the proposed method with other methods doing data augmentation. This is crucial to understand the advantages of the BC technique.
There is a more fundamental question for which I was not able to find an explicit answer in the manuscript. Intuitively, the diagram shown in Figure 4 works well for 3 classes in dimension 2. If we add another class, no matter how do we define the borders, there will be one pair of classes for which the transition from one to another will pass through the region of a third class. The situation worsens with more classes. However, this can be solved by adding one dimension, 4 classes and 3 dimensions seems something feasible. One can easily understand that if there is one more class than the number of dimensions, the assumption should be feasible, but beyond it starts to get problematic. This discussion does not appear at all in the manuscript and it would be an important limitation of the method, specially when dealing with large-scale data sets.
Overall I believe the paper is not mature enough for publication.
Some minor comments:
- 2.1: We introduce --> We discussion
- Pieczak 2015a did not propose the extraction of MFCC.
- the x_i and t_i of section 3.2.2 should not be denoted with the same letters as in 3.2.1.
- The correspondence with a semantic feature space is too pretentious, specially since no experiment in this direction is shown.
- I understand that there is no mixing in the test phase, perhaps it would be useful to recall it. |
iclr_2018_BJypUGZ0Z | Workshop track -ICLR 2018 ACCELERATING NEURAL ARCHITECTURE SEARCH US- ING PERFORMANCE PREDICTION
Methods for neural network hyperparameter optimization and architecture search are computationally expensive due to the need to train a large number of model configurations. In this paper, we show that simple regression models can predict the final performance of partially trained model configurations using features based on network architectures, hyperparameters, and time-series validation performance data. We empirically show that our performance prediction models are much more accurate than prominent Bayesian counterparts, are simpler to implement, and are faster to train. Our models can predict final performance in both visual classification and language modeling domains, are effective for predicting performance of drastically varying model architectures, and can even generalize between model classes. Using these prediction models, we also implement an early stopping method for hyperparameter optimization and architecture search, which obtains a speedup of a factor up to 6x in both hyperparameter optimization and architecture search. Finally, we empirically show that our early stopping method can be seamlessly incorporated into both reinforcement learning-based architecture selection algorithms and bandit based search methods. Through extensive experimentation, we empirically show our performance prediction models and early stopping algorithm are state-of-the-art in terms of prediction accuracy and speedup achieved while still identifying the optimal model configurations. | This paper shows a simple method for predicting the performance that neural networks will achieve with a given architecture, hyperparameters, and based on an initial part of the learning curve.
The method assumes that it is possible to first execute 100 evaluations up to the total number of epochs.
From these 100 evaluations (with different hyperparameters / architectures), the final performance y_T is collected. Then, based on an arbitrary prefix of epochs y_{1:t}, a model can be learned to predict y_T.
There are T different models, one for each prefix y_{1:t} of length t. The type of model used is counterintuitive for me; why use a SVR model? Especially since uncertainty estimates are required, a Gaussian process would be the obvious choice.
The predictions in Section 3 appear to be very good, and it is nice to see the ablation study.
Section 4 fails to mention that its use of performance prediction for early stopping follows exactly that of Domhan et al (2015) and that this is not a contribution of this paper; this feels a bit disingenious and should be fixed.
The section should also emphasize that the models discussed in this paper are only applicable for early stopping in cases where the function evaluation budget N is much larger than 100.
The emphasis on the computational demand of 1-3 minutes for LCE seems like a red herring: MetaQNN trained 2700 networks in 100 GPU days, i.e., about 1 network per GPU hour. It trained 20 epochs for the studied case of CIFAR, so 1-3 minutes per epoch on the CPU can be implemented with zero overhead while the network is training on the GPU. Therefore, the following sentence seems sensational without substance: "Therefore, on a full meta-modeling experiment involving thousands of neural network configurations, our method could be faster by several orders of magnitude as compared to LCE based on current implementations."
The experiment on fast Hyperband is very nice at first glance, but the longer I think about it the more questions I have. During the rebuttal I would ask the authors to extend f-Hyperband all the way to the right in Figure 6 (left) and particularly in Figure 6 (right). Especially in Figure 6 (right), the original Hyperband algorithm ends up higher than f-Hyperband. The question this leaves open is whether f-Hyperband would reach the same performance when continued or not.
I would also request the paper not to casually mention the 7x speedup that can be found in the appendix, without quantifying this. This is only possible for a large number of 40 Hyperband iterations, and in the interesting cases of the first few iterations speedups are very small. Also, do the simulated speedup results in the appendix account for potentially stopping a new best configuration, or do they simply count how much computational time is saved, without looking at performance? The latter would of course be extremely misleading and should be fixed. I am looking forward to a clarification in the rebuttal period.
For relating properly to the literatue, the experiment for speeding up Hyperband should also mention previous methods for speeding up Hyperband by a model (I only know one by the authors' reference Klein et al (2017)).
Overall, this paper appears very interesting. The proposed technique has some limitations, but in some settings it seems very useful. I am looking forward to the reply to my questions above; my final score will depend on these.
Typos / Details:
- The range of the coefficient of determination is from 0 to 1. Table 1 probably reports 100 * R^2? Please fix the description.
- I did not see Table 1 referenced in the text.
- Page 3: "more computationally and" -> "more computationally efficient and"
- Page 3: "for performing final" -> "for predicting final"
Points in favor of the paper:
- Simple method
- Good prediction results
- Useful possible applications identified
Points against the paper:
- Methodological advances are limited / unmotivated choice of model
- Limited applicability to settings where >> 100 configurations can be run fully
- Possibly inflated results reported for Hyperband experiment |
iclr_2018_BJgPCveAW | We propose a novel way of reducing the number of parameters in the storagehungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training. Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density. We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers. Based on our sparsifying technique, we introduce the 'scatter' metric to characterize the quality of a particular connection pattern. As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns. | This paper examines sparse connection patterns in upper layers of convolutional image classification networks. Networks with very few connections in the upper layers are experimentally determined to perform almost as well as those with full connection masks. Heuristics for distributing connections among windows/groups and a measure called "scatter" are introduced to construct the connectivity masks, and evaluated experimentally on CIFAR-10 and -100, MNIST and Morse code symbols.
While it seems clear in general that many of the connections are not needed and can be made sparse (Figures 1 and 2), I found many parts of this paper fairly confusing, both in how it achieves its objectives, as well as much of the notation and method descriptions. I've described many of the points I was confused by in more detailed comments below.
Detailed comments and questions:
The distribution of connections in "windows" are first described to correspond to a sort of semi-random spatial downsampling, to get different views distributed over the full image. But in the upper layers, the spatial extent can be very small compared to the image size, sometimes even 1x1 depending on the network downsampling structure. So are do the "windows" correspond to spatial windows, and if so, how? Or are they different (maybe arbitrary) groupings over the feature maps?
Also a bit confusing is the notation "conv2", "conv3", etc. These names usually indicate the name of a single layer within the network (conv2 for the second convolutional layer or series of layers in the second spatial size after downsampling, for example). But here it seems just to indicate the number of "CL" layers: 2. And p.1 says that the "CL" layers are those often referred to as "FC" layers, not "conv" (though they may be convolutionally applied with spatial 1x1 kernels).
The heuristic for spacing connections in windows across the spatial extent of an image makes intuitive sense, but I'm not convinced this will work well in all situations, and may even be sub-optimal for the examined datasets. For example, to distinguish MNIST 1 vs 7 vs 9, it is most important to see the top-left: whether it is empty, has a horizontal line, or a loop. So some regions are more important than others, and the top half may be more important than an equally spaced global view. So the description of how to space connections between windows makes some intuitive sense, but I'm unclear on whether other more general connections might be even better, including some that might not be as easily analyzed with the "scatter" metric described.
Another broader question I have is in the distinction between lower and upper layers (those referred to as "feature extracting" and "classification" in this paper). It's not clear to me that there is a crisply defined difference here (though some layers may tend to do more of one or the other function, such as we might interpret). So it seems that expanding the investigation to include all layers, or at least more layers, would be good: It might be that more of the "classification" function is pushed down to lower layers, as the upper layers are reduced in size. How would they respond to similar reductions?
I'm also unsure why on p.6 MNIST uses 2d windows, while CIFAR uses 3d --- The paper mentions the extra dimension is for features, but MNIST would have a features dimension as well at this stage, I think? I'm also unsure whether the windows are over spatial extent only, or over features. |
iclr_2018_Hk6WhagRW | Published as a conference paper at ICLR 2018 EMERGENT COMMUNICATION THROUGH NEGOTIATION
Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols -one grounded in the semantics of the game, and one which is a priori ungrounded and is a form of cheap talk. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded channel. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation. | The authors describe a variant of the negotiation game in which agents of different type, selfish or prosocial, and with different preferences. The central feature is the consideration of a secondary communication (linguistic) channel for the purpose of cheap talk, i.e. talk whose semantics are not laid out a priori.
The essential findings include that prosociality is a prerequisite for effective communication (i.e. formation of meaningful communication on the linguistic channel), and furthermore, that the secondary channel helps improve the negotiation outcomes.
The paper is well-structured and incrementally introduces the added features and includes staged evaluations for the individual additions, starting with the differentiation of agent characteristics, explored with combination of linguistic and proposal channel. Finally, agent societies are represented by injecting individuals' ID into the input representation.
The positive:
- The authors attack the challenging task of given agents a means to develop communication patterns without apriori knowledge.
- The paper presents the problem in a well-structured manner and sufficient clarity to retrace the essential contribution (minor points for improvement).
- The quality of the text is very high and error-free.
- The background and results are well-contextualised with relevant related work.
The problematic:
- By the very nature of the employed learning mechanisms, the provided solution provides little insight into what the emerging communication is really about. In my view, the lack of interpretable semantics hardly warrants a reference to 'cheap talk'. As such the expectations set by the well-developed introduction and background sections are moderated over the course of the paper.
- The goal of providing agents with richer communicative ability without providing prior grounding is challenging, since agents need to learn about communication partners at runtime. But it appears as of the main contribution of the paper can be reduced to the decomposition of the learnable feature space into two communication channels. The implicit relationship of linguistic channel on proposal channel input based on the time information (Page 4, top) provides agents with extended inputs, thus enabling a more nuanced learning based on the relationship of proposal and linguistic channel. As such the well-defined semantics of the proposal channel effectively act as the grounding for the linguistic channel. This, then, could have been equally achieved by providing agents with a richer input structure mediated by a single channel. From this perspective, the solution offers limited surprises. The improvement of accuracy in the context of agent societies based on provided ID follows the same pattern of extending the input features.
- One of the motivating factors of using cheap talk is the exploitation of lying on the part of the agents. However, apart from this initial statement, this feature is not explicitly picked up. In combination with the previous point, the necessity/value of the additional communication channel is unclear.
Concrete suggestions for improvement:
- Providing exemplified communication traces would help the reader appreciate the complexity of the problem addressed by the paper.
- Figure 3 is really hard to read/interpret. The same applies to Figure 4 (although less critical in this case).
- Input parameters could have been made explicit in order to facilitate a more comprehensive understanding of technicalities (e.g. in appendix).
- Emergent communication is effectively unidirectional, with one agent as listener. Have you observed other outcomes in your evaluation?
In summary, the paper presents an interesting approach to combine unsupervised learning with multiple communication channels to improve learning of preferences in a well-established negotiation game. The problem is addressed systematically and well-presented, but can leave the reader with the impression that the secondary channel, apart from decomposing the model, does not provide conceptual benefit over introducing a richer feature space that can be exploited by the learning mechanisms. Combined with the lack of specific cheap talk features, the use of actual cheap talk is rather abstract. Those aspects warrant justification. |
iclr_2018_B1X0mzZCW | Published as a conference paper at ICLR 2018 FIDELITY-WEIGHTED LEARNING
Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental qualityversus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised studentteacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations. | The problem of interest is to train deep neural network models with few labelled training samples. The specific assumption is there is a large pool of unlabelled data, and a heuristic function that can provide label annotations, possibly with varying levels of noises, to those unlabelled data. The adopted learning model is of a student/teacher framework as in privileged learning/knowledge distillation/model compression, and also machine teaching. The student (deep neural network) model will learn from both labelled and unlabelled training data with the labels provided by the teacher (Gaussian process) model. The teacher also supplies an uncertainty estimate to each predicted label. How about the heuristic function? This is used for learning initial feature representation of the student model. Crucially, the teacher model will also rely on these learned features. Labelled data and unlabelled data are therefore lie in the same dimensional space.
Specific questions to be addressed:
1) Clustering of strongly-labelled data points. Thinking about the statement “each an expert on this specific region of data space”, if this is the case, I am expecting a clustering for both strongly-labelled data points and weakly-labelled data points. Each teacher model is trained on a portion of strongly-labelled data, and will only predict similar weakly-labelled data. On a related remark, the nice side-effect is not right as it was emphasized that data points with a high-quality label will be limited. As well, GP models, are quite scalable nowadays (experiments with millions to billions of data points are available in recent NIPS/ICML papers, though, they are all rely on low dimensionality of the feature space for optimizing the inducing point locations). It will be informative to provide results with a single GP model.
2) From modifying learning rates to weighting samples. Rather than using uncertainty in label annotation as a multiplicative factor in the learning rate, it is more “intuitive” to use it to modify the sampling procedure of mini-batches (akin to baseline #4); sample with higher probability data points with higher certainty. Here, experimental comparison with, for example, an SVM model that takes into account instance weighting will be informative, and a student model trained with logits (as in knowledge distillation/model compression). |
iclr_2018_S1Y7OOlRZ | Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs. For such models, we cannot afford to train candidate models sequentially and wait months before finding a suitable hyperparameter configuration. Hence, we introduce the large-scale regime for parallel hyperparameter tuning, where we need to evaluate orders of magnitude more configurations than available parallel workers in a small multiple of the wall-clock time needed to train a single model. We propose a novel hyperparameter tuning algorithm for this setting that exploits both parallelism and aggressive earlystopping techniques, building on the insights of the Hyperband algorithm (Li et al., 2016). Finally, we conduct a thorough empirical study of our algorithm on several benchmarks, including large-scale experiments with up to 500 workers. Our results show that our proposed algorithm finds good hyperparameter settings nearly an order of magnitude faster than random search. | This paper introduces a simple extension to parallelize Hyperband.
Points in favor of the paper:
* Addresses an important problem
Points against:
* Only 5-fold speedup by parallelization with 5 x 25 workers, and worse performance in the same budget than Google Vizier (even though that treats the problem as a black box)
* Limited methodological contribution/novelty
The paper's methodological contribution is quite limited: it amounts to a straight-forward parallelization of successive halving (SHA). Specifically, whenever a worker frees up, do a new run on it, at the highest rung possible while making sure to not run too many runs for too high rungs. (I am pretty sure that is the idea, even though Algorithm 1, which is supposed to give the details, appears to have a bug in Procedure get_job -- it would always either pick the highest rung or the lowest!)
Empirically, the paper strangely does not actually evaluate a parallel version of Hyperband, but only evaluates the 5 parallel variants of SHA that Hyperband would run, each of them with all workers. The experiments in Section 4.2 show that, using 25 workers, the best of these 5 variants obtains a 5-fold speedup over sequential Hyperband on CIFAR and an 8-fold speedup on SVHN. I am confused: the *best* of 5 SHA variants only achieves a 5-fold speedup using 25 workers? I.e., parallel Hyperband, which would run the 5 SHA variants in parallel, would require 125 workers but only yield a 5-fold speedup? If I understand this correctly, I would clearly call this a negative result.
Likewise, for the large-scale experiment, a single run of Vizier actually yields as good performance as the best of the 5 SHA variants, and it is unknown beforehand which SHA variant works best -- in this example, actually Bracket 0 (which is often the best) stagnates. Parallel Hyperband would run the 5 SHA variants in parallel, so its performance at a budget of 10R with a total of 500 workers can be evaluated by taking the minimum of the 5 SHA variants at a budget of 2R. This would obtain a perplexity of above 90, which is quite a bit worse than Vizier's result of about 82. In general, the performance of parallel Hyperband can be computed by taking the minimum of the SHA variants and multiplying the time taken by 5; this shows that at any time in the plot (Figure 3, left) Vizier dominates parallel Hyperband. Again, this is apparently a negative result. (For Figure 3, right, no results for Vizier are given yet.)
If I understand correctly, the experiment in Section 4.4 does not involve any run of Hyperband, but merely plots predictions of Qi et al.'s Paelo framework of how many models could be evaluated with a growing number of GPUs.
Therefore, all empirical results for parallel Hyperband reported in the paper appear to be negative. This confuses me, especially since the authors seem to take them as positive results.
Because the original Hyperband paper argued that Bayesian optimization does not parallelize as well as random search / Hyperband, and because Hyperband has been reported to work much better than Bayesian optimization on a single node, I would have expected clear improvements of parallel Hyperband over parallel Bayesian optimization (=Vizier in the authors' setup). However, this is not what I see in the results. Am I mistaken somewhere? If not, based on these negative results the paper does not seem to quite clear the bar for ICLR.
Details, in order of appearance in the paper:
- Vizier: why did the authors only use Vizier's default Bayesian optimization algorithm? The Vizier paper by Golovin et al (2017) states that for large budgets other optimizers often perform better, and the budget in the large scale experiments is as high as 5000 function evaluations. Also, isn't there an automatic choice built into Vizier to pick the optimizer expected to be best? I think using a suboptimal version of Vizier would be a problem for the experimental setup.
- Algorithm 1: this needs some improvement; in particular fixing the bug I mentioned above.
- Section 3.1: Li et al (2017) do not analyze any algorithm theoretically. They also do not discuss finite vs. infinite horizon. I believe the authors meant Li et al's arXiv paper (2016) in both of these cases.
- Section 3.1, point 2: this is unclear to me, even though I know Hyperband very well. Can you please make this clearer?
- "A complete theoretical treatment of asynchronous SHA is out of the scope of this paper" -> is some theoretical treatment in scope?
- Section 4.1: It seems very useful to already recommend configurations in each rung of Hyperband, and I am surprised that the methods section does not mention this. From the text in this experiments section, it feels a little like that was always part of Hyperband; I didn't think it was, so I checked the original papers and blog posts, and both the ICLR 2017 and the arXiv 2016 paper state "In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R." and Kevin Jamieson's blog post on Hyperband (https://people.eecs.berkeley.edu/~kjamieson/hyperband.html) explicitly states: "While random and the Bayesian Optimization algorithms output their first recommendation after max_iter iterations, Hyperband does not output anything until about max_iter(logeta(max_iter)+1) iterations [...]"
Therefore, recommending after each rung seems to be a contribution of this paper, and I think it would be nice to read about this in the methods section.
- Experiment 1 (SVM) used dataset size as a budget, which is what Fabolas ("Fast Bayesian optimization on large datasets") is designed for according to Klein et al (2017). On the other hand, Experiments (2) and (3) used the number of epochs as a budget, and Fabolas is not designed for that (one would want to use a different kernel, for epochs, e.g., like Freeze-Thaw Bayesian optimization (FTBO) by Swersky et al (2014), instead of a kernel made for dataset sizes). Therefore, it is not surprising that Fabolas does not work as well in those cases. The case of number of epochs as a budget would be the domain of FTBO. I know that there is no reference implementation of FTBO, so I am not asking for a comparison, but the comparison against Fabolas is misleading for Experiments (2) and (3). This doesn't really change anything for the paper: the authors could still make the case that Fabolas hasn't been designed for this case and that (to the best of my knowledge) there simply isn't an implementation of a BO algorithm that is. Fabolas is arguably the closest thing, so the results could still be reported, just not as an apples-to-apples comparison; probably best as "Fabolas-like, with dataset size kernel" in the figure. The justification to not compare against Fabolas in the parallel regime is clearly valid.
- A clarification question: Section 4.4 does not report on any runs of actual neural networks, does it? And not on any runs of Hyperband, correct? Do I understand the reasoning correctly as pointing out that standard parallelization across multiple GPUs is not great, and that thus, in combination with parallel Hyperband, runs should be done mostly on one GPU only? How does this relate to the results in the cited paper "Accurate, Large-batch SGD: Training ImageNet in 1 Hour" (https://arxiv.org/abs/1706.02677)? Quoting from its abstract: "Using commodity hardware, our implementation achieves ∼ 90% scaling efficiency when moving from 8 to 256 GPUs." That seems like a very good utilization of parallel computing power?
- There is no conclusion / future work.
----------
Edit after author rebuttal:
I thank the reviewers for their rebuttal. This cleared up some points, but some others are still open.
(1) and (2) Unfortunately, I still do not agree that the need for 5*25 workers to get a 5-fold to 8-fold speedup is a positive result. Similarly, I would interpret the results in Figure 3 differently than the authors. For the comparison against Vizier the authors argue that they could just take the lowest 2 brackets of Hyperband; but running both of these two would still be 2x slower than Vizier. And we can't only run the best bracket because the information which one is the best is not available ahead of time. In fact, it is the entire point of Hyperband to hedge across multiple brackets including the one that is random search; one *could* just use the smallest bracket, but that is a heuristic and has no theoretical guarantees of being better (or at least not worse by more than a bounded factor) than random search.
Orthogonally: the comparison to Vizier (or any other baseline) is still missing for the LSTM acoustic model.
(3) Concerning SOTA results, I have to agree with AnonReviewer3: one way to demonstrate success is to show competitive performance on a dataset (e.g., CIFAR) on which other researchers can also evaluate their algorithms on. Getting 17% on CIFAR-10 does not fall into that category. Nevertheless, I agree with the authors that another way to demonstrate success is to show competitive performance on a *combination* of a dataset and a design space, but for that to be something that other researchers can compare to requires the authors making publicly available the implementations they have optimized; without that public availability, due to a host of possible confounding factors, it is impossible to judge whether state-of-the-art performance on such a combination of dataset and design space has been achieved. I therefore recommend that the authors make the entire code they used for training CIFAR available; I don't expect this to have anything new in there, but it's a useful benchmark.
Likewise, for the LSTM on PTB, DeepMind used Google Vizier (https://arxiv.org/abs/1707.05589) to achieve *perplexities below 60* (compared to the results above 80 reported by the authors). Just as above, I therefore recommend that the authors make their pipeline for LSTB on PTB available. Likewise for the LSTM acoustic model.
(4) I'm confused that Section 4.4 does relate to SHA/Hyperband. Of course, there are some diminishing returns of running an optimizer across multiple GPUs. But similarly, there are diminishing returns of parallelizing SHA (e.g., the 5-fold speedup on 125 workers above). So the natural question that would be nice to answer is which combination of the two will yield the best results. Relatedly, the paper by Goyal et al seems to show that the weak scaling regime leads to almost linear speedups; why do the authors then analyze the strong scaling regime that does not appear to work as well?
Overall, the rebuttal did not change my evaluation and I kept my original score. |
iclr_2018_rJwelMbR- | DIVIDE-AND-CONQUER REINFORCEMENT LEARNING
Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/. | This paper presents a method for learning a global policy over multiple different MDPs (referred to as different "contexts", each MDP having the same dynamics and reward, but different initial state). The basic idea is to learn a separate policy for each context, but regularized in a manner that keeps all of them relatively close to each other, and then learn a single centralized policy that merges the multiple policies via supervised learning. The method is evaluated on several continuous state and action control tasks, and shows improvement over existing and similar approaches, notably the Distral algorithm.
I believe there are some interesting ideas presented in this paper, but in its current form I think that the delta over past work (particularly Distral) is ultimately too small to warrant publication at ICLR. The authors should correct me if I'm wrong, but it seems as though the algorithm presented here is virtually identical to Distral except that:
1) The KL divergence term regularizes all policies together in a pairwise manner.
2) The distillation step happens episodically every R steps rather than in a pure SGD manner.
3) The authors possibly use a TRPO type objective for the standard policy gradient term, rather than REINFORCE-like approach as in Distral (this one point wasn't completely clear, as the authors mention that a "centralized DnC" is equivalent to Distral, so they may already be adapting it to the TRPO objective? some clarity on this point would be helpful).
Thus, despite better performance of the method over Distral, this doesn't necessarily seem like a substantially new algorithmic development. And given how sensitive RL tasks are to hyperparameter selection, there needs to be some very substantial treatment of how the regularization parameters are chosen here (both for DnC and for the Distral and centralized DnC variants). Otherwise, it honestly seems that the differences between the competing methods could be artifacts of the choice of regularization (the alpha parameter will affect just how tightly coupled the control policies actually are).
In addition to this point, the formulation of the problem setting in many cases was also somewhat unclear. In particular, the notion of the contextual MDP is not very clear from the presentation. The authors define a contextual MDP setting where in addition to the initial state there is an observed context to the MDP that can affect the initial state distribution (but not the transitions or reward). It's entirely unclear to me why this additional formulation is needed, and ultimately just seems to confuse the nature of the tasks here which is much more clearly presented just as transfer learning between identical MDPs with different state distributions; and the terminology also conflicts with the (much more complex) setting of contextual decision processes (see: https://arxiv.org/abs/1610.09512). It doesn't seem, for instance, that the final policy is context dependent (rather, it has to "infer" the context from whatever the initial state is, so effectively doesn't take the context into account at all). Part of the reasoning seems to be to make the work seem more distinct from Distral than it really is, but I don't see why "transfer learning" and the presented contextual MDP are really all that different.
Finally, the experimental results need to be described in substantially more detail. The choice of regularization parameters, the precise nature of the context in each setting, and the precise design of the experiments is all extremely opaque in the current presentation. Since the methodology here is so similar to previous approaches, much more emphasis is required to better understand the (improved) empirical results in this eating.
In summary, while I do think the core ideas of this paper are interesting: whether it's better to regularize policies to a single central policy as in Distral or whether it's better to use joint regularization, whether we need two different timescales for distillation versus policy training, and what policy optimization method works best, as it is right now the algorithmic choices in the paper seem rather ad-hoc compared to Distral, and need substantially more empirical evidence.
Minor comments:
• There are several missing words/grammatical errors throughout the manuscript, e.g. on page 2 "gradient information can better estimated". |
iclr_2018_rkcya1ZAW | Two fundamental problems in unsupervised learning are efficient inference for latent-variable models and robust density estimation based on large amounts of unlabeled data. For efficient inference, normalizing flows have been recently developed to approximate a target distribution arbitrarily well. In practice, however, normalizing flows only consist of a finite number of deterministic transformations, and thus they possess no guarantee on the approximation accuracy. For density estimation, the generative adversarial network (GAN) has been advanced as an appealing model, due to its often excellent performance in generating samples. In this paper, we propose the concept of continuous-time flows (CTFs), a family of diffusion-based methods that are able to asymptotically approach a target distribution. Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees. Our framework includes distilling knowledge from a CTF for efficient inference, and learning an explicit energy-based distribution with CTFs for density estimation.
Experiments on various tasks demonstrate promising performance of the proposed CTF framework, compared to related techniques. | The authors propose the use of first order Langevin dynamics as a way to transition from one latent variable to the next in the VAE setting, as opposed to the deterministic transitions of normalizing flow. The extremely popular Fokker-Planck equation is used to analyze the steady state distributions in this setting. The authors also propose the use of CTF in density estimation, as a generator of samples from the ''true'' distribution, and show competitive performance w.r.t. inception score for some common datasets.
The use of Langevin diffusion for latent transitions is a good idea in my opinion; though quite simple, it has the benefit of being straightforward to analyze with existing machinery. Though the discretized Langevin transitions in \S 3.1 are known and widely used, I liked the motivation afforded by Lemma 2.
I am not convinced that taking \rho to be the sample distribution with equal probabilities at the z samples is a good choice in \S 3.1; it would be better to incorporate the proximity of the langevin chain to a stationary point in the atom weights instead of setting them to 1/K. However to their credit the authors do provide an estimate of the error in the distribution stemming from their choice.
To the best of my knowledge the use of CTF in density estimation as described in \S 4 is new, and should be of interest to the community; though again it is fairly straightforward. Regarding the experiments, the difference in ELBO between the macVAE and the vanilla ones with normalizing flows is only about 2%; I wish the authors included a discussion on how the parameters of the discretized Langevin chain affects this, if at all.
Overall I think the theory is properly described and has a couple of interesting formulations, in spite of being not particularly novel. I think CTFs like the one described here will see increased usage in the VAE setting, and thus the paper will be of interest to the community. |
iclr_2018_S1CChZ-CZ | Published as a conference paper at ICLR 2018 ASK THE RIGHT QUESTIONS: ACTIVE QUESTION REFORMULATION WITH REINFORCEMENT LEARNING
We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a stateof-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming. | This paper formulates the Jeopardy QA as a query reformulation task that leverages a search engine. In particular, a user will try a sequence of alternative queries based on the original question in order to find the answer. The RL formulation essentially tries to mimic this process. Although this is an interesting formulation, as promoted by some recent work, this paper does not provide compelling reasons why it's a good formulation. The lack of serious comparisons to baseline methods makes it hard to judge the value of this work.
Detailed comments/questions:
1. I am actually quite confused on why it's a good RL setting. For a human user, having a series of queries to search for the right answer is a natural process, but it's not natural for a computer program. For instance, each query can be viewed as different formulation of the same question and can be issued concurrently. Although formulated as an RL problem, it is not clear to me whether the search result after each episode has been used as the immediate environment feedback. As a result, the dependency between actions seems rather weak.
2. I also feel that the comparisons to other baselines (not just the variation of the proposed system) are not entirely fair. For instance, the baseline BiDAF model has only one shot, namely using the original question as query. In this case, AQA should be allowed to use the same budget -- only one query. Another more realistic baseline is to follow the existing work on query formulation in the IR community. For example, 20 shorter queries generated by methods like [1] can be used to compare the queries created by AQA.
[1] Kumaran & Carvalho. "Reducing Long Queries Using Query Quality Predictors". SIGIR-09
Pros:
1. An interesting RL formulation for query reformulation
Cons:
1. The use of RL is not properly justified
2. The empirical result is not convincing that the proposed method is indeed advantageous
---------------------------------------
After reading the author response and checking the revised paper, I'm both delighted and surprised that the authors improved the submission substantially and presented stronger results. I believe the updated version has reached the bar and recommend accepting this paper. |
iclr_2018_Hk6kPgZA- | Published as a conference paper at ICLR 2018 CERTIFYING SOME DISTRIBUTIONAL ROBUSTNESS WITH PRINCIPLED ADVERSARIAL TRAINING
Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches. | This paper proposes a principled methodology to induce distributional robustness in trained neural nets with the purpose of mitigating the impact of adversarial examples. The idea is to train the model to perform well not only with respect to the unknown population distribution, but to perform well on the worst-case distribution in some ball around the population distribution. In particular, the authors adopt the Wasserstein distance to define the ambiguity sets. This allows them to use strong duality results from the literature on distributionally robust optimization and express the empirical minimax problem as a regularized ERM with a different cost. The theoretical results in the paper are supported by experiments.
Overall, this is a very well-written paper that creatively combines a number of interesting ideas to address an important problem. |
iclr_2018_HJ39YKiTb | In this paper, we propose the Associative Conversation Model that generates visual information from textual information and uses it for generating sentences in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms using images for the dialogue systems since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using context vector fusing associative visual information and sentence textual information. A comparative experiment between our proposed model and a model without association showed that our proposed model is generating useful sentences by associating visual information related to sentences. Furthermore, analysis experiment of visual association showed that our proposed model generates (associates) visual information effective for sentence generation. | The paper proposes to augment (traditional) text-based sentence generation/dialogue approaches by incorporating visual information. The idea is that associating visual information with input text, and using that associated visual information as additional input will produce better output text than using only the original input text.
The basic idea is to collect a bunch of data consisting of both text and associated images or video. Here, this was done using Japanese news programs. The text+image/video is used to train a model that requires both as input and that encodes both as context vectors, which are then combined and decoded into output text. Next, the image inputs are eliminated, with the encoded image context vector being instead associatively predicted directly from the encoded text context vector (why not also use the input text to help predict the visual context?), which is still obtained from the text input, as before. The result is a model that can make use of the text-visual associations without needing visual stimuli. This is a nice idea.
Actually, based on the brief discussion in Section 2.2.2, it occurs to me that the model might not really be learning visual context vectors associatively, or, that this doesn't really have meaning in some sense. Does it make sense to say that what it is really doing is just learning to associate other concepts/words with the input text, and that it is using the augmenting visual information in the training data to provide those associations? Is this worth talking about?
Unfortunately, while the idea has merit, and I'd like to see it pursued, the paper suffers from a fatal lack of validation/evaluation, which is very curious, given the amount of data that was collected, the fact that the authors have both a training and a test set, and that there are several natural ways such an evaluation might be performed. The two examples of Fig 3 and the additional four examples in the appendix are nice for demonstrating some specific successes or weaknesses of the model, but they are in no way sufficient for evaluation of the system, to demonstrate its accuracy or value in general.
Perhaps the most obvious thing that should be done is to report the model's accuracy for reproducing the news dialogue, that is, how accurately is the next sentence predicted by the baseline and ACM models over the training instances and over the test data? How does this compare with other state-of-the-art models for dialogue generation trained on this data (perhaps trained only on the textual part of the data in some cases)?
Second, some measure of accuracy for recall of the associative image context vector should be reported; for example, on average, how close (cosine similarity or some other appropriate measure) is the associatively recalled image context vector to the target image context vector? On average? Best case? Worst case? How often is this associative vector closer to a confounding image vector than an appropriate one?
A third natural kind of validation would be some form of study employing human subjects to test it's quality as a generator of dialogue.
One thing to note, the example of learning to associate the snowy image with the text about university entrance exams demonstrates that the model is memorizing rather than generalizing. In general, this is a false association (that is, in general, there is no reason that snow should be associated with exams on the 14th and 15th—the month is not mentioned, which might justify such an association.)
Another thought: did you try not retraining the decoder and attention mechanisms for step 3? In theory, if step 2 is successful, the retraining should not be necessary. To the extent that it is necessary, step 2 has failed to accurately predict visual context from text. This seems like an interesting avenue to explore (and is obviously related to the second type of validation suggested above). Also, in addition to the baseline model, it seems like it would be good to compare a model that uses actual visual input and the model of step 1 against the model of step 3 (possibly bot retrained and not retrained) to see the effect on the outputs generated—how well do each of these do at predicting the next sentence on both training and test sets?
Other concerns:
1. The paper is too long by almost a page in main content.
2. The paper exhibits significant English grammar and usage issues and should be carefully proofed by a native speaker.
3. There are lots of undefined variables in the Eqs. (s, W_s, W_c, b_s, e_t,i, etc.) Given the context and associated discussion, it is almost possible to sort out what all of them mean, but brief careful definitions should be given for clarity.
4. Using news broadcasts as a substitute for true dialogue data seems kind of problematic, though I see why it was done. |
iclr_2018_HJPSN3gRW | In this work, we focus on the problem of grounding language by training an agent to follow a set of natural language instructions and navigate to a target object in a 2D grid environment. The agent receives visual information through raw pixels and a natural language instruction telling what task needs to be achieved. Other than these two sources of information, our model does not have any prior information of both the visual and textual modalities and is end-to-end trainable. We develop an attention mechanism for multi-modal fusion of visual and textual modalities that allows the agent to learn to complete the navigation tasks and also achieve language grounding. Our experimental results show that our attention mechanism outperforms the existing multi-modal fusion mechanisms proposed in order to solve the above mentioned navigation task. We demonstrate through the visualization of attention weights that our model learns to correlate attributes of the object referred in the instruction with visual representations and also show that the learnt textual representations are semantically meaningful as they follow vector arithmetic and are also consistent enough to induce translation between instructions in different natural languages. We also show that our model generalizes effectively to unseen scenarios and exhibit zero-shot generalization capabilities. In order to simulate the above described challenges, we introduce a new 2D environment for an agent to jointly learn visual and textual modalities. | **Paper Summary**
The paper studies the problem of navigating to a target object in a 2D grid environment by following given natural language description as well as receiving visual information as raw pixels. The proposed architecture consists of a convoutional neural network encoding visual input, gated recurrent unit encoding natural language descriptions, an attention mechanism fusing multimodal input, and a policy learning network. To verify the effectiveness of the proposed framework, a new environment is proposed. The environment is 2-D grid based and it consists of an agent, a list of objects with different attributes, and a list of obstacles. Agents perceive the environment throught raw pixels with a limited visible region, and they can perform actions to move in the environment to reach target objects.
The problem has been studied for a while and therefore it is not novel. The proposed framework is incremental. The proposed environment is trivial and therefore it is unclear if the proposed framework is able to scale up to a more complicated environment. The experiemental results do not support several claims stated in the paper. Overall, I would vote for rejection.
- This paper solves the problem of navigating to the target object specified by language instruction in a 2D grid environment. It requires understanding of language, language grounding for visual features, and navigating to the target object while avoiding non-target objects. An attention mechanism is used to map a language instruction into a set of 1x1 convolutional filters which are intended to distinguish visual features described in the instruction from others. The experimental results show that the proposed method performs better than other methods.
- This paper presents an end-to-end trainable model to navigate an agent through visual sources and natural language instructions. The model utilizes a proposed attention mechanism to draw correlation between the objects mentioned in the instructions with deep visual representations, without requiring any prior knowledge about these inputs. The experimental results demonstrate the effectiveness of the learnt textual representation and the zero-shot generalization capabilities to unseen scenarios.
**Paper Strengths**
- The paper proposes an interesting task which is a navigation task with language instructions. This is important yet relatively unxplored.
- The implementation details are included, including optimizers, learning rates with weight decayed, numbers of training epochs, the discount factor, etc.
- The attention mechanism used in the paper is reasonable and the learned language embedding clearly shows meaningful relationships between instructions.
- The learnt textual representation follows vector arithmetic, which enables the agent to perceive unseen instructions as a new combination of the attributes and perform zero-shot generalization.
**Paper Weaknesses**
- The problem of following natural language descriptions together with visual representations of environments is not completely novel. For example, Both the problem and the proposed method are similar to those already introduced in the Gated Attention method (Chaplot et al., 2017). Although the proposed method performs better than the prior work, the approach is incremental.
- The proposed environment is simple. The vocabulary size is 40 and the longest instruciton only consists of 9 words. Whether the proposed framework is able to deal with more complicated environments is not clear. The experimental results shown in Figure 5 is not convincing that the proposed method only took less than 20k iterations to perform almost perfectly. The proposed environment is small and simple compared to the related work. It would be better to test the proposed method in a similar scale with the existing 3D navigation environments (Chaplot et al., 2017 and Hermann et al., 2017).
- The novelty of the proposed framework is unclear. This work is not the first one which proposes the multimodal fusion network incorporating a CNN achitecture dealing with visual information and a GRU architecture encoding language instructions. Also, the proposed attention mechanism is an obvious choice.
- The shown visualized attention maps are not enough to support the contribution of proposing the attention mechanism. It is difficult to tell whether the model learns to attend to correct objects. Also, the effectiveness of incorporating the attention mechanism is unclear.
- The paper claims that the proposed framework is flexible and is able to handle a rich set of natural language descriptions. However, the experiemental results are not enough to support the claim.
- The presentaiton of the experiment is not space efficient at all.
- The reference of the related papers which fuse multimodal data (vision and language) are missing.
- Comapred to 8 pages was the suggested page limit, 13 pages is a bit too long.
- Stating captions of figures above figures is not recommended.
- It would be better to show where each 1x1 filter for multimodal fusion attends on the input image. Ideally, one filter should attend on the target object and others should attend on non-target objects. However, I wonder how RNN can generate filters to detect non-target objects given an instruction. Although Figure 6 and Figure 7 try to show insights about the proposed attention model, they don’t tell which kernel is in charge of which visual feature. Blurred attention maps in Figure 6 and 7 make it hard to interpret the behavior of the model.
- The graphs shown in the Figure 5 are hard to interpret because of their large variance. It would be better to smoothing curves, so that comparing methods clearly.
- For zero-shot generalization evaluation, there is no detail about the training steps and comparisons to other methods.
- A highly related paper (Hermann et al., 2017) is missing in the references.
- Since the instructions are simple, the model does not require attention mechanism on the textual sources. If the framework can take more complex language, might be worthwhile to try visual-text co-attention mechanism. Such demonstration will be more convincing.
- The attention maps of different attribute is not as clear as the paper stated. Why do we need several “non-target” objects highlight if one can learn to consolidate all of them?
- The interpretation of n in the paper is vague, the authors should also show qualitatively why n=5 is better than that of n=1,10. If the attention maps learnt are really focusing on different attributes, given more and more objects, shouldn’t n=10 have more information for the policy learning?
- The unseen scenario generalization should also include texture change on the grid environment and/or new attribute combinations on non-target objects to be more convincing.
- The contribution in the visual part is marginal.
** Preliminary Evaluation**
- The modality fusion technique which leads to the attention maps is an effective and seem to work well approach, however, the author should present more thorough ablated analysis. The overall architecture is elegant, but the capability of it to be extended to more complex environment is in doubt. The vector arithmetic of the learnt textual embedding is the key component to enable zero-shot generalization, while the effectiveness of this method is not convincing if more complex instructions such that it contains object-object relations or interactions are perceived by the agent. |
iclr_2018_Sy-tszZRZ | In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout. We investigate the complexity of such networks by studying the number of linear regions of the PWL function. Typically, a PWL function from a DNN can be seen as a large family of linear functions acting on millions of such regions. We directly build upon the work of Montúfar et al. (2014), Montúfar (2017), and Raghu et al. (2017) by refining the upper and lower bounds on the number of linear regions for rectified and maxout networks. In addition to achieving tighter bounds, we also develop a novel method to perform exact enumeration or counting of the number of linear regions with a mixed-integer linear formulation that maps the input space to output. We use this new capability to visualize how the number of linear regions change while training DNNs. | Paper Summary:
This paper looks at providing better bounds for the number of linear regions in the function represented by a deep neural network. It first recaps some of the setting: if a neural network has a piecewise linear activation function (e.g. relu, maxout), the final function computed by the network (before softmax) is also piecewise linear and divides up the input into polyhedral regions which are all different linear functions. These regions also have a correspondence with Activation Patterns, the active/inactive pattern of neurons over the entire network. Previous work [1], [2], has derived lower and upper bounds for the number of linear regions that a particular neural network architecture can have. This paper improves on the upper bound given by [2] and the lower bound given by [1]. They also provide a tight bound for the one dimensional input case. Finally, for small networks, they formulate finding linear regions as solving a linear program, and use this method to compute the number of linear regions on small networks during training on MNIST
Main Comments:
The paper is very well written and clearly states and explains the contributions. However, the new bounds proposed (Theorem 1, Theorem 6), seem like small improvements over the previously proposed bounds, with no other novel interpretations or insights into deep architectures. (The improvement on Zaslavsky's theorem is interesting.) The idea of counting the number of regions exactly by solving a linear program is interesting, but is not going to scale well, and as a result the experiments are on extremely small networks (width 8), which only achieve 90% accuracy on MNIST. It is therefore hard to be entirely convinced by the empirical conclusions that more linear regions is better. I would like to see the technique of counting linear regions used even approximately for larger networks, where even though the results are an approximation, the takeaways might be more insightful.
Overall, while the paper is well written and makes some interesting points, it presently isn't a significant enough contribution to warrant acceptance.
[1] On the number of linear regions of Deep Neural Networks, 2014, Montufar, Pascanu, Cho, Bengio
[2] On the expressive power of deep neural networks, 2017, Raghu, Poole, Kleinberg, Ganguli, Sohl-Dickstein |
iclr_2018_B1lMMx1CW | Workshop track -ICLR 2018 THE EFFECTIVENESS OF A TWO-LAYER NEURAL NET- WORK FOR RECOMMENDATIONS
We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books, Mobile Apps, Video and Music. It produces recommendations based on customer's implicit feedback history such as purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and auto-encoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories, and we observe significant improvements in an online A/B test. We also discuss key enhancements to the neural network model and describe our production pipeline. Finally we open-sourced our deep learning library which supports multi-gpu model parallel training. This is an important feature in building neural network based recommenders with large dimensionality of input and output data. | The paper proposes a new neural network based method for recommendation.
The main finding of the paper is that a relatively simple method works for recommendation, compared to other methods based on neural networks that have been recently proposed.
This contribution is not bad for an empirical paper. There's certainly not that much here that's groundbreaking methodologically, though it's certainly nice to know that a simple and scalable method works.
There's not much detail about the data (it is after all an industrial paper). It would certainly be helpful to know how well the proposed method performs on a few standard recommender systems benchmark datasets (compared to the same baselines), in order to get a sense as to whether the improvement is actually due to having a better model, versus being due to some unique attributes of this particular industrial dataset under consideration. As it is, I am a little concerned that this may be a method that happens to work well for the types of data the authors are considering but may not work elsewhere.
Other than that, it's nice to see an evaluation on real production data, and it's nice that the authors have provided enough info that the method should be (more or less) reproducible. There's some slight concern that maybe this paper would be better for the industry track of some conference, given that it's focused on an empirical evaluation rather than really making much of a methodological contribution. Again, this could be somewhat alleviated by evaluating on some standard and reproducible benchmarks. |
iclr_2018_r1BRfhiab | We consider neural network training, in applications in which there are many possible classes, but at test-time, the task is to identify only whether the given example belongs to a specific class, which can be different in different applications of the classifier. For instance, this is the case in an image search engine. We consider the Single Logit Classification (SLC) task: training the network so that at test-time, it would be possible to accurately identify if the example belongs to a given class, based only on the output logit for this class. We propose a natural principle, the Principle of Logit Separation, as a guideline for choosing and designing losses suitable for the SLC. We show that the cross-entropy loss function is not aligned with the Principle of Logit Separation. In contrast, there are known loss functions, as well as novel batch loss functions that we propose, which are aligned with this principle. In total, we study seven loss functions. Our experiments show that indeed in almost all cases, losses that are aligned with Principle of Logit Separation obtain a 20%-35% relative performance improvement in the SLC task, compared to losses that are not aligned with it. We therefore conclude that the Principle of Logit Separation sheds light on an important property of the most common loss functions used by neural network classifiers. Tensorflow code for optimizing the new batch losses will be made publicly available upon publication; A URL will be provided in the publication version of this manuscript. | The paper addresses the problem of a mismatch between training classification loss and a loss at test time. This is motivated by use cases in which multiclass classification problems are learned during training, but where binary or reduced multi-class classifications is performed at test time. The question for me is the following: if at test time, we have to solve "some" binary classification task, possibly drawn at random from a set of binary problems (this is not made precise in the paper), then why not optimize the same classification error or a surrogate loss at training time? Instead, the authors start with a multiclass problem, which may introduce a computational burden. when the number of classes is large as one needs to compute a properly normalized softmax. The authors now seem to ask, what if one were to use a multi-classification loss at training time, but then decides at test time that a binary classification of one-vs-all is asked for.
If one buys into the relevance of the setting, then of course, one is faced with the problem that the multiclass logits (aka raw scores) may not be calibrated to be used for binary classification by applying a fixed threshold. The authors call this sententiously "Principle of logit separation". Not too surprisingly, the standard multiclass losses do not have the desired property, however approaches that reduce multi-class to binary classification at training time do, namely unnormalized models with penalized log Z (self-normalization), the NCE approach, as well as (the natural in the proposed setting) binary classification loss. I find this almost a bit circular in the line of argumentation, but ok. It remains odd that while usually one has tried to reduce multiclass to binary, the authors go the opposite direction.
The main technical contribution of the paper is the batch-nornalization that makes sure that multiclass logits across mini-batches of data are better calibrated. One can almost think of that as an additional regularization. This seems interesting and does not create much overhead, if one applies mini-batched SGD optimization anyway. However, I feel this technique would need to be investigated with regard to general improvements in a multiclass setting and as such also benchmarked relative to other methods that could be applied. |
iclr_2018_SJJySbbAZ | TRAINING GANS WITH OPTIMISM
We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGANs is exactly a context of solving a zero-sum game with simultaneous noregret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam. | This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS). The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10.
This paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.The issue of last iterate versus average iterate for non convex-concave problems is also presented well.
The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game.
The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest. In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates). OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn't seem to be even an attempt at explaining this in the paper.
I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. This gives the impression that the experimental results might have been at least slightly "gamed".
For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality.
Specific comments:
Page 1, "no-regret dynamics in zero-sum games can very often lead to limit cycles": I don't think limit cycles are actually ever formally defined in the entire paper.
Page 3, "standard results in game theory and no-regret learning": These results should be either proven or cited.
Page 3: Don't the parameter spaces need to be bounded for these convergence results to hold?
Page 4, "it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm": For completeness, this should probably either be (quickly) proven or a reference should be provided.
Page 5, "the unique equilibrium of the above game is...for the discriminator to choose w=0": Why is w=0 necessary here?
Page 6, "We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A": Why is this true? This should either be proven or cited.
Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. In the Initialization section, it appears simply to be out of convenience.
Page 6, Theorem 1: It should be explicitly stated that this result doesn't provide a convergence rate, in contrast to the existing OMD results cited in the paper.
Page 7, "we considered momentum, Nesterov momentum and AdaGrad": Why isn't Adam used in this section if it is used in later experiments?
Page 7-8, "When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? None of the theoretical arguments presented earlier in the paper seem to even hint at this. The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. However, this doesn't even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD.
Page 8, "Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants": Learning rate sensitivity doesn't seem to be touched upon in the earlier discussion. Can these results be explained by theory?
Page 8, "we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training": These results don't mean much without error bars.
Page 8, "we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....": Why didn't this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times?
After reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7. Just as a note, Figure 3b is now very difficult to read. |
iclr_2018_S1Auv-WRZ | Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation (Krizhevsky et al., 2012) alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13% increase in accuracy in the low-data regime experiments in Omniglot (from 69% to 82%), EMNIST (73.9% to 76%) and VGG-Face (4.5% to 12%); in Matching Networks for Omniglot we observe an increase of 0.5% (from 96.9% to 97.4%) and an increase of 1.8% in EMNIST (from 59.5% to 61.3%). | This paper proposes a conditional Generative Adversarial Networks that is used for data augmentation. In order to evaluate the performance of the proposed model, they use Omniglot, EMNIST, and VGG-Faces datasets and uses in the meta-learning task and standard classification task in the low-data regime. The paper is well-written and consistent.
Even though this paper learns to do data-augmentation (which is very interesting ) rather than just simply applies some standard data augmentation techniques and shows improvements in some tasks, I am not convinced about novelty and originality of this paper, especially on the model side. To be more specific, the paper uses the previously proposed conditional GAN as the main component of their model. And for the one-shot learning tasks, it only trains the previously proposed models with these newly augmented data.
In addition, there are some other works that used GAN as a method for some version of data augmentation:
- RenderGAN: Generating Realistic Labeled Data
https://arxiv.org/abs/1611.01331
-Data Augmentation in Emotion Classification Using Generative Adversarial Networks
https://arxiv.org/abs/1711.00648
It is fair to say that their model shows improvement on the above tasks but this improvement comes with a cost of training of GAN network.
In summary, the idea of the paper is very interesting to learn data-augmentation but yet I am not convinced the current paper has enough novelty and contribution and see the contribution of paper as on more the application side rather than on model and problem side. That said I'd be happy to hear the argument of the author about my comments. |
iclr_2018_HyI5ro0pW | Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency. | The paper proposes to make the inner layers in a neural network be block diagonal, mainly as an alternative to pruning. The implementation of this seems straightforward, and can be done either via initialization or via pruning on the off-diagonals. There are a few ideas the paper discusses:
(1) compared to pruning weight matrices and making them sparse, block diagonal matrices are more efficient since they utilize level 3 BLAS rather than sparse operations which have significant overhead and are not "worth it" until the matrix is extremely sparse. I think this case is well supported via their experiments, and I largely agree.
(2) that therefore, block diagonal layers lead to more efficient networks. This point is murkier, because the paper doesn't discuss possible increases in *training time* (due to increased number of iterations) in much detail. At if we only care about running the net, then reducing the time from 0.4s to 0.2s doesn't seem to be that useful (maybe it is for real-time predictions? Please cite some work in that case)
(3) to summarize points (1) and (2), block diagonal architectures are a nice alternative to pruned architectures, with similar accuracy, and more benefit to speed (mainly speed at run-time, or speed of a single iteration, not necessarily speed to train)
[as I am not primarly a neural net researcher, I had always thought pruning was done to decrease over-fitting, not to increase computation speed, so this was a surprise to me; also note that the sparse matrix format can increase runtime if implemented as a sparse object, as demonstrated in this paper, but one could always pretend it is sparse, so you never ought to be slower with a sparse matrix]
(4) there is some vague connection to random matrices, with some limited experiments that are consistent with this observation but far from establish it, and without any theoretical analysis (Martingale or Markov chain theory)
This is an experimental/methods paper that proposes a new algorithm, explained only in general details, and backs up it up with two reasonable experiments (that do a good job of convincing me of point (1) above). The authors seem to restrict themselves to convolutional networks in the first paragraph (and experiments) but don't discuss the implications or reasons of this assumption. The authors seem to understand the literature well, and not being an expert myself, I have the impression they are doing a fair job.
The paper could have gone farther experimentally (or theoretically) in my opinion. For example, with sparse and block diagonal matrices, reducing the size of the matrix to fit into the cache on the GPU must obviously make a difference, but this did not seem to be investigated. I was also wondering about when 2 or more layers are block sparse, do these blocks overlap? i.e., are they randomly permuted between layers so that the blocks mix? And even with a single block, does it matter what permutation you use? (or perhaps does it not matter due to the convolutional structure?)
The section on the variance of the weights is rather unclear mathematically, starting with the abstract and even continuing into the paper. We are talking about sample variance? What does DeltaVar mean in eq (2)? The Marchenko-Pastur theorem seemed to even be imprecise, since if y>1, then a < 0, implying that there is a nonzero chance that the positive semi-definite matrix XX' has a negative eigenvalue.
I agree this relationship with random matrices could be interesting, but it seems too vague right now. Is there some central limit theorem explanation? Are you sure that you've run enough iterations to fully converge? (Fig 4 was still trending up for b1=64). Was it due to the convolutional net structure (you could test this)? Or, perhaps train a network on two datasets, one which is not learnable (iid random labels), and one which is very easily learnable (e.g., linearly separable). Would this affect the distributions?
Furthermore, I think I misunderstood parts, because the scaling in MNIST and CIFAR was different and I didn't see why (for MNIST, it was proportional to block size, and for CIFAR it was independent of block size almost).
Minor comment: last paragraph of 4.1, comparing with Sindhwani et al., was confusing to me. Why was this mentioned? And it doesn't seem to be comparable. I have no idea what "Toeplitz (3)" is. |
iclr_2018_rylejExC- | Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have any convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop a preprocessing strategy and two control variate based algorithms to further reduce the receptive field size. Our algorithms are guaranteed to converge to GCN's local optimum regardless of the neighbor sampling size. Empirical results show that our algorithms have a similar convergence speed per epoch with the exact algorithm even using only two neighbors per node. The time consumption of our algorithm on the Reddit dataset is only one fifth of previous neighbor sampling algorithms. | The paper proposes a method to speed up the training of graph convolutional networks, which are quite slow for large graphs. The key insight is to improve the estimates of the average neighbor activations (via neighbor sampling) so that we can either sample less neighbors or have higher accuracy for the same number of sampled neighbors. The idea is quite simple: estimate the current average neighbor activations as a delta over the minibatch running average. I was hoping the method would also include importance sampling, but it doesn’t. The assumption that activations in a graph convolution are independent Gaussians is quite odd (and unproven).
Quality: Statistically, the paper seems sound. There are some odd assumptions (independent Gaussian activations in a graph convolution embedding?!?) but otherwise the proposed methodology is rather straightforward.
Clarity: It is well written and the reader is able to follow most of the details. I wish the authors had spent more time discussing the independent Gaussian assumption, rather than just arguing that a graph convolution (where units are not interacting through a simple grid like in a CNN) is equivalent to the setting of Wang and Manning (I don’t see the equivalence). Wang and Manning are looking at MLPs, not even CNNs, which clearly have more independent activations than a CNN or a graph convolution.
Significance: Not very significant. The problem of computing better averages for a specific problem (neighbor embedding average) seems a bit too narrow. The solution is straightforward, while some of the approximations make some odd simplifying assumptions (independent activations in a convolution, infinitesimal learning rates).
Theorem 2 is not too useful, unfortunately: Showing that the estimated gradient is asymptotically unbiased with learning rates approaching zero over Lipchitz functions does not seem like an useful statement. Learning rates will never be close enough to zero (specially for large batch sizes). And if the running activation average converges to the true value, the training is probably over. The method should show it helps when the values are oscillating in the early stages of the training, not when the training is done near the local optimum. |
iclr_2018_HkuGJ3kCb | ALL-BUT-THE-TOP: SIMPLE AND EFFECTIVE POST- PROCESSING FOR WORD REPRESENTATIONS
Real-valued word representations have transformed NLP applications; popular examples are word2vec and GloVe, recognized for their ability to capture linguistic regularities. In this paper, we demonstrate a very simple, and yet counter-intuitive, postprocessing technique -eliminate the common mean vector and a few top dominating directions from the word vectors -that renders off-the-shelf representations even stronger. The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textural similarity and text classification) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones. | This paper proposes a simple post-processing technique for word representations designed to improve representational quality and performance on downstream tasks. The procedure involves mean subtraction followed by projecting out the first D principle directions and is motivated by improving isotropy of the partition function. Extensive empirical analysis supports the efficacy of the approach.
The idea of post-processing word embeddings to improve their performance is not new, but I believe the specific procedure and its connection to the concept of isotropy has not been investigated previously. Relative to other post-processing techniques, this method has a fair amount of theoretical justification, particularly as described in Appendix A. I think the experiments are reasonably comprehensive. All told, I think this is a good paper, but I do have some comments and questions that I think should be addressed before publication.
1) I think it is useful to analyze the distribution of singular values of the matrix of word vectors. However, I did not find the heuristic analysis based on the visual appearance of these distributions to be convincing. For example, in Fig. 1, it is not clear to me that there exists a separation between regimes of exponential decay and rough constancy. It would be ideal if a more quantitative metric is established that captures the main qualitative behavior alluded to here.
Furthermore, the vocabulary size is likely to have a strong effect on the shape of the distributions. Are the plots in Fig. 4 for the same vocabulary size? Related to this, the dimensionality of the representation will have a strong effect on the shape, and this should be controlled for in Fig. 8. One way to do this would be to instead plot the density of singular values. Finally, for the Gaussian matrix simulations, in the asymptotic limit, the density of singular values depends only on the ratio of dimensions, i.e. the vector dimension to the vocabulary size. Fig. 4/8 might be more revealing if this ratio were controlled for.
2) It would be useful to describe why isotropy of the partition function is the goal, as opposed to isotropy of the vectors themselves. This may be argued in Arora et al. (2016), but summarizing that argument in this paper would be helpful. In fact, an additional experiment that would be very valuable would be to investigate empirically which form of isotropy is more effective in governing performance. One way to do this would be to enforce approximate isotropy of the partition function without also enforcing isotropy of the vectors themselves. Practically speaking, one might imagine doing this by requiring I = 1 to second order without also requiring that the mean vanish. I think this would allow for \sigma_max > \sigma_min while still satisfying I = 1 to second order. (But this is just off the top of my head -- there may be better ways to conduct this experiment).
It is not clear to me why the experiment leading to Table 2 is a good proxy for the exact computation of I. It would be great if there were some mathematical justification for this approximation.
Why does Fig. 3 use D=10, 20 when much smaller D are considered elsewhere? Also I think a log scale on the x-axis might be more informative.
3) It would be good to mention other forms of post-processing, especially in the context of word similarity. For example, in the original paper, GloVe advocates averaging the target and context vector representations, and normalizing across the feature dimension before computing cosine similarity.
4) I think it's likely that there is a strong connection between the optimal value of D and the frequency distribution of words in the evaluation dataset. While the paper does mention that D may depend on specifics of the dataset, etc., I would expect frequency-dependence to be the main factor, and it might be worth exploring this effect explicitly. |
iclr_2018_S1pWFzbAW | The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496× with the same model accuracy. This results in up to a 1.51× improvement over the state-of-the-art. | This paper proposes an interesting approach to compress the weights of a network for storage or transmission purposes. My understanding is, at inference, the network is 'recovered' therefore there is no difference in processing time (slight differences in accuracy due to the approximation in recovering the weights).
- The idea is nice although it's applicability is limited as it is only for distribution of the model and storing (is storage really a problem?).
Method:
- the idea of using the Bloomier filter is new to me. However, the paper is miss-leading as the filtering is a minor part of the complete process. The paper introduces a complete pipeline including quantization, and pruning to maximize the benefits of the filter and an additional (optional) step to achieve further compression.
- The method / idea seems simply and easy to reproduce (except the subsequent steps that are not clearly detailed).
Clarity
- The paper could improve its clarity. At the moment, the Bloomier is the core but needs many other components to make it effective. Those components are not detailed to the level of being reproducible.
- One interesting point is the self-implementation of the Deep compression algorithm. The paper claims this is a competitive representation as it achieves better compression than the original one. However, those numbers are not clear in tables (only in table 3 numbers seem to be equivalent to the ones in the text). This needs clarification, CSR achieves 81.8% according to Table 2 and 119 according to the text.
Results:
- Current results are interesting. However I have several concerns:
1) it is not clear to me why assuming similar performance. While Bloomier is weightless the complete process involves many retraining steps involving performance loss. Analysis on this would be nice to see (I doubt it ends exactly at the same number). Section 3 explicitly suggest there is the need of retraining to mitigate the effect of false positives which is then increased with pruning and quantization. Therefore, would be nice to see the impact in accuracy (even it is not the main focus of the work).
2) Resutls are focused on fully connected layers which carry (for the given models) the larger number of weights (and therefore it is easy to get large compression numbers). What would happen in newer models where the fully connected layer is minimal compared to conv. layers? What about the accuracy impact there? Let's say in a Resnet-34.
3) I would like to see further analysis on why Bloomier filter encoding improves accuracy (or is a typo and meant to be error?) by 2%. This is a large improvement without training from scractch.
4) It is interesting to me how the retraining process is 'hidden' all over the paper. At the beginning it is claimed that it takes about one hour for VGG-16 to compute the Bloomier filters. Howerver, that is only a minimal portion of the entire pipeline. Later in the experimental section it is mentioned that 'tens of epochs' are needed for retraining (assuming to compensate for errors) after retraining for compensating l1 pruning?.... tens of epochs is a significant portion of the entire training process assuming VGG is trained for 90epochs max.
5) Interestingly, as mentioned in the paper, this is 'static compression'. That is, the model needs to be completely 'restored' before inference. This is miss-leading as an embedded device will need the same requirements as any other at inferece time(or maybe I am missing something). That is, the benefit is mainly for storing and transmission.
6) I would like to see the sensibility analysis with respect to t and the number of clusters.
7) As mentioned before, LeNet is great but would be nice to see more complicated models (even resnet on CIFAR). These models are not only large in terms of parameters but also quite sensitive to modifications in the weight structure.
8) Results are focused on a single layer. What happens if all the layers are considered at the same time? Here I am also concerned about the retraining process (fixing one layer and retraining the deeper ones). How is this done using only fully connected layers? What is the impact of doing it all over the network (let's say VGG-16 from the first convolutional layer to the very last).
Summary:
All in all, the idea has potential but there are many missing details. I would like to see clearer and more comprehensive results in terms of modern models and in the complete model, not only in the FC layer, including accuracy impact. |
iclr_2018_SkAK2jg0b | Transfer learning for feature extraction can be used to exploit deep representations in contexts where there is very few training data, where there are limited computational resources, or when tuning the hyper-parameters needed for training is not an option. While previous contributions to feature extraction propose embeddings based on a single layer of the network, in this paper we propose a full-network embedding which successfully integrates convolutional and fully connected features, coming from all layers of a deep convolutional neural network. To do so, the embedding normalizes features in the context of the problem, and discretizes their values to reduce noise and regularize the embedding space. Significantly, this also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used for obtaining the initial features. The performance gap in classification accuracy between thoroughly tuned solutions and the full-network embedding is also reduced, which makes of the proposed approach a competitive solution for a large set of applications. | The paper addresses the scenario when using a pretrained deep network as learnt feature representation for another (small) task where retraining is not an option or not desired. In this situation it proposes to use all layers of the network to extract feature from, instead of only one layer.
Then it proposes to standardize different dimensions of the features based on their response on the original task. Finally, it discretize each dimension into {-1, 0, 1} to compress the final concatenated feature representation.
Doing this, it shows improvements over using a single layer for 9 target image classification datasets including object, scene, texture, material, and animals.
The reviewer does not find the paper suitable for publication at ICLR due to the following reasons:
- The paper is incremental with limited novelty.
- the results are not encouraging
- the pipeline of standardization, discretization is relatively costly, the final feature vector still large.
- combining different layers, as the only contribution of the paper, has been done in the literature before, for instance:
“The Treasure beneath Convolutional Layers: Cross-convolutional-layer Pooling
for Image Classification” CVPR 2016 |
iclr_2018_SyKoKWbC- | In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art. | The paper proposes to replace single-sample discriminators in adversarial training with discriminators that explicitly operate on distributions of examples, so as to incentivize the generator to cover the full distribution of the training data and not collapse to isolated modes.
The idea of avoiding mode collapse by providing multiple samples to the discriminator is not new; the paper acknowledges prior work on minibatch discrimination but does not really describe the differences with previous work in any technical detail. Not being highly familiar with this literature, my reading is that the scheme in this paper grounds out into a somewhat different architecture than previous minibatch discriminators, with a nice interpretation in terms of a sample-based approximation to a neural mean embedding. However the paper does not provide any empirical evidence that their approach actually works better than previous approaches to minibatch discrimination. By comparing only to one-sample discriminators it leaves open the (a priori quite plausible) possibility that minibatch discrimination is generally a good idea but that other architectures might work equally well or better, i.e., the experiments do not demonstrate that the MMD machinery that forms the core of the paper has any real purchase.
The paper also proposes a two-sample objective DAN-2S, in which the discriminator is asked to classify two sets of samples as coming from the same or different distributions. This is an interesting approach, although empirically it does not appear to have any advantage over the simpler DAN-S -- do the authors agree with this interpretation? If so it is still a worthwhile negative result, but the paper should make this conclusion explicit. Alternately if there are cases when the two-sample test is actually recommended, that should be made explicit as well.
Overall this paper seems borderline -- a nice theoretical story, grounding out into a simple architecture that does seem to work in practice (the domain adaptation results are promising), but with somewhat sloppy writing and experimentation that doesn't clearly demonstrate the value of the proposed approach. I hope the authors continue to improve the paper by comparing to other minibatch discrimination techniques. It would also be helpful to see value on a real-world task where mode collapse is explicitly seen as a problem (and/or to provide some intuition for why this would be the case in the Amazon reviews dataset).
Specific comments:
- Eqn (2.2) is described as representing the limit of a converged discriminator, but it looks like this is just the general gradient of the objective --- where does D* enter into the picture?
- Fig 1: the label R is never explained; why not just use P_x?
- Section 5.1 "we use the pure distributional objective for DAN (i.e., setting λ != 0 in (3.5))" should this be λ = 0?
- "Results" in the domain adaptation experiments are not clearly explained -- what do the reported numbers represent? (presumably accuracy(stddev) but the figure caption should say this). It is also silly to report accuracy to 2 decimal places when they are clearly not significant at that level. |
iclr_2018_Hk0wHx-RW | Published as a conference paper at ICLR 2018 LEARNING SPARSE LATENT REPRESENTATIONS WITH THE DEEP COPULA INFORMATION BOTTLENECK
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data. | This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.
(significance) This is a promising idea. This paper builds on the information theoretic perspective of representation learning, and makes progress towards characterizing what makes for a good representation. Invariance to transforms of the marginal distributions is clearly a useful property, and the proposed method seems effective in this regard.
Unfortunately, I do not believe the paper is ready for publication as it stands, as it suffers from lack of clarity and the experimentation is limited in scope.
(clarity) While Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the “implicit form” are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? There are also many missing details in the experimental section: how were the number of “active” components selected ? Which versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly. I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.
(quality) The experiments are interesting and seem well executed. Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. Similarly, the representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.
Pros:
* Theoretically well motivated
* Promising results on synthetic task
* Potential for impact
Cons:
* Paper suffers from lack of clarity (method and experimental section)
* Lack of ablative / introspective experiments
* Weak empirical results (small or toy datasets only). |
iclr_2018_BkDB51WR- | We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for temporal evolution of a probability density function. We use a softmax layer for a numerical discretization of a smooth probability density functions, which transforms a function approximation problem to a classification task. Explicit and implicit regularization strategies are introduced to impose a smoothness condition on the estimated probability distribution. A Monte Carlo procedure to compute the temporal evolution of the distribution for a multiple-step forecast is presented. The evaluation of the proposed algorithm on three synthetic and two real data sets shows advantage over the compared baselines. | Interesting ideas that extend LSTM to produce probabilistic forecasts for univariate time series, experiments are okay. Unclear if this would work at all in higher-dimensional time series. It is also unclear to me what are the sources of the uncertainties captured.
The author proposed to incorporate 2 different discretisation techniques into LSTM, in order to produce probabilistic forecasts of univariate time series. The proposed approach deviates from the Bayesian framework where there are well-defined priors on the model, and the parameter uncertainties are subsequently updated to incorporate information from the observed data, and propagated to the forecasts. Instead, the conditional density p(y_t|y_{1:t-1|, \theta}) was discretised by 1 of the 2 proposed schemes and parameterised by a LSTM. The LSTM was trained using discretised data and cross-entropy loss with regularisations to account for ordering of the discretised labels. Therefore, the uncertainties produced by the model appear to be a black-box. It is probably unlikely that the discretisation method can be generalised to high-dimensional setting?
Quality: The experiments with synthetic data sufficiently showed that the model can produce good forecasts and predictive standard deviations that agree with the ground truth. In the experiments with real data, it's unclear how good the uncertainties produced by the model are. It may be useful to compare to the uncertainty produced by a GP with suitable kernels. In Fig 6c, the 95pct CI looks more or less constant over time. Is there an explanation for that?
Clarity: The paper is well-written. The presentations of the ideas are pretty clear.
Originality: Above average. I think the regularisation techniques proposed to preserve the ordering of the discretised class label are quite clever.
Significance: Average. It would be excellent if the authors can extend this to higher dimensional time series.
I'm unsure about the correctness of Algorithm 1 as I don't have knowledge in SMC. |
iclr_2018_ry8dvM-R- | Published as a conference paper at ICLR 2018 ROUTING NETWORKS: ADAPTIVE SELECTION OF NON-LINEAR FUNCTIONS FOR MULTI-TASK LEARN- ING
Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network -for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we obtain cross-stitch performance levels with an 85% reduction in training time. | Summary:
The paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply. This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task. It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular.
- page 4:
In algorithm 2, line 6, I do not understand the reward computation. It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on \gamma. In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain. This issue should be corrected or explicitly stated.
The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage.
- Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output.
- The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented. In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized.
- The WPL algorithm is not clear to me
o In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to \hat{R}? how?
o Is it assumed that the actions are binary?
o I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of a-symmetry between positive and negative gradients?
- Page 6:
o It is not clear why MNist is tested over 200 examples, where there is a much larger test set available
o In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions?
- Page 7:
The results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization.
- Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks. However, this is not-surprising, given that the action-based reward (whos strength is unclear) seems to promote such winner-takes-all behavior.
Overall:
- The paper is highly unclear in its method representation
o There is no unified clear notation. The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors. In notation existing, there are occasional notation errors.
o The reward is a) not clear, and b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated. Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant.
o The state representation used is not clear, and if I understand correctly, it includes only the current depth. This is a severely limited state representation, which does not enable to learn actions based on intermediate results
o The continuous versions of the RL algorithms are not explained at all: no state representation, nor optimization is described.
o The presentation suffers from severe over-generalization and lack of clarity, which disabled my ability to understand the network and algorithms for a specific case. Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, will be explained with clear notation end-to-end
Beyond the clarity issues, I suspect also that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking. However, it is hard to judge these due to lack of clarity.
After revision:
- Most of the clarity issues were handled well, and the paper now read nicely
- It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation). Instead routing depends on the task and depth only. This is still interesting, but is far from reaching context-dependent routing.
- The results presented are nice and show that task-dependent routing may be better than plain baseline or the stiching alternative. However, since this is a task transfer issue, I believe several data size points should be tested. For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related task are less required for good performance).
- |
iclr_2018_rJoXrxZAZ | This paper introduces HybridNet, a hybrid neural network to speed-up autoregressive models for raw audio waveform generation. As an example, we propose a hybrid model that combines an autoregressive network named WaveNet and a conventional LSTM model to address speech synthesis. Instead of generating one sample per time-step, the proposed HybridNet generates multiple samples per time-step by exploiting the long-term memory utilization property of LSTMs. In the evaluation, when applied to text-to-speech, HybridNet yields state-of-art performance. HybridNet achieves a 3.83 subjective 5-scale mean opinion score on US English, largely outperforming the same size WaveNet in terms of naturalness and provide 2x speed up at inference. | This paper presents HybridNet, a neural speech (and other audio) synthesis system (vocoder) that combines the popular and effective WaveNet model with an LSTM with the goal of offering a model with faster inference-time audio generation.
Summary: The proposed model, HybridNet is a fairly straightforward variation of WaveNet and thus the paper offers a relatively low novelty. There is also a lack of detail regarding the human judgement experiments that make the significance of the results difficult to interpret.
Low novelty of approach / impact assessment:
The proposed model is based closely on WaveNet, an existing state-of-the-art vocoder model. The proposal here is to extend WaveNet to include an LSTM that will generate samples between WaveNet samples -- thus allowing WaveNet to sample at a lower sample frequency. WaveNet is known for being relatively slow at test-time generation time, thus allowing it to run at a lower sample frequency should decrease generation time. The introduction of a local LSTM is perhaps not a sufficiently significant innovation.
Another issue that lowers the assessment of the likely impact of this paper is that there are already a number of alternative mechanism to deal with the sampling speed of WaveNet. In particular, the cited method of Ramachandran et al (2017) uses caching and other tricks to achieve a speed up of 21 times over WaveNet (compared to the 2-4 times speed up of the proposed method). The authors suggest that these are orthogonal strategies that can be combined, but the combination is not attempted in this paper. There are also other methods such as sampleRNN (Mehri et al. 2017) that are faster than WaveNet at inference time. The authors do not compare to this model.
Inappropriate evaluation:
While the model is motivated by the need to reduce the generation of WaveNet sampling, the evaluation is largely based on the quality of the sampling rather than the speed of sampling. The results are roughly calibrated to demonstrate that HybridNet produces higher quality samples when (roughly) adjusted for sampling time. The more appropriate basis of comparison is to compare sample time as a function of sample quality.
Experiments:
Few details are provided regarding the human judgment experiments with Mechanical Turkers. As a result it is difficulty to assess the appropriateness of the evaluation and therefore the significance of the findings. I would also be much more comfortable with this quality assessment if I was able to hear the samples for myself and compare the quality of the WaveNet samples with HybridNet samples. I will also like to compare the WaveNet samples generated by the authors' implementation with the WaveNet samples posted by van den Oord et al (2017).
Minor comments / questions:
How, specifically, is validation error defined in the experiments?
There are a few language glitches distributed throughout the paper. |
iclr_2018_rJSr0GZR- | Most deep latent factor models choose simple priors for simplicity, tractability or not knowing what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate better image quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we present its ability to do cross-domain translation in a text-to-image synthesis task. | This paper propose a simple extension of the adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using Gaussian prior, the propose algorithm uses a "code generator" network to warp the gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated.
Pros:
- The proposed idea is simple and easy to implement
- The results show improvement in terms of visual quality
Cons:
- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, this helps avoid model collapse. Adding a complicated code generation network brings too much flexibility for the prior part. This makes the prior and posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsed to two different points). As a result, this weakens the regularisation over the latent encoder space.
- The above mentioned could be verified through qualitative results. As shown in Fig. 5. I believe this is a result due to the fact that the adversarial loss in the regularisation phase does not a significant influence there.
- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How to make sure it's not a problem of implementation or the model wasn't trapped into a bad local optima / saddle points. Could you justify this?
- Contributions; this paper propose an improvement over a existing model. However, neither the idea/insights it brought can be applied onto other generative models, nor the improvement bring a significant improvement over the-state-of-the-arts. I am wondering what the community will learn from this paper, or what the author would like to claim as significant contributions. |
iclr_2018_Syx6bz-Ab | Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from inthe-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policybased reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%. | This paper presents a new approach to support the conversion from natural language to database queries.
One of the major contributions of the work is the introduction of a new real-world benchmark dataset based on questions over Wikipedia. The scale of the data set is significantly larger than any existing ones. However, from the technical perspective, the reviewer feels this work has limited novelty and does not advance the research frontier by much. The detailed comments are listed below.
1) Limitation of the dataset: While the authors claim this is a general approach to support seq2sql, their dataset only covers simple queries in form of aggregate-where-select structure. Therefore, their proposed approach is actually an advanced version of template filling, which considers the expression/predicate for one of the three operators at a time, e.g., (Giordani and Moschitti, 2012).
2) Limitation of generalization: Since the design of the algorithms is purely based on their own WikiSQL dataset, the reviewer doubts if their approach could be generalized to handle more complicated SQL queries, e.g., (Li and Jagadish, 2014). The high complexity of real-world SQL stems from the challenges on the appropriate connections between tables with primary/foreign keys and recursive/nested queries.
3) Comparisons to existing approaches: Since it is a template-based approach in nature, the author should shrink the problem scope in their abstract/introduction and compare against existing template approaches. While there are tons of semantic parsing works, which grow exponentially fast in last two years, these works are actually handling more general problems than this submission does. It thus makes sense when the performance of semantic parsing approaches on a constrained domain, such as WikiSQL, is not comparable to the proposal in this submission. However, that only proves their method is fully optimized for their own template.
As a conclusion, the reviewer believes the problem scope they solve is much smaller than their claim, which makes the submission slightly below the bar of ICLR. The authors must carefully consider how their proposed approach could be generalized to handle wider workload beyond their own WikiSQL dataset.
PS, After reading the comments on OpenReview, the reviewer feels recent studies, e.g., (Guu et al., ACL 2017), (Mou et al, ICML 2017) and (Yin et al., IJCAI 2016), deserve more discussions in the submission because they are strongly relevant and published on peer-reviewed conferences. |
ReviewRobot Dataset
This dataset contains the raw dataset from the ReviewRobot work. It was curated by Wang et al. 2020 for the purpose of explainable peer review generation of research papers.
Dataset Details
Dataset Description
The raw research paper text (extracted using Grobid by the authors) and the peer reviews are made available here. Each paper can have multiple reviews, we only keep the longest review for each paper.
Dataset Sources [optional]
- Repository: https://github.com/EagleW/ReviewRobot/tree/master
- Paper: ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis
Citation
BibTeX:
@inproceedings{wang-etal-2020-reviewrobot, title = "{R}eview{R}obot: Explainable Paper Review Generation based on Knowledge Synthesis", author = "Wang, Qingyun and Zeng, Qi and Huang, Lifu and Knight, Kevin and Ji, Heng and Rajani, Nazneen Fatema", booktitle = "Proceedings of the 13th International Conference on Natural Language Generation", month = dec, year = "2020", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.inlg-1.44", pages = "384--397" }
- Downloads last month
- 59