id | paper_text | review |
---|---|---|
iclr_2020_HygqFlBtPS | Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness. In principle, convex relaxation can provide tight bounds if the solution to the relaxed problem is feasible for the original non-convex problem. Therefore, we propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness. In all of our experiments, the proposed regularizers result in higher certified accuracy than non-regularized baselines. | Summary:
The aim of the paper is to improve verified training. One of the problems with verified training is the looseness of the bounds employed, so the authors suggest incorporating a measure of that looseness into the training loss. The approach is based on a reformulation of the relaxation of Weng et al.
Comments:
Page 2: "it can certify a broader class of adversaries that IBP cannot certify, like the ℓ2 adversaries." You can definitely use IBP to verify properties against L2-adversaries. It is simply a matter of changing the way the bound is propagated through the first layer; a short sketch is given below.
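To make this concrete (a standard sketch, not taken from the paper under review): for an ℓ2 ball of radius \epsilon around the input, the pre-activations of the first affine layer can be bounded coordinate-wise, after which ordinary interval bound propagation applies unchanged to the remaining layers. Writing w_i for the i-th row of the first weight matrix W,
\[
z = W(x+\delta) + b, \quad \|\delta\|_2 \le \epsilon
\;\Longrightarrow\;
z_i \in \big[\, w_i^\top x + b_i - \epsilon\,\|w_i\|_2,\;\; w_i^\top x + b_i + \epsilon\,\|w_i\|_2 \,\big],
\]
which follows from Cauchy-Schwarz.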
Page 3: It's a bit pedantic, but the convex relaxation of Ehlers (middle of figure 1) is not the optimal convex relaxation. It is optimal only if you assume that all ReLU non-linearities are relaxed independently. See the work by Anderson et al. for some examples.
Page 5, section 4: "We investigate the gap between the optimal convex relaxation in Eq. O" There is a bit of confusion in this section. Eq. O is not the optimal convex relaxation; it's the hard non-convex problem.
Section 4.1 bothers me. Equation C is the relaxed version of equation O, so they are only going to be equal if there is essentially no relaxation going on. Saying that it's possible to check whether the equivalence exists is a bit weird. The only case where this can happen is if all the terms in the sum over I_i are zero, which essentially means that no ReLU is ambiguous (or if the c·W terms are all positive, but that would be problematic during the optimization of the intermediate bounds, given that c would then make them take both signs).
Page 5, section 4.2: The authors suggest minimizing d, the gap between the value of the bound obtained, and the value of forwarding the solution of the relaxation through the actual network. Essentially, this would amount to maximizing the lower bound (which all verified training already does), at the same time as minimizing the value of the margin (p_O) on a point of the neighborhood for which we want robustness (x + delta_0). Minimizing the value of the margin is the opposite of what we would want to do, so I'm not surprised by the observation of the author that this doesn't work well.
The conclusion of the section that d can not be optimized to 0 also seems quite obvious if you think about what the problem is.
Section 4.3:
"the optimal solution of C can only be on the boundary of the feasible set." -> There is a subtlety here that I think the authors don't address. The three points they identify are the only feasible optimal solutions for solving a linear program over the feasible domain given by the relaxation of one ReLU but, when solving over the whole of C, the solution needs to be on the boundary of the feasible domain of C, which is larger than those three points.
The whole section is quite convoluted and makes very strong assumptions. For Proposition 1, the condition x \in S(\delta) means that all the intermediate bounds in the network must have been tight (so that the actual forwarding of an x can match the upper or lower bound used in the relaxation), and that the optimal solution of the relaxation requires all intermediate points to be either at their maximum or at their minimum. The only case I can visualise for this is essentially once again the case where there are no ambiguous ReLUs and the full thing is linear.
Regarding the experiments section, it would be beneficial to include in table 1 the results of Gowal et al. (On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models) for better context. The paper is already cited, so it should have been possible to include those numbers, which are often better than the ones reported here.
The comparison is included in table 2, where the baseline is beaten, but this uses the training method of CROWN-IBP, and it seems like most of the improvement is due to CROWN-IBP.
Typos/minor details:
Page 2: " In addition, a bound based on semi-definite programming (SDP) relaxation was developed and minimized as the objective Raghunathan et al. (2018). (Wong & Kolter, 2017) presents an upper bound" -> citation format
Page 8: "CORWN-IBP "
Opinion:
I think that the analysis section is pretty confusing and needs to be re-thought. It provides a lot of complex discussion of when the relaxation will be exact, without really identifying that it will be when you have very few ambiguous ReLUs. I think that there might be a few parallels to identify between the regularizer proposed and the ReLU stability one of Xiao et al. (ICLR 2019) from that aspect. The experimental results are not entirely convincing due to the lack of certain baselines. |
iclr_2020_rkx1b64Fvr | Recently, deep learning has made extraordinary achievements in text classification. However, most of present models, especially convolutional neural network (CNN), do not extract long-range associations, global representations, and hierarchical features well due to their relatively shallow and simple structures. This causes a negative effect on text classification. Moreover, we find that there are many express methods of texts. It is appropriate to design the multi-input model to improve the classification effect. But most of models of text classification only use words or characters and do not use the multi-input model. Inspired by the above points and Densenet (Huang et al. (2017)), we propose a new text classification model, which uses words, characters, and labels as input. The model, which is a deep CNN with a novel attention mechanism, can effectively leverage the input information and solve the above issues of the shallow model. We conduct experiments on six large text classification datasets. Our model achieves the state of the art results on all datasets compared to multiple baseline models. | Summary
This paper introduces a new model architecture for doing text classification. The main contribution is proposing a deeper CNN approach, using both word and character embeddings (as well as label embeddings with attention). The paper claims improved performance over baselines.
Decision
I reject the paper for 3 main reasons:
1) Very misleading claims regarding establishing a new state of the art. The baselines used for comparison don't include any of the best existing published results.
2) Lack of positioning within the literature. In particular, no mention nor discussion of Transformer (self-attention) networks, including BERT and XLNet approaches, which are the state of the art in text classification.
3) Lack of justification/explanation for the proposed architecture. One key argument made is that current models are shallow, but it appears that only CNN models are considered for that comparison. More discussion is needed to understand why the new aspects of the proposed network are importantly different from other existing approaches.
Additional details for decision
The results from this paper are significantly inferior to the best results published. With a few quick searches, I found that there are several approaches performing better than the proposed model on every dataset considered in the analysis, as you can see below.
http://nlpprogress.com/english/text_classification.html
https://github.com/sebastianruder/NLP-progress/blob/master/english/sentiment_analysis.md
https://paperswithcode.com/sota/text-classification-on-yahoo-answers
Extra notes (not factoring in decision)
- Consider spacing out the 3 rightmost blocks in Figure 1; I found the layout confusing and there's space available.
- In section 3, I would have liked more explanation for the motivation of the various design choices. |
iclr_2020_B1xDq2EFDH | Despite the impressive performance of deep neural networks (DNNs) on numerous learning tasks, they still exhibit uncouth behaviours. One puzzling behaviour is the subtle sensitive reaction of DNNs to various noise attacks. Such a nuisance has strengthened the line of research around developing and training noise-robust networks. In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input. We provide an efficient and simple approach to approximate such a regularizer for arbitrarily deep networks. This is done by leveraging the analytic expression of the output mean of a shallow neural network, avoiding the need for memory and computation expensive data augmentation. We conduct extensive experiments on LeNet and AlexNet on various datasets including MNIST, CIFAR10, and CIFAR100 to demonstrate the effectiveness of our proposed regularizer. In particular, we show that networks that are trained with the proposed regularizer benefit from a boost in robustness against Gaussian noise to an equivalent amount of performing 3-21 folds of noisy data augmentation. Moreover, we empirically show on several architectures and datasets that improving robustness against Gaussian noise, by using the new regularizer, can improve the overall robustness against 6 other types of attacks by two orders of magnitude. | This paper proposes a regularization method for achieving robustness to noisy inputs, with relatively less computation compared to standard data augmentation approaches. Specifically, the authors analyze the analytic expression of the loss on the noisy inputs, and using Jensen’s inequality, propose to minimize a surrogate loss over the expectation of noisy inputs. To minimize the loss over the expectation, the authors impose a regularization over the first moment of the network weights. The authors validate the model with the proposed regularization technique for its robustness against Gaussian attack and other types of attacks, whose results show that the model is robust.
Pros
- The general idea of the regularization that replaces the generation of noisy samples and optimization over it is conceptually appealing and seems practically useful.
- The derivation of the moment-based regularization makes sense.
- The proposed regularizer seems to be effective to a certain degree, on the sets of experiments done by the authors.
Cons
- Experimental validation seems highly inadequate due to the lack of baselines. Thus it is difficult to assess the degree of robustness the proposed model achieves. The authors should perform extensive evaluation against state-of-the-art techniques under multiple types of attacks, in order to demonstrate the effectiveness of the proposed method.
- While the authors emphasize the computational efficiency of the method, the authors do not report computational cost or actual runtime.
- The types of non-Gaussian attacks should be better described. Which ones use L-infinity attacks and which use L2 attacks?
- Figure 3 doesn’t seem like a very favorable result to the proposed model, since we are generally more concerned with adversarial examples generated with small perturbations, as large perturbations may change the input semantics.
In sum, while I like the overall idea and find the work novel and potentially practical, it is difficult to properly evaluate the work due to lack of comparison against state-of-the-art data augmentation methods for achieving robustness. Therefore I temporarily give this paper a weak reject, but may change the rating with more experimental results provided in the rebuttal. |
iclr_2020_ByljMaNKwB | In many real-world applications, we want to exploit multiple source datasets of similar tasks to learn a model for a different but related target dataset -e.g., recognizing characters of a new font using a set of different fonts. While most recent research has considered ad-hoc combination rules to address this problem, we extend previous work on domain discrepancy minimization to develop a finite-sample generalization bound, and accordingly propose a theoretically justified optimization procedure. The algorithm we develop, Domain AggRegation Network (DARN), is able to effectively adjust the weight of each source domain during training to ensure relevant domains are given more importance for adaptation. We evaluate the proposed method on real-world sentiment analysis, digit recognition and object recognition datasets and show that DARN can significantly outperform the state-of-the-art alternatives. | ###Summary###
This paper tackles multi-source domain adaptation by aggregating multiple source domains dynamically during the training phase. The observation is that in many real-world applications, we want to exploit multiple source datasets of similar tasks to learn a model for a different but related target dataset.
Firstly, the paper derives a multiple-source domain adaptation upper bound from a single-to-single domain adaptation generalization bound, based on the theoretical work of Cortes et al. (2019). The idea is similar to Zhao et al. (2019), which introduces a weighting parameter \alpha to combine the source domains together.
Secondly, based on the theoretical result, the paper proposes an algorithm to minimize the upper bound of the theoretical result. The upper bound can be simplified as the quartic form (Eq. 4) and can be optimized with the Lagrangian form. Since no closed-form expression for the optimal v can be derived, the authors propose to use binary search to find it.
Based on the theoretical results and the algorithm, the paper introduces the Domain AggRegation Network (DARN), which contains a base network for feature extraction, h_y to minimize the task loss, and h_d to evaluate the discrepancy between each source domain and the target domain. The loss is aggregated with the parameter \alpha.
Finally, the paper conducts experiments on the sentiment analysis benchmark Amazon Review and on digit datasets. The paper selects MDAN, DANN, and MDMN as the baselines. On the Amazon Review dataset, the performance of the proposed DARN model is comparable with the MDMN baseline. On the digit datasets, the model can outperform the baselines.
### Novelty ###
The theoretical results in this paper are extended from Cortes et al (2019) and Zhao et al (2018). Thus, the theoretical contribution of this paper is limited.
The algorithm proposed in this paper is interesting. However, the motivation of the proposed method is to minimize the upper bound, not the loss itself, i.e. L_T(h, f_T). Intuitively, when the upper bound of the loss is minimized, it will be beneficial to minimize the loss itself. But this is not guaranteed, as the upper bound contains other variables, such as the number of training samples and the model complexity. If the number of training samples and the model complexity (think about the parameters in deep models) are significantly large, the upper bound of the loss might also be very large.
As for the experimental results, the paper only provides results on sentiment analysis and digit datasets, which are small benchmarks. The selected baselines are not sufficient. The improvement over the baselines is also limited.
###Clarity###
Overall, the paper is well organized and logically clear. The images are well-presented and well-explained by the captions and the text.
The derivation of the algorithm in Sec 3.2 is logically clear and easy to follow.
###Pros###
1) The paper proposes a new theoretical upper-bound based on the prior works, the upper-bound and its derivation are interesting and heuristic to the domain adaptation research community.
2) The paper is applicable to many practical scenarios since the data from the real-world application is typically collected from multiple sources.
3) The paper is overall well-organized and well-written. The claims of the paper are verified by the experimental results.
###Cons###
1) The critical issue of this paper is that the algorithm is designed to minimize the upper bound. The idea is intuitive when the upper bound is small. However, the proposed upper bound in the paper involves other parameters, such as the model complexity and the number of training samples.
It's an intuitive idea to weight different source domains in multi-source domain adaptation. The paper derives the weights via the Lagrangian form to minimize the upper bound, while another trivial trick is to evaluate \alpha by the closeness between each source domain and the target domain.
2) The experimental results provided in this paper are weak. In the abstract and introduction, the paper motivates the multi-source domain adaptation (MSDA) problem by arguing that MSDA has a lot of real applications. But the paper only provides empirical results on sentiment analysis and digit recognition. Besides, the results on sentiment analysis are only comparable with the baselines.
It would also be interesting to see how the proposed method performs on large-scale datasets such as DomainNet and the Office-Home dataset:
DomainNet: Moment Matching for Multi-Source Domain Adaptation, ICCV 2019. http://ai.bu.edu/DomainNet/
Office-Home: Deep Hashing Network for Unsupervised Domain Adaptation, CVPR 2017. http://hemanthdv.org/OfficeHome-Dataset/
3) The novelty of this paper is incremental as the theoretical results are extended from Cortes et al (2019) and Zhao et al (2018).
Based on the summary, pros, and cons, my current rating is weak reject. I would like to discuss the final rating with the other reviewers and ACs.
To improve the rating, the author should explain the following questions:
1). Why would minimizing the upper bound in this scenario help to minimize the loss on the target domain, when the upper bound can be substantially large? How about just evaluating \alpha by the closeness of each source domain to the target domain?
2). In the introduction, the paper motivates the multi-source domain adaptation (MSDA) problem by arguing that MSDA has a lot of real applications, while the experiments are only performed on sentiment analysis and digit recognition. How about evaluating the proposed method on real image recognition benchmarks such as DomainNet or Office-Home? |
iclr_2020_HygbQaNYwr | Adversarial training, in which a network is trained on both adversarial and clean examples, is one of the most trusted defense methods against adversarial attacks. However, there are three major practical difficulties in implementing and deploying this method -expensive in terms of extra memory and computation costs; accuracy trade-off between clean and adversarial examples; and lack of diversity of adversarial perturbations. Classical adversarial training uses fixed, precomputed perturbations in adversarial examples (input space). In contrast, we introduce dynamic adversarial perturbations into the parameter space of the network, by adding perturbation biases to the fully connected layers of deep convolutional neural network. During training, using only clean images, the perturbation biases are updated in the Fast Gradient Sign Direction to automatically create and store adversarial perturbations by recycling the gradient information computed. The network learns and adjusts itself automatically to these learned adversarial perturbations. Thus, we can achieve adversarial training with negligible cost compared to requiring a training set of adversarial example images. In addition, if combined with classical adversarial training, our perturbation biases can alleviate accuracy trade-off difficulties, and diversify adversarial perturbations. | This paper proposes perturbation biases as a counter-measure against adversarial perturbations. The perturbation biases are additional bias terms that are trained by a variant of gradient ascent. The method imposes less computational costs compared to most adversarial training algorithms. In their experimental evaluation, the algorithm achieved higher accuracy on both clean and adversarial examples.
This paper should be rejected because the proposed method is not well justified either by theory or practice. Experiments are weak and do not support the effectiveness of the proposed method.
Major comments:
Since the evaluations of defense algorithms are often misleading [1], thorough experiments or theoretical certification are required to confirm the effectiveness of defense methods. However, the experiment configuration in this paper is not satisfactory for demonstrating the robustness of the defended networks. The following is a list of concerns.
1) Experiments are limited to small datasets and networks. Since some phenomena only appear in larger datasets [2], there is a concern about whether the proposed method also works on other datasets.
2) The attack algorithm used for the evaluation is weak. We can confirm this by observing the label leakage [2] in the experimental results. It is hard to judge which defenses are most effective, even within the tested datasets and models.
3) The "adversarial training" baseline used in the experiment is weird. Adversarial training typically generates adversarial examples during the process of the neural networks' optimization instead of using precomputed adversarial examples. Baseline methods should be stronger, for example, adversarial training with PGD [3]; a minimal sketch of the PGD loop is given below.
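For reference, a minimal sketch of the PGD attack loop in the sense of Madry et al. [3] (this is my own illustration, not code from the paper; model, loss_fn, x, y and the hyperparameter values are placeholders, and during adversarial training the attack would be re-run on every minibatch):

import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    # Random start inside the L-infinity ball of radius eps around x (inputs assumed in [0, 1]).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step in the sign of the gradient, then project back onto the ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()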
[1] Athalye et al. "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples." ICML 2018
[2] Kurakin et al. "ADVERSARIAL MACHINE LEARNING AT SCALE." ICLR 2017
[3] Madry et al. "Towards Deep Learning Models Resistant to Adversarial Attacks." ICLR 2018 |
iclr_2020_HJx_d34YDB | Predicting the emotional impact of videos using machine learning is a challenging task. Feature extraction, multi-modal fusion and temporal context fusion are crucial stages for predicting valence and arousal values in the emotional impact, but have not been successfully exploited. In this paper, we proposed a comprehensive framework with innovative designs of model structure and multi-modal fusion strategy. We select the most suitable modalities for valence and arousal tasks respectively and each modal feature is extracted using the modality-specific pretrained deep model on large generic dataset. Two-time-scale structures, one for the intra-clip and the other for the inter-clip, are proposed to capture the temporal dependency of video content and emotional states. To combine the complementary information from multiple modalities, an effective and efficient residual-based progressive training strategy is proposed. Each modality is step-wisely combined into the multi-modal model, responsible for completing the missing parts of features. With all those above, our proposed prediction framework achieves better performance with a large margin compared to the state-of-the-art. | This paper tackles the problem of affective impact prediction from multimodal sequences. The authors achieve state-of-the-art performance by using (i) a two-time-scale temporal feature extractor, (ii) progressive training strategy for multi-modal feature fusion, and (iii) pretraining. They divide a long video sequence evenly into several clips. The idea of applying one LSTM for intra-clips (short-time) temporal feature extraction and another LSTM for inter-clips (long-time) temporal feature extraction, resulting in two-time-scale, looks reasonable choice but not novel. The proposed progressive training strategy is mainly used for the modality-specific LSTMs training, which is used during the intra-clip short-time modeling phrase. It seems working well in practice. However, from the explanation, it's not clear why this strategy is good for the *complementary* fusion. Each modality LSTM is trained sequentially with features extracted from other fixed modality LSTMs. Also, the authors seem not explaining why they set the training order for LSTMs at this stage to be the descending order of their performance in the previous stage. In the experiments section, two weak baseline models and several previous state-of-the-art models are compared. However, enough ablation studies on the two-time-scale structure and proposed training strategy are not provided to demonstrate that the proposed method does provide complementary multi-modal feature fusion, which is claimed as a contribution.
Overall, although it achieves state-of-the-art performance on the task, none of the claimed contributions is novel or significant enough. It is a combination of existing ideas (but gives good performance). The proposed training procedure is also too weak to be considered a scientific contribution. |
iclr_2020_SJl1o2NFwS | The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system. In particular, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles' movement in the space using the Lie-Trotter splitting scheme and the Euler's method. Given this ODE's perspective, the rich literature of numerical analysis can be brought to guide us in designing effective structures beyond the Transformer. As an example, we propose to replace the Lie-Trotter splitting scheme by the Strang-Marchuk splitting scheme, a scheme that is more commonly used and with much lower local truncation errors. The Strang-Marchuk splitting scheme suggests that the self-attention and position-wise feed-forward network (FFN) sub-layers should not be treated equally. Instead, in each layer, two position-wise FFN sub-layers should be used, and the self-attention sub-layer is placed in between. This leads to a brand new architecture. Such an FFN-attention-FFN layer is "Macaron-like", and thus we call the network with this new architecture the Macaron Net. Through extensive experiments, we show that the Macaron Net is superior to the Transformer on both supervised and unsupervised learning tasks. The reproducible code can be found on http://anonymized | In this work, the authors show that the sequence of self-attention and feed-forward layers within a Transformer can be interpreted as an approximate numerical solution to a set of coupled ODEs. Based on this insight, the authors propose to replace the first-order Lie-Trotter splitting scheme by the more accurate, second-order Strang splitting scheme. They then present experimental results that indicate an improved performance of their Macaron Net compared to the Transformer and argue that this is due to the former being a more accurate numerical solution to the underlying set of ODEs.
The authors highlight an interesting connection between the Transformer architecture and ODEs. In particular, they derive a set of ODEs that is solved numerically by the Transformer and borrow from the body of literature on numerical ODE solvers to improve the architecture. I find that this is a very elegant and promising approach for finding better architectures.
However, I also identified two major and a couple of minor shortcomings of the paper that are explained in detail below. Based on these shortcomings, I recommend rejecting the paper but I would be willing to increase the score if these points were addressed in sufficient detail.
Major points:
1) Replacing the first-order operator splitting scheme by a second-order scheme only guarantees a lower overall truncation error if the split ODEs are solved with sufficiently high accuracy. In particular, the overall accuracy of the numerical solution to the original ODE depends on the accuracy of the operator splitting and the accuracy of the integration scheme used to solve the split ODEs (e.g. Euler’s method). The authors improve the operator splitting, i.e. they make it second-order, but they keep Euler’s method to integrate the individual ODEs. Because of that, they actually do not get rid of the lowest-order error term of the overall scheme and therefore do not obtain a more accurate ODE solver. I think this is a crucial point that invalidates the authors’ claim that the Macaron Net employs a higher-order integration scheme. As far as I am aware, this is not commented on in the paper at all. To address this shortcoming, the authors could replace Euler’s method by a second-order integrator.
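To spell out the order counting (standard numerical-analysis facts, not claims about the paper's implementation): for dx/dt = F(x) + G(x) with exact sub-flows \Phi_F^{\gamma} and \Phi_G^{\gamma}, one step of size \gamma satisfies
\[
\Phi_F^{\gamma}\circ\Phi_G^{\gamma}(x_n) = x(t_n+\gamma) + O(\gamma^2) \quad\text{(Lie-Trotter)},
\qquad
\Phi_F^{\gamma/2}\circ\Phi_G^{\gamma}\circ\Phi_F^{\gamma/2}(x_n) = x(t_n+\gamma) + O(\gamma^3) \quad\text{(Strang)},
\]
but replacing each exact sub-flow by a forward Euler step y \mapsto y + \gamma f(y) re-introduces a local error of O(\gamma^2) per sub-step, so the composed scheme has local truncation error O(\gamma^2) and remains globally first order in both cases.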
2) The experiments considered in this paper are interesting and show competitive performance but, in my opinion, they do not sufficiently support the claim that the Macaron Net yields a more accurate solution to the underlying set of ODEs compared to a Transformer. For a more convincing support of this claim, the authors could consider a toy problem, i.e. a simple set of ODEs with known analytical solution, and actually show that the Macaron Net is more accurate. The accuracy of a numerical ODE solver is commonly assessed by plotting the absolute difference between the exact solution (or a high-resolution numerical approximation to it) and the numerical solution vs the timestep (here \gamma). I suspect that such an analysis would support my previous comment and show that the proposed new architecture is not more accurate ODE solver than the original one.
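A possible toy setup along these lines (my own sketch, with an arbitrary linear ODE chosen only for illustration): split dx/dt = -x into F(x) = G(x) = -x/2, integrate to a fixed final time with both splittings using Euler sub-steps, and check how the error scales with \gamma. If halving \gamma roughly halves both errors, neither composition is acting as a higher-order solver.

import numpy as np

F = lambda x: -0.5 * x
G = lambda x: -0.5 * x            # F + G recovers dx/dt = -x, exact solution x0 * exp(-t)

def lie_trotter_step(x, gamma):
    x = x + gamma * F(x)          # Euler step for the F sub-problem
    return x + gamma * G(x)       # Euler step for the G sub-problem

def strang_step(x, gamma):
    x = x + 0.5 * gamma * F(x)    # half Euler step for F
    x = x + gamma * G(x)          # full Euler step for G
    return x + 0.5 * gamma * F(x) # half Euler step for F

T, x0 = 1.0, 1.0
for gamma in [0.1, 0.05, 0.025, 0.0125]:
    x_lt, x_st = x0, x0
    for _ in range(int(round(T / gamma))):
        x_lt = lie_trotter_step(x_lt, gamma)
        x_st = strang_step(x_st, gamma)
    exact = x0 * np.exp(-T)
    print(gamma, abs(x_lt - exact), abs(x_st - exact))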
Minor points and questions:
i) Eqs. (17-19) suggest that you apply two different FFN layers (doubling the number of parameters) instead of applying the same FFN layer twice. You comment on this in Sec. 4.1 when you say ‘..., we set the dimensionality of the inner-layer of the two FFN sub-layers in the Macaron layers to two times of the dimensionality of the hidden states’. It is not clear to me why consistency with Strang splitting requires two different layers rather than applying the same FFN layer twice. Is the reason for having a separate, trainable layer to account for the explicit time dependence of G in Eq. (16)? I think that this is a very important point that should be clarified.
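For reference, the Strang step in question has the form (standard splitting notation; the mapping onto the paper's FFN and attention operators is my reading of Eqs. (17-19), not something stated explicitly here):
\[
x_{n+1} = \Phi_A^{\gamma/2}\circ\Phi_B^{\gamma}\circ\Phi_A^{\gamma/2}(x_n).
\]
If A is autonomous, the two half-steps apply the same map (suggesting tied FFN weights); if A depends explicitly on t, they act over [t_n, t_n+\gamma/2] and [t_n+\gamma/2, t_n+\gamma] and hence differ, which would be one possible justification for two separately parameterized FFN sub-layers.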
ii) I think that this type of system is usually referred to as ‘dynamical system’ and not ‘dynamic system’. Please check that and, if applicable, update the title.
iii) The authors say that Eq. (5) is a 'convection-diffusion equation’. As far as I am aware, the diffusion equation is a partial differential equation (PDE). Perhaps there is a different notion 'diffusion equation’ in ODE theory. If that’s the case, could the authors please clarify this point to avoid confusion, e.g. by adding a suitable reference in which this type of ODE is classified as a convection-diffusion equation?
iv) In Sec. 2 (2nd paragraph), you cite Vaswani et al. (2017) but in that work the quantity under the square-root in the denominator of Attention(Q, K, V) is actually d_k, the dimension of the key, and not d_model.
v) Figure 1 is a very vague illustration of the connection to ODEs and provides almost no explanation in the caption. I don’t think there is much value in having this figure there.
vi) There are a couple of mistakes in the paper (grammar and expressions) that should be fixed. For example, ‘the Euler’s method’ instead of ‘Euler’s method’, ‘movement in the space’ instead of ‘movement in space’, ‘dynamic system’ instead of ‘dynamical system’, ‘project parameter matrices’ instead of ‘parameter matrices’ or ‘projections’, ‘specially’ instead of ‘specifically’, etc. Please take a look at the relevant sections in the paper and revise them accordingly.
vii) You explain multiple times why the proposed architecture is called a ‘Macaron Net’ (Abstract, Sec 1, Sec. 3). To avoid repetition, I would only explain it once. |
iclr_2020_BJgyn1BFwS | We investigate global adversarial robustness guarantees for machine learning models. Specifically, given a trained model we consider the problem of computing the probability that its prediction at any point sampled from the (unknown) input distribution is susceptible to adversarial attacks. Assuming continuity of the model, we prove measurability for a selection of local robustness properties used in the literature. We then show how concentration inequalities can be employed to compute global robustness with estimation error upper-bounded by , for any > 0 selected a priori. We utilise the methods to provide statistically sound analysis of the robustness/accuracy trade-off for a variety of neural networks architectures and training methods on MNIST, Fashion-MNIST and CIFAR. We empirically observe that robustness and accuracy tend to be negatively correlated for networks trained via stochastic gradient descent and with iterative pruning techniques, while a positive trend is observed between them in Bayesian settings. | This paper seeks to analyze the global robustness of neural networks, a concept defined in the paper. The authors show using concentration inequalities that the empirical local robustness approximates the global robustness. The authors investigate various other issues in the robustness literature, including the robustness/accuracy tradeoff, whether iterative pruning increases robustness, and the robustness of Bayesian networks.
I would vote for rejecting this paper for two key reasons. First, the notion of global robustness is not well-motivated (why do we want to compute this metric? What does it tell us that local robustness does not?). Second, the paper tries to do too many different things, and as a result does not give enough attention to any particular topic.
First, I believe it is up to the authors to motivate their study of global robustness further. While I acknowledge that a few prior works exists along these lines, I do not feel that this work provides much new insight into why global robustness is interesting to examine.
The authors go on to prove results showing that an empirical estimator of the local robustness will converge to the global robustness. The bounds require that the dataset size scales with eps^-2, where eps is the error. This is not terrible but also not great; for example, achieving 1% error requires a dataset size of 10^4 (realistically, even larger datasets would be required to achieve results with high probability).
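For concreteness, the scaling I have in mind is the usual Hoeffding-type bound (a textbook statement, not the paper's exact theorem): for i.i.d. bounded indicator variables,
\[
n \;\ge\; \frac{\ln(2/\delta)}{2\epsilon^2}
\]
samples suffice for the empirical estimate to be within \epsilon of the global robustness with probability at least 1-\delta; with \epsilon = \delta = 0.01 this already gives n \approx 2.6 \times 10^4.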
Next, I would suggest that the authors avoid using the word “guarantees” if they are estimating empirical local robustness in an approximate (rather than exact) manner. Guarantees implies strict results, but the authors use a weak attack (FGSM) to approximate empirical local robustness. The results from FGSM could be far from optimal; the authors could use a stronger attack (e.g. PGD) in addition to changing the wording, or they could find provable guarantees using alternate methods.
Lastly, the authors try to tackle 3 extra questions beyond global robustness toward the end of the paper, and the last two questions are not properly fleshed out.
I like section 4.2, where the authors empirically show that networks that have better hyperparameters (for regular accuracy) tend to be less robust. This is a confirmation of a previously studied phenomenon in the literature. Ideally, I would also appreciate it if the authors found the line of best fit to the dataset in addition to the plots provided. I would like a clarification on whether any of these networks were trained to be robust, although it appears that they were all trained normally. I would also like to see plot 2c (for the standard case of robustness of C(x_tilde) = C(x)), but for MNIST and CIFAR10 as well. I feel that the last-layer representation metric the authors analyze (f(x) is close to f(x_tilde)) could be misleading, as robustness on the last layer does not necessarily imply standard adversarial robustness.
Section 4.3 explores iterative pruning, but that seems fairly unrelated to the rest of the paper. Finally, Section 4.4 tries to show the opposite trend for Bayesian Neural Networks, but unfortunately the results for such networks do not yet scale beyond MNIST.
Additional Feedback:
- Why did you use R^emp and D^emp as opposed to just R^emp(g) and R^emp(g_bar)?
- In Figure 1, what is the dataset size |S|?
- In the last sentence of Section 4.3, I didn’t understand what you meant about the relationship between weight pruning and network regularization. Do you mean that weight regularization has no effect on robustness, just like iterative weight pruning? |
iclr_2020_rJxFpp4Fvr | The performance of deep neural networks is often attributed to their automated, taskrelated feature construction. It remains an open question, though, why this leads to solutions with good generalization, even in cases where the number of parameters is larger than the number of samples. Back in the 90s, Hochreiter and Schmidhuber observed that flatness of the loss surface around a local minimum correlates with low generalization error. For several flatness measures, this correlation has been empirically validated. However, it has recently been shown that existing measures of flatness cannot theoretically be related to generalization: if a network uses ReLU activations, the network function can be reparameterized without changing its output in such a way that flatness is changed almost arbitrarily. This paper proposes a natural modification of existing flatness measures that results in invariance to reparameterization. The proposed measures imply a robustness of the network to changes in the input and the hidden layers. Connecting this feature robustness to generalization leads to a generalized definition of the representativeness of data . With this, the generalization error of a model trained on representative data can be bounded by its feature robustness which depends on our novel flatness measure. | This paper describes a connection between flatness of minima and generalization in deep neural networks. The authors define a concept called "feature-robustness" and show that it is related to flatness. This is derived through a straightforward observation that perturbations in feature space can be recast as perturbations of the model in parameter space. This allows the authors to define a (layerwise) flatness measure for minima in deep networks (this layerwise flatness measure is also invariant to rescalings of the layers in neural networks with positively homogenous activations). The authors combine their notion of feature robustness with epsilon representativeness of a function to connect flatness to generalization. They present a few empirical evaluations on CIFAR10 and MNIST.
I believe this paper is able to once again confirm the relationship between flatness and generalization in an empirical manner with their layerwise measure of flatness. I am not so convinced about the theoretical justification that they claim to provide and thus do not recommend acceptance.
Theory - The key theorem relating generalization and flatness is Theorem 10, which says that if a compositional model is feature robust and the output of the first component is an epsilon-representative for the second component, then the compositional model will generalize. While this is interesting, it is not clear to me that this guarantees generalization for deep neural networks. This result only talks about feature robustness and representativeness for a particular layer. If a deep network has many layers, will the feature robustness of layers closer to the input guarantee feature robustness at deeper layers? That might require a further unit operator norm constraint on the layer operator, which is a restriction on the types of weights that can be used. If a sample is epsilon representative at one layer, what is required for the next layer to be epsilon representative for the rest of the deep network? This seems to be a missing step in relating flatness/feature robustness of a layer to the generalization of the whole network.
Another idea that I think arises from Theorem 10 is that the flatness of loss landscapes is important when you have learning problems where the hypothesis class is compositional. While flatness is only spoken of in the case of deep neural networks, can we identify the same phenomenon in other problems? I would encourage the authors to try and identify another model in which the flatness-generalization relationship exists (even empirical evidence would suffice for now). This would strengthen the case for studying flatness and biasing optimization towards flatter solutions in the case of deep networks.
Experiments - This section seems to be pretty rudimentary; I would like to see more results on different kinds of network architectures (VGG? Inception? AlexNet?), more datasets (KMNIST? Fashion MNIST? SVHN?), and possibly more repetitions. At one point the authors mention that they declare a minimum has been reached if the training loss is < 0.07. At least on CIFAR10 and MNIST it is possible to achieve training loss < 1e-4, so I am not sure if the networks that the authors are testing are minima at all (it is important for them to be minima since the flatness measure is only defined at minima). Can the authors also identify more situations other than large batch vs small batch training that would lead them to obtain flatter/sharper minima?
The authors also claim that measuring generalization using test error is flawed, but do not provide details about their method of measuring generalization. I would want to see these details and a more thorough discussion of why measuring generalization through test error is flawed.
While this is an interesting paper, I do not believe it is ready for acceptance at ICLR 2020. |
iclr_2020_rygG4AVFvH | Achieving faster execution with shorter compilation time can foster further diversity and innovation in neural networks. However, the current paradigm of executing neural networks either relies on hand-optimized libraries, traditional compilation heuristics, or very recently genetic algorithms and other stochastic methods. These methods suffer from frequent costly hardware measurements rendering them not only too time consuming but also suboptimal. As such, we devise a solution that can learn to quickly adapt to a previously unseen design space for code optimization, both accelerating the search and improving the output performance. This solution dubbed CHAMELEON leverages reinforcement learning whose solution takes fewer steps to converge, and develops an adaptive sampling algorithm that not only focuses on the costly samples (real hardware measurements) on representative points but also uses a domain-knowledge inspired logic to improve the samples itself. Experimentation with real hardware shows that CHAMELEON provides 4.45×speed up in optimization time over AutoTVM, while also improving inference time of the modern deep networks by 5.6%. | This paper proposes an optimizing compiler for DNN's based on adaptive sampling and reinforcement learning, to drive the search of optimal code in order to reduce compilation time as well as potentially improve the efficiency of the code produced. In particular, the paper proposes to use PPO to optimize a code optimization "search" policy, and then use K-mean clustering over a set of different proposed compilation proposals, from which to perform adaptive sampling to reduce compilation time while still keeping a high diversity of the proposed solution pools during exploration. At the same time the authors claim that using RL will learn a better search strategy compared to random search - such as simulated annealing which is used by competing methods - thus producing faster and better solutions.
The paper shows results of up to 4x speedup in compilation time (autotuning time) while obtaining slightly better or similar efficiency of the generated code (in terms of execution time). This is well-written, extensive research with good results. The authors mention it is (or will be) integrated into the open-source code of TVM. However, I could find no mention in the paper of whether the code will be released with this publication, and I would like to ask the authors to clarify their code release strategy and timing.
My other question pertains to whether or not compilation time is a key metric to target. It is important to some extent, but I would say that, aside from exponential or super-polynomial behaviour of auto-tuning algorithms, a process of multiple hours or days to create the best optimized code for a certain network / hardware platform might not be such a big hurdle for a community already used to spending multiple days, weeks, or months training the same models. I believe that focusing on the efficiency of the optimized code produced would probably be a better metric of success. |
iclr_2020_HyevIJStwH | As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have tried to explore from many aspects why they generalize well. In this paper, we provide a novel perspective on these issues using the gradient signal to noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that larger GSNR during the training process leads to better generalization performance. Moreover, we show that, different from that of shallow models (e.g. logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs' remarkable generalization ability. | In this work, the authors suggest a new point of view on generalization through the lens of the distribution of the per-sample gradients. The authors consider the variance and mean of the per-sample gradients for each parameter of the model and define for each parameter the Gradient Signal to Noise Ratio (GSNR). The GSNR of a parameter is the ratio between the squared mean of the gradient per parameter per sample (computed over the samples) and the variance of the gradient per parameter per sample (also computed over the samples). The GSNR is promising as a measure of generalization, and the authors provide a nice leading-order derivation of the GSNR as a proxy for the generalization gap in the model. After the derivation, experimental results on MNIST are presented and suggest that empirically there is a relation between the generalization gap of neural networks trained by gradient descent and the GSNR quantity given in the paper. Next the authors analyze the GSNR of DNNs as opposed to shallow models or other learning techniques and observe that the GSNR differs when using random labels (lower GSNR) as compared with true labels and exhibits different behavior along training for DNNs and gradient descent.
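In symbols, as I read the paper's definition: for each parameter \theta_j with per-sample gradient g_j(z) = \partial L(z, \theta)/\partial \theta_j,
\[
\mathrm{GSNR}(\theta_j) = \frac{\big(\mathbb{E}_{z\sim\mathcal{D}}[g_j(z)]\big)^2}{\mathrm{Var}_{z\sim\mathcal{D}}[g_j(z)]},
\]
with the mean and variance taken over the data distribution (in practice, over the training samples).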
Assumption 2.3.1 is not necessarily the most realistic in the current training paradigm since multi-epoch training indeed separates the training and testing per-sample gradient distributions significantly.
Overall the paper gives a fresh (as far as I know) and nice idea on the generalization of neural networks. While the derivation requires quite a few stringent assumptions (the leading order analysis, the assumption on the step size, assumption 2.3.1 on the test and train gradient distributions), the experiments do suggest that the theory is valid to an extent, especially during the early parts of training. In contrast, the presentation of the paper distracts from the work and needs additional cleaning up. Also, the experimental section 2.4 would benefit from additional empirical analysis on other datasets and additional experiments, as well as a more thorough explanation of the experiments. The writing of the paper needs additional proofreading, as currently it is easy to spot typos and grammar errors throughout the paper. I currently vote weak reject, for solid content and not-quite-ready presentation.
Additional experiments and careful proofreading should definitely enhance the paper and bring it to publication quality, so I am willing to change my decision if the authors improve the writing and the overall presentation. I like the idea presented in the paper and encourage the authors to resubmit a tidier draft.
smaller presentation issues and typos/grammar issues:
Figure 1: name X and Y labels with more meaningful names instead of RHS,LHS
Figure 3(c): add legend
line 64: consists >>> which consists
line 73: we refer R(z) >>> we refer to R(z)
line 79 futher >>> further overall sentence needs more work
line 92 distribution becomes >>> distributions become
line 100: using common notation summing over n samples, the sum should possibly start from i=1 to n instead of i = 0 to n
line 132 include >>> includes (also possibly rephrase sentence)
line 136 vefity >>> verify
line 137: M number >>> M
line 157 the data points closely distributed >>> the data points are closely distributed
line 162 thorough analytically >>> thorough analytical
-------- Update After Rebuttal --------
I have read the comments the authors made and reviewed the paper's revision.
The presentation has improved a lot (Even though there is still room to go).
Nonetheless at this time I choose to update my score to a weak accept, as I think the authors bring a fresh (as far as I know) and interesting idea with regards to empirical generalization. |
iclr_2020_H1lKNp4Fvr | Stereo matching is one of the important basic tasks in the computer vision field. In recent years, stereo matching algorithms based on deep learning have achieved excellent performance and become the mainstream research direction. Existing algorithms generally use deep convolutional neural networks (DCNNs) to extract more abstract semantic information, but we believe that the detailed information of the spatial structure is more important for stereo matching tasks. Based on this point of view, this paper proposes a shallow feature extraction network with a large receptive field. The network consists of three parts: a primary feature extraction module, an atrous spatial pyramid pooling (ASPP) module and a feature fusion module. The primary feature extraction network contains only three convolution layers. This network utilizes the basic feature extraction ability of the shallow network to extract and retain the detailed information of the spatial structure. In this paper, the dilated convolution and atrous spatial pyramid pooling (ASPP) module are introduced to increase the size of receptive field. In addition, a feature fusion module is designed, which integrates the feature maps with multiscale receptive fields and mutually complements the feature information of different scales. We replaced the feature extraction part of the existing stereo matching algorithms with our shallow feature extraction network, and achieved state-of-the-art performance on the KITTI 2015 dataset. Compared with the reference network, the number of parameters is reduced by 42%, and the matching accuracy is improved by 1.9%. | The paper at hand argues that shallow feature extraction networks should be favored for the computer vision task of stereo matching, rather than the commonly used deep ResNet backbones. To that end, a model is proposed consisting of three convolutional feature extraction layers only. As this makes it impossible to capture global context, dilated convolutions with varying dilations are applied to the extracted features in parallel and concatenated. Finally, a fusion module implemented via channel attention is applied.
I found this paper lacking in terms of contributions. While the motivation for retaining detailed feature information makes sense for the task in question, it is not clear why it requires replacing the ResNet feature extraction with a very shallow network; an alternative would be to add skip-connections originating from lower levels. Both dilated convolutions and the fusion module are not novel. The paper addresses the need for a fusion module in great detail. However, the insight that a large dilation on top of features computed from a limited receptive field will result in a non-continuous receptive field is a rather trivial one. Hence I don't see the need for section 4.3.1 and 4.3.2, as well as the somewhat complicated deduction in appendix A. The individual modules are ablated, but what about the number of layers for feature extraction? Why settle for 3 rather than 1, 2, or 4?
The paper does compare against other methods for stereo matching, but I was wondering why the KITTI leaderboard excerpt does not include the better results from http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo ? The result tables are also hard to parse as bold numbers do not correspond to best performance. The conclusion claims that the proposed shallow architecture exhibits "lower training difficulty" but I did not see any support for this in the experiments.
Overall, I think that this paper should be rejected on the basis of insufficient contributions. |
iclr_2020_BJxYUaVtPB | We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data. Challenges arise in practice. As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios. Worse yet, we have no prior knowledge on the underlying model for a given scenario. These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances. To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common. This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand. Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms. It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets. Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well. | This paper provides a technique to solve match prediction problem -- the problem of estimating likelihood of preference between a pair of M-sized sets. The paper replaces the previously proposed conventional statistical models with a deep learning architecture and achieve superior performance than some of the baselines. The experiments show the efficacy of the proposed methods.
I have some major concerns with this paper. These are described below:
1. The paper's presentation is not very clear. The paper contains many imprecise statements, and the problem is not set up very well.
(a) The abstract and the introduction contains vague statements with no proper description of what the technique is about.
(b) In introduction, the authors start with discussing 1-sized pairwise comparisons, and then suddenly are discussing in-group items effects at the end of the third paragraph, so that means they are talking about M-sized comparisons with M > 1. This adds to the confusion.
(c) The same thing holds true for the first paragraph of "Main Contribution". Just because the authors use deep learning frameworks does not mean they "can infer the underlying models accurately." I am not sure what the authors wanted to convey with this statement.
(d) The reason I am being very stringent about the imprecision of the statements is mainly the first statement in the Motivation section. It says, "Our decision to incorporate two modules R and P into our architecture has been inspired by SOME state-of-the-art algorithms developed under CERTAIN statistical models, which have been shown
therein to be OPTIMAL." Please avoid using some, certain, optimal when they are not defined properly in the paper yet.
2. The paper mentions multiple times that it does not use statistical models tailored to a dataset or application but instead uses deep neural networks. In my opinion, a NN is just another form of statistical model that captures statistical patterns of comparisons and in-group interactions. The authors are not following the same modeling assumptions as others, but have in some sense created their own. The authors may want to rephrase those statements.
3. The evaluation metric for the datasets that the authors consider is the prediction accuracy. I am not sure why the authors evaluate on cross entropy as well in Table 1, since both are closely related (CE is a consistent surrogate of accuracy). Can you please explain? However, I liked that they compared the methods with other metrics in Table 2.
4. In the problem setup, please state explicitly that M > 1. For M = 1, the problem is similar to [1] and many solutions have been proposed for it (see the illustrative model after these comments).
5. I am not sure how the R and P modules capture what the authors want them to capture. I would suggest that the authors explain this when they first discuss the R and P modules.
6. In equation 7, it is not clear how the functions R and P are defined. Are they similar to what is described in equation 5?
7. The training procedure contains the standard details; however, it is not clear to me how the modules R, P, and G interact during training.
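To make the M = 1 remark in point 4 concrete (this is my own illustration, not a model taken from the paper): a standard parametric baseline is the Bradley-Terry model, $P(i \succ j) = e^{\theta_i} / (e^{\theta_i} + e^{\theta_j})$, and a common sum-of-scores extension to group comparisons is $P(A \succ B) = \exp(\sum_{i \in A} \theta_i) / (\exp(\sum_{i \in A} \theta_i) + \exp(\sum_{j \in B} \theta_j))$, where $\theta_i$ is a latent score of item $i$. As I understand it, the proposed R and P modules presumably replace the plain sum and the logistic comparison with learned counterparts; stating this explicitly would make the architecture much easier to grasp.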
Overall, I believe the paper has a decent idea and contains satisfactory experimental results; however, the presentation of the paper is very weak at this moment.
[1] Joachims, Thorsten. "Optimizing search engines using clickthrough data." Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2002.
---- After Rebuttal ---
I thank the authors for providing responses to my questions and making edits to the paper. My understanding of the R and G modules has become better; however, I am still not totally convinced. Also, the presentation of the paper could be further improved. Therefore, I would keep the same score.
iclr_2020_Bke13pVKPS | As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have lead to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and result in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML. | The authors present a framework to perform meta-learning on the loss used for
training. They introduce the Baikal loss, obtained using the MNIST dataset, and
BaikalCMA, where the coefficients have been further tuned. The evaluation of these
loss functions is performed on MNIST and CIFAR-10, and according to the results,
networks trained with them converge faster, reach lower test error, and need fewer
samples to obtain results similar to the cross-entropy loss.
The claims are clearly stated and the framework is detailed, and the experiments
cover all the potential benefits of the Baikal loss. However, it seems that some
potentially critical points have been omitted. The cross-entropy loss is well
known to be beneficial in datasets with severe class imbalance. The two datasets
used for evaluation are perfectly balanced; it might be beneficial to see how the
Baikal loss performs in the unbalanced case.
I have a couple of concerns about the method. First, about step "(1) loss
function discovery": the initial population starts with trees of depth at most
2, and the final solution (Baikal) has depth either 2 or 3 (depending on which
definition of depth is chosen). It is unclear whether the genetic optimization is
superior to simply choosing random loss functions. I think it would be relevant
to add a figure that shows how the fitness of the leader of each generation
evolves over time.
The second step, "(2) coefficient optimization", while objectively generating a
loss function that was superior on the metrics evaluated, raised some
questions. In equation (2), the factor "1.5352" seems to be equivalent to adding
a constant to the loss, which should not impact optimization. Also, the factor
"2.7279" seems to be equivalent to a change in learning rate. This may be an
indication that the learning rate search was not done thoroughly. It would be
beneficial to clarify when the learning rate search happens: a) for each individual
of the population during step (1), b) before performing CMA, c) after CMA. Also:
was a learning rate search performed on the network trained with cross-entropy? It
was not entirely clear from the experiment details in Appendix A.2.1.
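To spell out why I believe this (using generic placeholders $c_1, c_2, c_3$ rather
than the exact form of equation (2), which I am paraphrasing from memory): if the
per-sample loss has the shape
$$\ell_i = c_3\big(\log(c_2 y_i) + c_1 x_i / y_i\big) = c_3 \log y_i + c_1 c_3\, x_i / y_i + c_3 \log c_2,$$
then the $c_3 \log c_2$ term is a constant with zero gradient with respect to the
network parameters, and the overall factor $c_3$ rescales every gradient, which
under plain SGD is indistinguishable from multiplying the learning rate by $c_3$
(with adaptive optimizers the equivalence is only approximate).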
About the Baikal loss itself, I fear that it could produce models that have very
poor calibration; it might be nice to evaluate that (even if it is only in
the appendix).
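A simple check that would convince me is the expected calibration error on the
test set; here is a minimal sketch of how I would compute it (my own code, not
taken from the paper, assuming softmax probabilities and integer labels as NumPy
arrays):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) softmax outputs; labels: (N,) integer targets."""
    conf = probs.max(axis=1)                  # predicted confidence
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():                        # |accuracy - confidence| weighted by bin size
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```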
The paper does a great job at presenting the problem and its applications, and
proposes a framework that generated a loss that can transfer to other datasets
without any tuning required. However, I think it lacks a more thorough
evaluation and description of the dynamics observed during the genetic
evolution, and of the performance of the Baikal loss on other datasets (my quick
experiments with it on ImageNet diverged, and I did not have the time necessary to
tune the hyper-parameters).
Minor remarks:
There might be a slight omission in section 3.1: according to Figure 1, exp(x)
is one of the potential unary operators explored by the GLO framework. However,
it is not present in the list of operators. Could you clarify this?
To the best of my knowledge, in the machine learning literature, it seems that
the letter x is used to denote the prediction and y for the ground truth. The
fact that this paper used the opposite convention confused me the first time I
read it. |
iclr_2020_HkxdQkSYDB | Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents by their relation representations. Latent features produced by convolutional layers from gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios. | The paper addresses the problem of coordination in the multi-agent reinforcement learning setting. It proposes the value function factorization similar to independent Q-learning conditioning on the output of the graph convolutional neural network, where the graph topology is based on the agents’ nearest neighbours. The paper is interesting and has some great ideas, for example, KL term to ensure temporal cooperation consistency. However, the paper has its drawbacks and I feel obliged to point them out below. I vote for the weak acceptance of this paper.
One of the main drawbacks of the paper is that it is extremely hard to grasp. Even the Abstract and Introduction are hard to understand without having a pass over the whole paper. The authors often use vague terms such as 'highly dynamic environments' or 'dynamics of the graph', which makes it hard to understand what they mean. The paper would benefit from more precise language. Some of the important notions of the paper are used before they are introduced, which makes the general picture very hard to understand and makes it hard to relate the work to the existing research.
'Related Work' section seems to be missing some recent work applying graph neural networks to multi-agent learning settings:
• Malysheva, Aleksandra, Tegg Taekyong Sung, Chae-Bong Sohn, Daniel Kudenko, and Aleksei Shpilman. "Deep Multi-Agent Reinforcement Learning with Relevance Graphs." arXiv preprint arXiv:1811.12557 (2018).
• Agarwal, Akshat, Sumit Kumar, and Katia Sycara. "Learning Transferable Cooperative Behavior in Multi-Agent Teams." arXiv preprint arXiv:1906.01202 (2019).
My questions to the authors:
• In section 3 you mention 'a set of neighbours ..., which is determined by distance or other metrics'. Can you elaborate on that? What are these metrics in your case? (See the sketch after these questions for the kind of construction I have in mind.)
• Just before Section 3.1, you say 'In addition, in many multi-agent environments, it may be costly and less helpful to take all other agents into consideration.' Have you run any experiments on that? In the appendix, you show that making the neighbourhood smaller negatively affects the performance, but what if you make it bigger? Ideally, I would like to see extended versions of Figures 8 and 9 in the main part of the paper, since they are very interesting and important for the claims the paper makes.
• At the end of Section 3.1, you mention the soft update of the target network. Later, in 3.3, you say that you do not use the target network. Can you elaborate more on that?
• In Equation 4, is it a separate KL for each of the attention heads? If yes, this is not clear from the formula.
• It will be useful to see the ablation experiments for all of the testbeds, not only for Battle.
• Why do you think the DQN performance drops in the second half of the training in Figure 4 for all of the runs?
• Have you tried summation instead of the mean aggregation step?
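To make the questions about the neighbourhood metric and the aggregation step concrete, here is the kind of construction I am assuming (my own NumPy sketch, not the authors' code; the Euclidean metric, the fixed neighbourhood size k, and the array names are my assumptions):

```python
import numpy as np

def knn_adjacency(positions, k):
    """positions: (N, 2) agent coordinates -> (N, N) boolean adjacency connecting
    each agent to its k nearest neighbours under the Euclidean distance."""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                   # exclude self-loops
    neighbours = np.argsort(dists, axis=1)[:, :k]
    adj = np.zeros(dists.shape, dtype=bool)
    adj[np.repeat(np.arange(len(positions)), k), neighbours.ravel()] = True
    return adj

def aggregate(features, adj, how="mean"):
    """features: (N, D) agent features; one neighbourhood aggregation step."""
    agg = adj.astype(float) @ features                # sum over neighbours
    if how == "mean":
        agg /= np.maximum(adj.sum(axis=1, keepdims=True), 1)
    return agg
```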
I will put comments for particular parts of the paper below.
ABSTRACT
>>> ...environments are highly dynamic
What do you mean precisely here?
>>> ...graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment
What is the 'dynamics of the underlying graph'? What is the 'graph of the multi-agent environment'?
>>> 'coordination is further boosted'
Not sure that 'boosted' is the right word here.
INTRODUCTION
>>> '...where mutual interplay between humans is abstracted by their relations'
Not sure what it means.
>>> we consider the underlying graph of agents...
The agent graph has not been introduced yet.
>>> DGN shares weights among all agent(s) making it easy to scale
What do you mean precisely by 'easy to scale'? Can you support this claim?
>>> We empirically show the learning effectiveness of DGN in jungle
Needs a reference to the testbed.
>>> ... interplay between agents and abstract relation representation
What is 'abstract relation representation?
>>> We consider partially observable environments.
What do you mean precisely by that? What is the MDP formalism most suitable for your problem statement? What is objective under your formalism?
>>> However, more convolutional layers will not increase the local region of node i.
What do you mean by that?
>>> As the number and position of agents vary over time, the underlying graph continuously changes, which brings difficulties to graph convolution.
What kind of difficulties?
>>> As the action of agent can change the graph at next timestep which makes it hard to learn Q function.
Why does it make it hard?
>>> DGN can also be seen as a factorization of a centralized policy that outputs actions for all the agents to optimize the average expected return.
It would be useful for the reader to compare your approach with all the other types of value function factorization. To me, your approach looks like a more sophisticated version of independent Q-learning; is that true?
Minor comments:
* In 3.2 it would be very helpful to put the dimensions for all of the variables for easier understanding.
* The brackets in the equation 3 are not very clear (what are you concatenating across?)
* In section 4, when describing an environment you say 'local observation that contains a square view with 11x11 grids'. What is the total size of the environment?
* The performance plots for Battle include ablations before the ablation subsection is introduced. This is a bit confusing.
* All figures/tables captions should be more detailed and descriptive.
* ‘However, causal influence is not directly related to the reward of environment.’ Should be ‘of the environment’. |
iclr_2020_HkgsPhNYPS | Deep neural networks (DNNs) have been shown to over-fit a dataset when being trained with noisy labels for a long enough time. To overcome this problem, we present a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training. Our method improves the task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stops learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of such an approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures. | --- Overall ---
This paper proposes an algorithm for learning from data with noisy labels which alternates between updating the model and removing samples that look like they have noisy labels, thereby allowing the training procedure to focus on clean samples. Overall, I found the paper very well-written, the proposed approach reasonable, and the experiments convincing. I have some questions about what assumptions are required for such a procedure to work, but in general, I think this is a strong paper.
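For readers skimming this review, the core loop as I understand it looks roughly like the following (my own paraphrase in Python, not the authors' implementation; the EMA coefficient and the argmax-agreement filter are my simplifications of their running-average ensemble):

```python
import numpy as np

def clean_sample_mask(probs_per_epoch, labels, ema=0.9):
    """probs_per_epoch: list of (N, C) prediction arrays, one per training epoch.
    Returns a boolean mask marking samples whose ensembled prediction agrees with
    the provided (possibly noisy) label; only these stay in the supervised loss."""
    running = np.zeros_like(probs_per_epoch[0])
    for probs in probs_per_epoch:                 # running average over epochs
        running = ema * running + (1.0 - ema) * probs
    return running.argmax(axis=1) == labels

# Training then alternates: fit on the kept subset with the supervised loss, use
# the filtered-out samples only through an unsupervised consistency loss, and
# re-estimate the mask as the ensemble predictions are updated.
```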
--- Major comments ---
1. I found it somewhat unclear how large the methodological contribution was. In particular, has the approach of filtering out samples based on disagreement with predictions from the model been tried before (i.e. the primary contribution is self-ensembling)? When the proposed method includes multiple pieces (Mean teacher + iteratively creating a filtered dataset + self-ensembling), I recommend being *very* explicit about which parts are new contributions. With that said, I greatly appreciated the ablation experiments which really highlight the importance of each piece.
2. I don't feel like I have a good sense of what assumptions need to be satisfied for the proposed method to work. For example, in section 2.2 the authors say "If the noise is sufficiently random, the set of correct labels will be representative to achieve high model performance". What is meant by "sufficiently random" here? Is there a formal version of this assumption? Do any independence or positivity assumptions need to be satisfied? Most importantly, what happens when the label noise does not look like what you expect? I would love to see some experiments examining the failure modes of the algorithm. For example, what happens when label errors are concentrated in a particular region of the feature space (or just generally depend on the features)? In this case, even if the filtering procedure works perfectly, the filtered dataset will have a different feature distribution than the data distribution, leading to potential covariate shift problems. If I understood the experiments correctly, the method was only tested on label-dependent noise models.
3. Along the same lines: what are the necessary conditions to guarantee that this procedure converges? While the authors suggest that self-ensembling prevents samples from oscillating in and out of training set, is this a guarantee or an empirical observation? More broadly, it is not totally clear what the filtering does to the objective function or whether this procedure is even formally optimizing a well specified objective function (potentially some temperature limit of a soft-weighted objective?).
--- Minor comments ---
1. Figures 1 and 4 are not readable in black and grey-scale.
2. I would front-load the justification for using self-ensembling. In particular, I think the two sentences starting with "When learning under label noise,..." on page 2 could be moved much earlier.
3. I'll be interested to see what the other reviewers say, but I found Figure 2 hard to follow.
4. The formatting of Section 4.2.4 makes it a bit hard to figure out where the text starts (as opposed to the table captions). |
iclr_2020_rylqmxBKvH | We tackle the problem of inpainting occluded area in spatiotemporal sequences, such as cloud occluded satellite observations, in an unsupervised manner. We place ourselves in the setting where there is neither access to paired nor unpaired training data. We consider several cases in which the underlying information of the observed sequence in certain areas is lost through an observation operator. In this case, the only available information is provided by the observation of the sequence, the nature of the measurement process and its associated statistics. We propose an unsupervised-learning framework to retrieve the most probable sequence using a generative adversarial network. We demonstrate the capacity of our model to exhibit strong reconstruction capacity on several video datasets such as satellite sequences or natural videos. | Summary
The paper presents an approach to perform video inpainting from corrupted input only. Their approach extends a method that uses GANs for denoising images to handle full sequences of images, building on top of work that handles inpainting in single images.
Strengths
1) The authors present extensive experiments on many datasets.
2) The presented approach is simple and general enough to be applied to many video problems.
3) The provided code is well structured and easy to read, and the attached webpage showcases their results well.
4) The paper is well-written.
Weakness
1) From the code and the description in the paper, it seems that a different corruption (https://github.com/anon-ustdi/ustdi/blob/7a81db4972ef9d4eabbd8fe354a8984a7771ae5d/src/datasets/corrupted.py) is applied at each step. Can the authors confirm if that is the case?
If this is the case, I am afraid the method cannot be called unsupervised, as across many steps the model would have seen different corruptions of the same video, and across many such corruptions the model can learn what an uncorrupted video looks like. It is okay if that is the case, but the claim of being unsupervised would not hold. If that is the case, the authors need to make comparisons with other supervised inpainting methods as well [1, 2].
2) Is there a dataset for which these corruptions exist naturally? Maybe [3, 4]. It would be nice to have experiments on a dataset where the corruptions are present naturally.
3) While the experiments are incomplete for Table 1, [4] outperforms the proposed approach on the one dataset for which the numbers have been reported. Performance comparisons with [4] are not fair as [4]'s implementation is in MATLAB.
Decision
While the presented approach is good, further experiments are required to validate the effectiveness of the approach in an unsupervised setting.
References
[1] Chuan Wang, Haibin Huang, Xiaoguang Han, and Jue Wang. Video inpainting by jointly learning temporal structure and spatial details.
[2] Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Deep video inpainting.
[3] https://github.com/stayhungry1/Video-Rain-Removal
[4] https://github.com/nnUyi/DerainZoo
[5] Alasdair Newson, Andrés Almansa, Matthieu Fradet, Yann Gousseau, and Patrick Pérez. Video inpainting of complex scenes. |
iclr_2020_HJg2b0VYDr | Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets, but they can be prohibitively expensive to apply in deep learning. Unlike in other areas of machine learning, the feature representations that these techniques depend on are learned in deep learning rather than given, requiring substantial training times. In this work, we show that we can greatly improve the computational efficiency of data selection in deep learning by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, or training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signal for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error. For core-set selection, proxies that are over 10× faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, making end-to-end training time savings possible. | What is the paper about ?
*The paper proposes a simple method to speed up active learning/core-set selection in the deep learning setting. The idea is that instead of using the full-scale deep model for data selection/uncertainty sampling, one uses a smaller/faster proxy model, and this proxy model's uncertainty estimates are good enough for choosing data points even though the proxy model's accuracy may not be good (see the sketch after this summary for the kind of selection rule I have in mind).
*The paper experiments with 5 datasets, with 2 active learning and 3 core-set selection strategies, and shows promising speed-ups with little to no regression in performance in most settings.
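To be explicit about what "uncertainty estimates are good enough for choosing data points" means operationally, something like least-confidence sampling with the proxy is what I picture (my own sketch, not the authors' code, and least confidence is only one of several possible selection criteria):

```python
import numpy as np

def select_by_proxy(proxy_probs, budget):
    """proxy_probs: (N, C) softmax outputs of the cheap proxy model on the
    unlabeled pool. Returns the indices of the `budget` least-confident points,
    which would then be labeled and used to train the large target model."""
    confidence = proxy_probs.max(axis=1)
    return np.argsort(confidence)[:budget]
```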
What I like about this paper ?
*The idea is simple and the paper is well written and easy to follow with ample references.
*Unlike a lot of active learning papers, this paper considers both image and text domains.
*Results in both Table 1 and Figure 2 have been reported over multiple runs (3 and 5, respectively). The authors also report standard deviations, which is important because the results can vary a lot from run to run in active learning.
*Plenty of empirical results do make the paper attractive and this definitely makes for a nice contribution to the active learning community.
What needs improvement ?
*Novelty is lacking. While the authors may be the first ones to apply this idea in the context of deep learning, they themselves note that the idea of using a smaller proxy like naive Bayes for larger models like decision trees is not new. I agree this should not be a criterion for rejection, especially given the extensive experimentation performed in the paper, and it has not impacted my score.
*The choice of datasets is not justified in the paper and could have been better. I don't see the point of showing that the method works well for both Amazon Review Polarity and Amazon Review Full. The same can be said for CIFAR10 and CIFAR100. I would have appreciated a more diverse set of datasets, and would have been happy to see results on WMT, given that data selection is an open challenge in machine translation.
*The paper initially proposes two ways of constructing a proxy: first by scaling down the model, and second by reducing the number of epochs. fastText falls under neither of those categories, and the giant speed-up claims made by the paper, like 41.7x, are only with fastText. It is also important to note that while the performance drop using fastText or the smaller version of VDCNN is not much compared to VDCNN, the performance gain is also not a lot compared to Random. For this reason, I am not convinced whether I should use SVP when I want a speedup or just randomly select points, especially in the text domain.
*I would have liked to see a comparison of the number of parameters in the different proxy models instead of just names like VDCNN29 and VDCNN9.
*A lot of experimental details are missing, which would make it difficult to reproduce the results unless the code is released.
*Why fastText worked as a proxy for VDCNN while AlexNet did not work as a proxy for ResNet is unclear from the paper.
Questions for Authors ?
*Do you plan to release the full code that can reproduce the results in the paper ? Based on my experience, I have found that a lot of active learning results are hard to reproduce and therefore it is difficult to draw any conclusions based on them.
*How sensitive are the results to hyper-parameter tuning and small things like choice of pre-trained word embeddings etc ? How did you select hyper-parameters at each stage during active learning ? |
iclr_2020_HklFUlBKPB | The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks. | This paper introduces an approach to recover weights of ReLU neural networks by querying the network with specifically constructed inputs. The authors notice that the decision regions of such networks are piece-wise linear corresponding to activations of individual neurons. This allows to identify hyperplanes that constitute the decision boundary and find intersection points of the decision boundaries corresponding to neurons at different layers of the network. However, weights can be recovered only up to permutations of neurons in each layer and up to a constant scaling factor for each layer.
The algorithm consists of two parts: identifying the parameters of the first layer, and identifying those of the subsequent layers. First, they sample a lot of line segments and find their intersections with the decision boundary, i.e. where pre-activations equal 0. Then they sample some points around the intersections and estimate the hyperplanes up to a scaling factor and a sign. For the first layer, they check whether some of the hyperplanes belong to it. For the consecutive layers, they proceed by moving from the intersection points along the hyperplanes until the decision boundary bends. Again, by identifying which of the bends correspond to intersections of the current layer's hyperplanes with the previous layers' ones, they are able to recover the parameters of the current layer by computing the angles of these intersections.
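To illustrate the kind of primitive the paper relies on (and the kind of pseudo-code I am missing for PointsOnLine), here is my own sketch of how a single boundary point on a segment could be located for a piecewise-linear network output; this is not the authors' code, and it finds only one crossing per call, which is exactly the ambiguity I raise below:

```python
import numpy as np

def find_boundary_point(f, a, b, tol=1e-8):
    """f: scalar piecewise-linear function (e.g. one network output).
    Bisects [a, b] down to a point where f stops being affine, assuming the
    segment contains at least one such 'bend'."""
    lo, hi = np.asarray(a, float), np.asarray(b, float)
    def is_affine(p, q):                       # midpoint test for affineness
        return abs(f((p + q) / 2) - (f(p) + f(q)) / 2) < tol
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2
        if is_affine(lo, mid):                 # bend must lie in the right half
            lo = mid
        else:                                  # bend lies in the left half
            hi = mid
    return (lo + hi) / 2

# Example: a single ReLU "network" f(x) = relu(w . x) bends where w . x = 0.
f = lambda x: max(np.dot([1.0, -2.0], x), 0.0)
print(find_boundary_point(f, [-1.0, 0.0], [2.0, 0.0]))   # approximately [0, 0]
```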
This paper tackles a very interesting and important problem that might have huge implications for security and many other aspects. However, I'm leaning towards Reject for the following reasons:
1. The algorithm's description is either incomplete or unclear. There are core functions such as PointsOnLine and TestHyperplane whose pseudo-code would be very helpful for understanding. For example, the authors say that PointsOnLine performs a "binary search," but then this function can find only one (arbitrary) intersection of a line segment with a decision boundary, while each sampled line can intersect multiple ones. If it is not a binary search, then the asymptotic analysis given at the end of Sec. 4.2 is incorrect. Even more mysterious is TestHyperplane: from the provided intuition I do not understand how it is possible to distinguish hyperplanes corresponding to the first layer vs. the other layers. In Sec. 4.3, second paragraph, the choice of R is unclear. How should one choose it to make sure that the closest boundary intersects it?
The authors consider a very limited setting of only fully-connected (linear) layers with ReLU activations. In this case it is easy to see that the resulting decision boundary is indeed piece-wise linear with a lot of bending. The authors themselves notice that "the algorithm does not account for weight sharing." For a CNN this would lead to each virtual neuron in each channel having its own kernel weights, although there must be one kernel per channel. Also, the authors admit that pooling layers affect the partitioning of the activation regions, making the proposed approach inapplicable. The authors did not discuss whether the proposed approach can handle batchnorm layers; such non-linear transformations could pose serious problems. All this rules out applications to, for example, all CNN-based architectures, which prevail in computer vision. The authors mention that their "algorithm holds with slight modification" for ResNets, but as mentioned earlier, convolutional, pooling and batchnorm layers make it not so trivial (if at all possible).
2. The experimental evaluation is extremely limited: it is all contained in just one paragraph. Although it is mentioned that "it is often possible to recover parameters of deep ReLU networks," they evaluated their approach on very shallow and narrow networks (only 2 layers, 10 to 50 neurons in each). The immediate question here is why this algorithm is not applied to a sufficiently deep NN, at least a network that could classify MNIST reasonably well. Actually, this would be a better proof-of-concept: given a pre-trained MNIST classifier, apply the proposed method, recover the weights, and check if you get the same output as from the original network. Instead, here the evaluation is given as a normalized relative error of the estimated vs. the true weights, which raises the question of how the scaling factor was chosen. Recall that the proposed method estimates the network's parameters only up to an arbitrary scaling factor. My guess is that for Figures 3 and 4 (both right) the estimated weights were re-scaled optimally to minimize the relative error. But in the end, one is interested in recovering the original weights of the network, not relative ones.
I am very confused by Fig. 3 left: why is the number of queries going down as the number of neurons increases? Should it not be that with more neurons the ambiguity also increases, requiring more queries? Again, this analysis is very limited; it would be very interesting to see how many more queries one needs for deeper layers of the network. But for this, experiments with networks deeper than 2 layers are necessary.
3. The choice of parameters is unclear and not discussed. How long should the line segments be, and how many of them are needed? How many points are sampled, and within which radius, to identify the hyperplanes? How should 'R' be chosen? And how do all these choices affect accuracy and performance?
Overall, the paper looks rather incomplete to me and requires a major revision. It would definitely benefit from including the "slight modification" for the case of ResNets. Also, the experimental evaluation should be completely re-done and extended.
iclr_2020_Hkg-xgrYvH | We propose a meta-learning approach that learns from multiple tasks in a transductive setting, by leveraging unlabeled information in the query set to learn a more powerful meta-model. To develop our framework we revisit the empirical Bayes formulation for multi-task learning. The evidence lower bound of the marginal log-likelihood of empirical Bayes decomposes as a sum of local KL divergences between the variational posterior and the true posterior of each task. We derive a novel amortized variational inference that couples all the variational posteriors into a meta-model, which consists of a synthetic gradient network and an initialization network. The combination of local KL divergences and synthetic gradient network allows for backpropagating information from unlabeled data, thereby enabling transduction. Our results on the Mini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification outperform previous state-of-the-art methods. | The authors argue for the importance of transduction in few-shot learning approaches, and augment the empirical Bayes objective established in previous work (estimating the hyperprior $\psi$ in a frequentist fashion) so as to take advantage of the unlabelled test data.
Since the label is, by definition, unknown, a synthetic gradient is instead learned to parameterize the gradient of the variational posterior and tailor it for the transductive setting. The authors provide an analysis of the generalization ability of EB and term their method _synthetic information bottleneck_ (SIB) owing to parallels between its objective and that of the information bottleneck. SIB is tested on two standard few-shot image benchmarks, CIFAR-FS and MiniImageNet, in addition to a synthetic dataset, exhibiting impressive performance and outperforming the baseline methods, in some cases by some margin, in the 1- and 5-shot settings alike.
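For readers unfamiliar with the mechanism, the inner-loop idea as I understand it is that a learned module predicts the gradient of the (unavailable) query loss with respect to the task-specific classifier, so unlabeled query examples can still drive the inner updates. A minimal PyTorch sketch of my understanding, not the authors' architecture (the module sizes and the use of logits as the synthetic-gradient input are my assumptions):

```python
import torch
import torch.nn as nn

feat_dim, n_way, n_query = 64, 5, 15
classifier_w = torch.zeros(n_way, feat_dim, requires_grad=True)   # task-specific weights
grad_net = nn.Sequential(nn.Linear(n_way, 128), nn.ReLU(),
                         nn.Linear(128, n_way * feat_dim))        # synthetic-gradient module
query_feats = torch.randn(n_query, feat_dim)                      # unlabeled query features

w = classifier_w
for _ in range(3):                                                # inner loop, no labels used
    logits = query_feats @ w.t()                                  # (n_query, n_way)
    syn_grad = grad_net(logits).mean(0).view(n_way, feat_dim)     # predicted dL/dw
    w = w - 0.1 * syn_grad                                        # differentiable update

# In the outer (meta-training) loop, the labeled query loss computed with the
# final w is backpropagated through these updates to train grad_net and the
# initialization, which is what makes the procedure transductive.
```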
The paper is technically sound and, for the most part, well-written, with the authors' motivations and explanation of the method conveyed quite straightforwardly. The basic premise of using an estimated gradient to fashion an inductive few-shot learning algorithm into a transductive one is a natural and evidently potent one. The paper does, however, at times feel disjointed and, to an extent, lacking in focus. The respective advantages of EB and the repurposing of synthetic gradients to enable the transductive approach are clear to me, yet while they might indeed be obviously complementary, what is not obvious is the necessity of the pairing: it seems there is nothing prohibiting the substitution of the gradient for a learned surrogate just as well under a deterministic meta-initialization framework. As such, despite sporting impressive results on the image datasets, I am not convinced about how truly novel the method is when viewed as a whole.
On a similar note, while the theoretical analysis provided in section 4 was not unappreciated, and indeed it was interesting to see such a connection between EB with information theory rigorously made, it does feel a little out of place within the main text, especially since it is not specific to the transductive setting considered, nor even to the meta-learning setting more broadly. Rather, more experiments, per Appendix C, highlighting the importance of transduction and therein the synthetic gradients and its formulation would be welcome. Indeed, it is stated that an additional loss for training the synthetic gradient network to mimic the true gradient is unnecessary; while I agree with this conclusion, I likewise do not think it would hurt to explore use of the more explicit formulation.
Considering the authors argue specifically for the importance of transduction in the zero-shot learning regime, I think it would be reasonable to expect experiments substantiating this, and the strength of their method in this regard, on non-synthetic datasets. As far as the toy problem is concerned, I am slightly confused as to the choice of baseline, both in the regard to its training procedure and as to why this was deemed more suitable than one purposed for few-shot learning, so that we might go beyond simple verification to getting some initial sense for the performance of SIB. Moreover, it is not clear from the description as to how $\lambda$ is implemented here. As it stands, Section 5, for me, offers little in the way of valuable insights. The experiments section on the whole, results aside, feels somewhat rushed; the synthetic gradients being a potential limiting factor for instance feels "tacked on" and seems to warrant more than just a passing comment.
Minor errors
- Page 7: the "to" in "let $p_\psi(w)$ to be a Gaussian" is extraneous
- Page 8: "split" not "splitted".
- Further down on the same page, "scale" in "scale each feature dimension" should be singular, and "Drop" is misspelled as "dropp".
- Page 9: "We report results **with using** learning rate..."
- _Equation 17_ includes the indicator function $k \neq i$ but $i$ is not defined within the context.
EDIT: changed score |
iclr_2020_Skgvy64tvr | We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks. Instead of using popular activation functions (such as ReLU), we advocate the use of k-Winners-Take-All (k-WTA) activation, a $C^0$ discontinuous function that purposely invalidates the neural network model's gradient at densely distributed input data points. The proposed k-WTA activation can be readily used in nearly all existing networks and training methods with no significant overhead. Our proposal is theoretically rationalized. We analyze why the discontinuities in k-WTA networks can largely prevent gradient-based search of adversarial examples and why they at the same time remain innocuous to the network training. This understanding is also empirically backed. We test k-WTA activation on various network structures optimized by a training method, be it adversarial training or not. In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks. | This paper addresses the important question of improving the robustness of deep neural networks against adversarial attacks. The authors propose a surprisingly simple measure to improve adversarial robustness, namely replacing typical activation functions such as ReLU with a k-winners-take-all (k-WTA) function, whereby the k largest values of the input vector are copied, and all other elements of the output vector are set to zero. Since the size of input and output maps varies drastically within networks, the authors instead use a sparsity ratio \gamma that calculates k as a function of input size. k-WTA networks can be trained without special treatment, but for low \gamma values the authors propose a training schedule, whereby \gamma is slowly reduced, then re-training takes place, until the desired value of \gamma is reached. The presented effect is backed up by extensive theoretical investigations that relate the increased robustness to the dense introduction of discontinuities, which makes gradient-based adversarial attacks harder. A small change in an input signal can change the identity of the "winning" inputs, and thus in a subsequent matrix multiplication make use of other rows or columns, thus allowing arbitrarily large effects due to small input variations. Empirical evaluations on CIFAR and SVHN for a variety of attacks and defense mechanisms demonstrate the desired effects, and illustrate the loss landscapes due to using k-WTA.
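To make the mechanism concrete, the activation can be written in a few lines (my own sketch, not the authors' released code):

```python
import torch

def kwta(x, gamma=0.2):
    """k-Winners-Take-All along the last dimension: keep the k = gamma * C largest
    activations of each row and set the rest to zero. The output jumps whenever
    the k-th and (k+1)-th largest values swap order, which is the source of the
    discontinuity exploited for robustness."""
    k = max(1, int(gamma * x.shape[-1]))
    vals, idx = x.topk(k, dim=-1)
    return torch.zeros_like(x).scatter(-1, idx, vals)

x = torch.randn(2, 10)
print(kwta(x, gamma=0.3))   # 3 nonzero entries per row
```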
I think the paper is a valuable and novel contribution to an important topic, and is definitely suitable for publication at ICLR. In principle there is just one novel idea, namely using k-WTA activations to improve adversarial robustness, but this claim is investigated thoroughly, in theory, and demonstrated convincingly in experiments. The paper is well written and tries to address all potential questions one might have surrounding the basic idea. There is code available, and the idea should be simple to implement in practice, so I would expect this paper to have large impact on the study of adversarial robustness.
I appreciate the thorough proofs of the claims in section C of the appendix, but I did not review all proofs in detail.
Potential weaknesses that should be addressed:
1. Section 4: I would propose to fully define A_rob here or at least provide a reference.
2. From Table 1 it seems that using k-WTA leads to a quite noticeable drop in standard accuracy, especially for sparse \gamma, which leads to the best adversarial robustness. Can you please comment on whether the full ReLU accuracy in the natural case can always be recovered by k-WTA networks, e.g. with larger \gamma?
3. Since small changes have a big effect in k-WTA, it should be investigated how robust the k-WTA networks are with respect to more natural perturbations, e.g. noisy input, blurring, translations, rotations, occlusions, etc. as introduced in (Hendrycks and Dietterich, 2019). It would be critical if such perturbations have a stronger effect on k-WTA.
4. Is there any intuition about whether k-WTA should be used everywhere in a deep network, or whether it makes sense to mix k-WTA and ReLU functions?
Minor comments:
5. WTA networks are very popular in computational neuroscience and are even hypothesized to represent canonical microcircuit functions (see e.g. Douglas, Martin, Whitteridge, 1989, Neural Computation, and many follow-up articles). You already cite the work of Maass, 2000a,b; it could be interesting to link your work to other papers in that field that motivate WTA from a biological perspective.
iclr_2020_Byg5ZANtvH | Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances (Zhang et al., 2017;Kuo et al., 2019), which characterize the optimization landscape of a particular nonconvex formulation of SaSD and give a provable algorithm which exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, datadriven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems, and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method. | This paper analyzes some of the optimization challenges presented by a particular formulation of "short-and-sparse deconvolution" and proposes a new general purpose algorithm to try to alleviate them. The paper is reasonably clearly written and the experimental results seem impressive (as a non-specialist in this area). However the experimental investigation on real data does not compare to a baseline, which seems like an essential part of investigating the proposed method. If this were corrected I would recommend an accept.
I would like to make it clear that I have little experience in this area and am not familiar with relevant previous literature, so I was only able to roughly judge novelty of the ideas and how the experimental results might compare with existing approaches.
Major comments:
For the experimental investigations on real data there was no baseline presented. Are there no task-specific algorithms that have previously been developed for deconvolution of calcium signals, fluorescence microscopy and calcium imaging? At the very least it would be helpful to compare to one of the other general purpose methods used for figure 6.
It seemed like a lot of the paper is taken up with a recap of the results presented in Kuo et al. (2019), and it didn't always seem clear exactly what the relevance of these results was to the present paper. In particular it seemed strange to me to devote so much space in section 2 to the ABL objective given that it is not used (as far as I can tell) for the proposed algorithm.
Minor comments:
The abstract says "We leverage... sphere constraints, data-driven initialization". Is the effect of these actually investigated experimentally?
In the abstract, "This is used to derive a provable algorithm" makes it sounds to me like this will be done in the current paper.
In the abstract, a reference for the "due to the spectral decay of the kernel a_0" claim would be helpful.
In the notation section, the definition of the Riemannian gradient seems a little sloppy mathematically: strictly f needs to be defined on $\reals^p$ (or an open subset of $\reals^p$ containing $S^{p - 1}$) in order for the right side of the gradient expression to be defined.
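For reference, the convention I assume the authors intend (the standard one for the unit sphere embedded in $\reals^p$, not a formula I am quoting from the paper) is $\mathrm{grad}\, f(a) = (I - a a^\top) \nabla f(a)$ for $a \in S^{p-1}$, i.e. the Euclidean gradient of an extension of $f$ to a neighbourhood of the sphere, projected onto the tangent space at $a$; this is exactly why the domain on which $f$ is defined needs to be stated.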
At the start of section 2, it would be helpful to give a one-sentence description of the unifying theme of the section.
It might be helpful to explicitly state that "coherence" is related to the strength of temporal / spatial correlations to aid developing the reader's intuition for the meaning of $\mu(a)$.
Throughout the paper the problems that multiple equivalent local optima (due to shift symmetry) cause come up repeatedly. What happens if this symmetry is removed straightforwardly by adding a term to the cost function to encourage a particular value of $a$ to be the largest?
In section 2.1, for "It analyzes an ABL...", it's not completely clear what "It" refers to (I presume Kuo et. al (2019) from context).
Marginalization in my experience refers to a "partial summing" operation, whereas just above (4) it is used to refer to a "partial maximization" operation. This seems non-standard to me, but is this usage standard in this field?
I didn't understand the relevance of the sentence "Under its marginalization... smaller dimension p << m." to the present paper. The sentence also seemed vague and difficult to understand if you were not already familiar with this result. It should also have a reference to justify this claim.
It wasn't clear to me whether the figure 1 panels were schematic "rough intuition" diagrams intended merely to be suggestive, or provably showing the type of behavior that happens in all cases. Also, what are the axes? Generic "parameter space", I guess? I also didn't follow why (a) appears projected onto a plane while (b) and (c) appear projected onto a sphere(?)
In section 3, under "momentum acceleration", it would be helpful to justify the claim that "In shift-coherent settings, the Hessian... ill-conditioned...".
In figure 4, the axis labels are much too small to read. In figure 5 (b), the red poorer results obscure the green better results.
In figure 4 (d), is it fair to say that the main convergence speed improvement for homotopy-iADM is in going faster from "quite near" the optimum to "really near" it, rather than getting "quite near" it in the first place? If so, isn't the latter more often what's relevant in practical applications?
In figure 5, it's a little confusing to switch color meanings between (a) and (b).
Reweighting seems to have a dramatic positive effect in figure 5. Is it worth investigating its effect in the other experiments as well, for example in figure 4?
In figure 6 (b), is there any reason not to compare against standard black box optimizers like vanilla SGD, ADAM, etc (possibly with projection to satisfy the sphere constraint if necessary)? Would they perform very badly?
Typo "Whilst $a_0$ nor $x_0$" should be "Whilst $a_0$ and $x_0$".
What does the square-boxed convolution operator in (8) mean? |
iclr_2020_HyxnH64KwS | In environments with continuous state and action spaces, state-of-the-art actor-critic reinforcement learning algorithms can solve very complex problems, yet can also fail in environments that seem trivial, but the reason for such failures is still poorly understood. In this paper, we contribute a formal explanation of these failures in the particular case of sparse reward and deterministic environments. First, using a very elementary control problem, we illustrate that the learning process can get stuck into a fixed point corresponding to a poor solution. Then, generalizing from the studied example, we provide a detailed analysis of the underlying mechanisms which results in a new understanding of one of the convergence regimes of these algorithms. The resulting perspective casts a new light on already existing solutions to the issues we have highlighted, and suggests other potential approaches. | The paper investigates why DDPG can sometimes fail in environments with sparse rewards. It presents a simple environment that helps the reader build intuition and supports the paper's empirical investigation. First, the paper shows that DDPG fails on the simple environment in ~6% of cases, despite the solution being trivial—go left from the start state. The paper then augments DDPG with epsilon-greedy-style exploration to see if the cause of these failures is simply inadequate exploration. Surprisingly, in 1% of cases DDPG still fails. The paper shows that even in these failure cases there were still rewarded transitions that could have been learned from, and investigates relationships between properties of individual runs and the likelihood of failure. The paper then explains how these failures occur: the policy drifts to always going right, and the critic converges to a piecewise constant function whose gradient goes to zero and prevents further updates to the policy. The paper then generalizes this deadlock mechanism to other continuous-action actor-critic algorithms like TD3 and discusses how function approximation helps mitigate this issue. Finally, the paper gives a brief overview of some existing potential solutions.
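The mechanism is easy to state in terms of the deterministic policy gradient itself (standard DDPG notation, not an equation copied from the paper): the actor is updated with $\nabla_{\theta} J \approx \mathbb{E}_{s}\big[\nabla_{\theta} \pi_{\theta}(s)\, \nabla_{a} Q_{\phi}(s, a)\big|_{a = \pi_{\theta}(s)}\big]$, so once the critic $Q_{\phi}$ becomes locally constant in the action, as happens when it fits a piecewise-constant value function in a sparse-reward deterministic task, the factor $\nabla_a Q_{\phi}$ vanishes, the actor stops moving, and the critic in turn keeps bootstrapping from the same poor policy.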
Currently, I recommend rejecting this paper; while it is very well-written and rigorously investigates a problem with a popular algorithm, the paper does not actually present a novel solution method. It does a great job of clearly defining and investigating a problem and shows how others have attempted to solve it in the past, but stops there. It doesn't propose a new method and/or compare the existing methods empirically, nor does it recommend a specific solution method.
A much smaller concern is that the problem investigated is somewhat niche; it happens in a very small percentage of runs and is mitigated by the type of function approximation commonly used. This lowers its potential impact a little.
The introduction does a great job of motivating the paper. I would've liked the related work section to elaborate more on the relationships between the insights in this paper and those of Fujimoto et al. (2018a), since the paper says the insights are related. Section 3 gave a very clear overview of DDPG. The simple environment described in section 4 was intuitive and explained well, and the empirical studies were convincing.
However, after section 4 established the existence of the deadlock problem in DDPG, I was expecting the rest of the paper to present a solution method and empirically validate it. Instead, section 5 generalizes the findings from section 4 to include other environments and algorithms. I felt that section 4 and 5 could have been combined and shortened, and the extra space used to explain and empirically test a novel solution method. For example, figures 6 and 7 seem to be conveying similar information, but figure 7 takes up almost half a page.
Currently this paper seems like a good candidate for a workshop. It convincingly establishes the existence of a problem and shows what causes the problem to occur, but doesn't contribute a solution to the problem. Submitting it to a workshop could be a good opportunity for discussion, feedback, and possible collaboration, all of which might help inspire a solution method. |
iclr_2020_SJgSflHKDr | Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to lower bound the change of performance when transferring from training to test set with the Wasserstein distance between the embedded training and test set distribution. We find that there is a trade-off affecting performance between how invariant a function is to changes in training and test distribution and how large this shift in distribution is. | The authors consider the relation between Frechet distance of training and test distribution and the generalization gap. The authors derive the lower bound for the difference of loss function w.r.t. training and test set by the Wasserstein distance between embedding training and test set distribution. Empirically, the authors illustrate a strong correlation between test performance and the distance in distributions between training and test set.
The motivation to find the relation between generalization gap and the Frechet distance of training and test distribution is sound. However, I am not sure that the lower bound as in Equation (1) is enough. I am curious that one can derive the upper bound for the relation or not. The finding about choosing a training data distribution should be close to the test data distribution seems quite trivial in some sense. I am not clear about its important since it is quite popular that the distribution shift affects the performance and many learning approach assumes same distribution for training and test data. Overall I feel that the contribution may be quite weak, and I lean on the negative side.
Below are some of my concerns:
1) About the lower bound in Equation (1), it seems unclear to me that, when W_2(p1, p2) = 0, we can infer any information about the test performance (it seems quite trivial in this case; the left-hand side term is greater than or equal to 0?). In my opinion, an upper bound is more important, since from it one could infer much more about the generalization gap (see the note after these comments).
2) The proof of Theorem 1 is quite hard to follow with the current notation: for the integrals in (i) and (ii), as well as in the step using the intermediate value theorem, which variables are being used? I am confused about which quantities are variables and which are constants in those integrals.
3) On page 5, in interpretation (1), for W2(p1, p2) = 0 the learned function fits the training distribution perfectly and is not ill-conditioned ==> why can one deduce that the test distribution is also fit perfectly? What we have in Theorem 1 is only the lower bound.
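To make concern 1 concrete (this is my own remark, not a result from the paper): if the loss, viewed as a function on the embedding space, is $L$-Lipschitz, then the standard Kantorovich-Rubinstein argument already yields an upper bound of the kind I am asking about, $|\mathbb{E}_{p_1}[\ell] - \mathbb{E}_{p_2}[\ell]| \le L \cdot W_1(p_1, p_2) \le L \cdot W_2(p_1, p_2)$, where $p_1, p_2$ are the embedded training and test distributions. A statement of this form, with the constant made explicit, would in my view be the more informative direction.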
iclr_2020_rJePwgSYwB | Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity. | I have read the authors' response. In the response the authors clarified the contributions of this paper. I agree with the authors that the analysis of gradient descent-ascent is a difficult problem, and the optimization results given in this paper are a contribution of importance. Because of this I have improved my score.
However, I do not agree with the authors that studying quadratic discriminators instead of more complicated ones should be considered a contribution rather than a drawback. In my opinion, as long as the focus is on WGANs, results involving standard neural network discriminators remain more desirable than the results in this submission. For example, similar results for a neural network discriminator might be even more impactful, because the optimization problem is even more difficult. Therefore I still consider the simple discriminator and generator a weak point of this paper.
======================================================================================================
This paper studies the training of WGANs with stochastic gradient descent. The authors show that, for a one-layer generator network and a quadratic discriminator, if the target distribution is modeled by a teacher network of the same form as the generator, then stochastic gradient descent-ascent can learn this target distribution in polynomial time. The authors also provide sample complexity results.
The paper is well-written and the theoretical analysis seems to be valid and complete. However, I think the WGANs studied in this paper are simplified so much that the analysis can no longer capture the true nature of WGAN training.
First, the paper only studies linear and quadratic discriminators. This is not very consistent with the original intuition of WGAN, which is to use the worst Lipschitz continuous neural network to approximate the worst function in the set of all Lipschitz continuous functions in the definition of Wasserstein distance. When the discriminator is as simple as linear or quadratic functions, there is pretty much no “Wasserstein” in the optimization problem.
Moreover, the claim that SGD learns one-layer networks can be very misleading. In fact what is a “one-layer” neural network?
- if the authors meant “two-layer network” or “single-hidden-layer network”, then this is not true, because as far as I can tell, the model $x = B \phi(A z)$ is much more difficult than the model $x = \phi(A z)$. The former is a standard single-hidden-layer network, which is non-convex, while the latter is essentially a linear model, especially when $\phi$ is known (a short sketch follows this list).
- if the authors meant “a linear model with elementwise monotonic transform”, then I would like to suggest that a more appropriate name should be used to avoid unnecessary confusion.
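A minimal sketch of why the second model is essentially linear. The assumptions below (that $\phi$ acts elementwise, is known, and is strictly monotonic) are mine for illustration, not conditions quoted from the paper:

```latex
\[
  x = \phi(Az)
  \;\Longrightarrow\;
  \phi^{-1}(x) = Az,
\]
% so recovering A reduces to a linear problem in the transformed data
% \phi^{-1}(x), whereas x = B \phi(Az) admits no such elementwise inversion
% and keeps the non-convexity of a single-hidden-layer network.
```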
As previously mentioned, the discriminators are too simple to approximate the Wasserstein distance, and therefore in general it should not be possible to guarantee recovery of the true data distribution. However, in this paper it is still shown that certain true distributions can be learned. This is due to the extremely simplified true model. In fact, even if the activation function $\phi$ is unknown, it seems that one can still learn $A^* (A^*)^\top$ well (for example, by Kendall’s tau). |
iclr_2020_HJxdTxHYvB | To deflect adversarial attacks, a range of "certified" classifiers have been proposed. In addition to labeling an image, certified classifiers produce (when possible) a certificate guaranteeing that the input image is not an $\ell_p$-bounded adversarial example. We present a new attack that exploits not only the labelling function of a classifier, but also the certificate generator. The proposed method applies large perturbations that place images far from a class boundary while maintaining the imperceptibility property of adversarial examples. The proposed "Shadow Attack" causes certifiably robust networks to mislabel an image and simultaneously produce a "spoofed" certificate of robustness. | The paper proposes a new attack, called the Shadow Attack, that generates adversarial images by perturbing natural images. The generated adversarial perturbations are imperceptible yet have a large norm so as to escape the certification regions. The proposed method incorporates the total variation of the perturbation, the change in the mean of each color channel, and the dissimilarity between channels into the loss function, to make sure the generated adversarial images are smooth and natural. Quantitative studies on CIFAR-10 and ImageNet show that the new attack method can generate adversarial images that have larger certified radii than natural images. To further improve the paper, it would be great if the authors could address the following questions:
- In Table 1, for ImageNet, the Shadow Attack does not always generate adversarial examples that have on average larger certified radii than their natural counterparts, at least for sigma=0.5 and 1.0. Could the authors explain the reason?
- In Table 2, it is not clear to me what the point is of comparing the errors of the natural images (which measure the misclassification rate on natural images) with those of the adversarial images (which measure the successful attack rate), and why this comparison helps to support the claim that the attack results in stronger certificates. In my opinion, to support the above claim, shouldn’t the authors provide a table similar to Table 1, directly comparing the certified radii of the natural images and adversarial images?
- From Figure 9, we see that the certified radii of the natural images have at least two peaks. Though on average the certified radii of the adversarial examples are higher than those of the natural images, they are smaller than the right peak. Could the authors elaborate more on these results?
- Sim(delta) should be Dissim(delta) which measures the dissimilarity between channels. A smaller dissimilarity suggests a greater similarity between channels.
- Lambda sim and lambda s are used interchangeably. Please make the notation consistent.
- The caption of Table 1 is a little vague. Please clearly state the meaning of the numbers in the table. |
iclr_2020_H1lfwAVFwr | Biological and artificial agents must learn to act optimally in spite of a limited capacity for processing, storing, and attending to information. We formalize this type of bounded rationality in terms of an information-theoretic constraint on the complexity of policies that agents seek to learn. We present the Capacity-Limited Reinforcement Learning (CLRL) objective which defines an optimal policy subject to an information capacity constraint. This objective is optimized by drawing from methods used in rate distortion theory and information theory, and applied to the reinforcement learning setting. Using this objective we implement a novel Capacity-Limited Actor-Critic (CLAC) algorithm and situate it within a broader family of RL algorithms such as the Soft Actor Critic (SAC) and discuss their similarities and differences. Our experiments show that compared to alternative approaches, CLAC offers improvements in generalization between training and modified test environments. This is achieved in the CLAC model while displaying high sample efficiency and minimal requirements for hyper-parameter tuning. | The authors propose capacity-limited reinforcement learning and apply an actor-critic method (CLAC) in some continuous control domains. The authors claim that CLAC gives improvements in generalization from training to modified test environments, and that it shows high sample efficiency and requires minimal hyper-parameter tuning.
The introduction started off making me think about this area in a new way, but as the paper continued I started to find some issues. To begin with, I think the motivation in the introduction could be improved. Why would I choose to limit capacity? This is not sufficiently motivated. I suspect that the author(s) want to argue that it *should* give better generalization, but this argument is not made very clearly in the introduction. Perhaps this is because it would be difficult to make this argument formally, and so it is merely suggested at?
Are there connections between this and things like variational intrinsic control (VIC, Gregor et al. 2016) and diversity is all you need (DIAYN, Eysenbach et al., 2019)? These works aim to maximize the mutual information between latent variable policies and states/trajectories, whereas this work is really doing the opposite. I would be interested in understanding the author’s take on how the two are related conceptually.
Moving to the connections with past work, this paper seriously abuses notation in a way that actually hinders comprehension. Some of the parts that really bothered me, and should be fixed to be correct:
Mutual information is a function of two random variables, whereas it is repeatedly expressed as a function of the policy. Being explicit about the random variables / distribution here is pretty important.
In Equation 2 (and the subsequent paragraph) the marginal distributions p_a(a) and p_s(s) are not well defined: marginalizing over what? What are these distributions? I might guess that p_s(s) is the steady-state distribution under a policy pi, and that p_a(a) marginalizes over the same distribution, essentially capturing the prior probability of each action under the policy. But these sorts of things need to be said explicitly (see the sketch after this list).
In the KL-RL section there is a sentence with “This allows us to define KL-RL to be the case where p_0(a, s) = \pi_0(a_t | s_t).” What does this actually mean? One of these is a joint probability over state and action, and the other is an action probability conditional on a state.
What does \pi_\mu(a_t) \sim \mathcal{D} mean?
In the block just before Algorithm 1, many of these symbols are never defined. This needs a significant amount of care (by the authors) and right now relies on the reader to simply make a best guess at what the authors probably intend.
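For concreteness, here is one standard way to make the random variables behind the mutual information explicit. The notation is mine; I am assuming $\rho^{\pi}$ denotes the steady-state distribution induced by the policy $\pi$:

```latex
\[
  p_a(a) = \sum_{s} \rho^{\pi}(s)\, \pi(a \mid s),
  \qquad
  I(S; A) = \sum_{s} \rho^{\pi}(s) \sum_{a} \pi(a \mid s)
            \log \frac{\pi(a \mid s)}{p_a(a)} .
\]
% The "mutual information of the policy" is then the mutual information
% between the state and action random variables under this joint
% distribution, which is presumably what Equation 2 intends.
```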
Overall in the first three sections the message I would like the authors to understand is that, in striving for a concise explanation they have significantly overshot. These sections require some significant work to be considered publishable.
The experiment in section 4.1 is intended to give a clean intuitive understanding of the method, but falls a bit short here. It is clean, but I needed more explanation to really drive the intuition home. I see that CLAC finds a solution more sensitive to the beta distribution, but help me understand why this is the right solution in this particular case.
I really disagree with the conclusions around the experiments in section 4.2. I do not think these results show that for the CLAC model increasing the mutual information coefficient increases performance on the perturbed environments. First, the obvious: how many seeds were used, and where are the standard deviations? Second, the trend is extremely small and the gap between CLAC and SAC is just as minor. Finally, CLAC has better performance on the training distribution, which means that it actually lost *more* performance than SAC when transferring to the testing and extreme testing distributions.
The results for section 4.3 are just not significant enough to draw any real conclusions. The massive temporal variability makes me very suspicious of those super tight error bands, but even without that question, the gap is just not very large.
Finally, in section 4.4 we see the first somewhat convincing experimental results. These look reasonable, but even here I have a fairly pointed question: compared with the results in Packer et al (2018) the amount of regression from training to testing is extremely large (whereas they found vanilla algorithms transfer surprisingly well). Can you explain why there is such a big discrepancy between those results and these? But again, this section’s results are in my opinion the most convincing that something interesting is happening here.
Lastly, in section 8.1 the range of hyper-parameters for the mutual information coefficient is very broad, which really makes it hard to buy the claim of requiring minimal hyper-parameter tuning.
All in all there is something truly interesting in this work, but in the present state I am unable to recommend acceptance, and the amount of work required along with questions raised lead me to be fairly confident in this assessment. |
iclr_2020_H1gpET4YDB | We present BlockBERT, a lightweight and efficient BERT model that is designed to better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training time, which also enables attention heads to capture either short-or long-range contextual information. We conduct experiments on several benchmark question answering datasets with various paragraph lengths. Results show that BlockBERT uses 18.7-36.1% less memory and reduces the training time by 12.0-25.1%, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa. | The authors propose BlockBERT, a model that makes the attention matrix of Transformer models sparse by introducing block structure. This has the dual benefit of reducing memory and reducing training time. The authors show on various question answering tasks that their model is competitive with RoBERTa.
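For readers less familiar with the idea, here is a minimal sketch of a block-structured attention mask. This is my own illustrative code with made-up sizes; the paper's actual block assignment (for example, how blocks are arranged across heads) may differ:

```python
import numpy as np

def block_attention_mask(seq_len, block_size):
    """Boolean mask in which each token attends only within its own block."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for start in range(0, seq_len, block_size):
        end = min(start + block_size, seq_len)
        mask[start:end, start:end] = True
    return mask

# A dense attention matrix has seq_len**2 entries; the block-diagonal
# pattern keeps only about seq_len * block_size of them.
mask = block_attention_mask(seq_len=512, block_size=128)
print(mask.sum(), 512 * 512)  # 65536 retained entries vs. 262144 dense ones
```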
1. Can the authors add additional details about the training of their model in Section 4.1? It is not clear, for example, what the vocabulary size is: is it the RoBERTa vocabulary size of around 50k, or the smaller BERT vocabulary size? I believe this will affect the memory usage and training speed.
2. For the datasets such as TriviaQA and SearchQA, how is RoBERTa finetuned on these tasks? By doing the window approach?
3. The authors compare to RoBERTa and Sparse BERT as baselines in the section on performance. However, can the authors also include metrics on training time and memory in Table 2 for Sparse BERT as well as for other proposed sparse-attention Transformer architectures (for example the Correia paper or the Sukhbaatar paper)? It is not clear what the savings from this architecture are compared to sparse Transformers in general.
4. The analysis in Section 4.3 is quite unclear due to how compressed the description is and how tiny the graphs are.
5. The authors mention that attention heads can be sparsified due to the memory usage and quadratic time complexity. Other work has also shown that the attention heads are quite redundant and can be pruned away, so attention head dropout is effective. For example https://arxiv.org/abs/1905.09418 |
iclr_2020_rkePU0VYDr | Many defenses for Convolutional Neural Networks are based on a simple observation that the adversarial examples are not robust and small perturbations to the attacking input often recover the desired prediction. Intuitively, the adversarial examples occupy a very small cone in the decision space which is surrounded by a large area corresponding to the correct class. While the intuition is simple, a detailed understanding of this phenomenon is missing from the research literature. We identify a family of defense techniques that are based on the instability assumption. The defenses include deterministic lossy compression algorithms and randomized perturbations to the input that all lead to similar gains in robustness. We present a comprehensive experimental analysis of when and why perturbation defenses work and potential mechanisms that could explain their effectiveness (or ineffectiveness) in different settings. | The paper studies the robustness of adversarial attacks to transformations of their output. Specifically, for standard methods for crafting adversarial examples, the authors evaluate whether the crafted examples remain adversarial under (stochastic or deterministic) transformations such as Gaussian noise and SVD compression. The authors argue that different transformations have a similar impact on the perturbed inputs. Finally, they argue that the reason why these transformations can sometimes recover the correct label is that the loss is more unstable at these points (from a first- and second-order perspective).
From a conceptual point of view, I did not find the paper particularly impactful. In particular, I can identify three claimed contributions:
a) The L2 distortion introduced by these transformations might have more impact than the specific transformation used (figure 1).
Firstly, I would argue that this effect is most prominent for small L2 distortions (distortions larger than 5/40 do exhibit noticeable differences). Secondly, I am not sure why one would expect these transformations to have fundamentally different impact. The adversarial attacks considered manipulate the input using first-order methods in input space. Since the transformations are model- and data-agnostic it seems expected that the primary mechanism behind their effect is the pixel-wise distortion of the image.
b) Adversarial perturbations that are robust to one type of transformation tend to also be robust to other transformations (table 2).
This is probably the most interesting observation of the paper, indicating that attackers can bypass several of these transformation defenses by only aiming to be robust to a subset of them. At the same time, I am not sure what the impact of this observation is given that we already know how to bypass most of these defenses anyway.
c) The instability of adversarial perturbations can be explained by a larger gradient norm and Hessian spectrum (figure 2, 3).
First of all, I do not understand how this is considered a potential explanation of the empirical behavior. If gradient norm is indicative of instability, then _natural_ images would be unstable (since the gradient wrt the wrong class is large). Furthermore, instability does not explain why adding noise leads to the _correct_ class as opposed to a random class. Perhaps most importantly, from what I understand (also skimming the code), Figure 2 plots the gradient of the _cross-entropy loss_ with respect to each class. However, the norm of the gradient is directly affected by the softmax probability of each class. Hence, for natural images, the probability of the correct class is high, leading to a small norm, while for the adversarial class it is low, leading to a large norm. Based on this reasoning, this plot does not convey information about the classifier's stability but rather about the softmax probabilities assigned to each class.
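To spell out the softmax dependence described above, here is the standard derivation (my own notation, not taken from the paper):

```latex
% Cross-entropy loss for target class c, with logits z(x) and p = softmax(z):
\[
  L_c(x) = -\log p_c(x),
  \qquad
  \frac{\partial L_c}{\partial z_j} = p_j - \mathbf{1}[j = c].
\]
% By the chain rule, \nabla_x L_c inherits this scaling, so the gradient
% norm is small whenever class c already receives probability close to 1
% and large when it receives low probability, regardless of how unstable
% the classifier actually is at x.
```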
In summary, after reading the paper, I am not sure how our understanding of transformation robustness has changed or what insight we have gained that will help us design future attacks and defenses (especially given prior work on how most of these defenses can be bypassed). I thus recommend rejection.
Comments to authors:
-- Many of the attack details are missing (what is the epsilon allowed for PGD? what does the "c" parameter correspond to for CW attacks?) which prevented me from fully comprehending the experimental results.
-- LBFGS is not a particular attack, it is a general optimization algorithm, https://en.wikipedia.org/wiki/Limited-memory_BFGS
-- The fact that adversarial points are close to correctly classified points does not mean that a small amount of random noise will change the classifier's prediction. In high dimensions, the distance to a hyperplane can be epsilon, but crossing it by moving in a random direction requires a movement of roughly epsilon * sqrt(number of dimensions).
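A short sketch of the dimension dependence referred to above (standard concentration reasoning, my own notation):

```latex
% Locally model the decision boundary as the hyperplane {x : w^T x = b}
% with ||w||_2 = 1, and let the point lie at distance \epsilon from it.
% For a uniformly random unit direction u in R^d,
\[
  \mathbb{E}\big[(w^{\top} u)^2\big] = \tfrac{1}{d}
  \quad\Longrightarrow\quad
  |w^{\top} u| \approx \tfrac{1}{\sqrt{d}} \ \text{typically},
\]
% so a random step of length r moves only about r / \sqrt{d} toward the
% boundary, and crossing it requires r on the order of \epsilon \sqrt{d}.
```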
==============
UPDATE: I appreciate the authors' response. As stated in my original review, I do recognize the performance of a non-adaptive adversary as a contribution. However, I still don't think that the impact of this finding is significant enough for publication at ICLR. I hence keep my original score.
Response to specific points:
-- Unfortunately, the authors did not address my concerns about the gradient of the loss. The experiments and reasoning of the paper should be sufficient without citing other work in the author response. Moreover, it is unclear if the papers referenced in the response are showing fundamental properties of adversarial examples or are specific to the attacks considered. For instance, there has been work challenging these papers (https://arxiv.org/abs/1907.12138).
-- As mentioned in my original review, measuring the norm of the gradient of the softmax loss will inevitably take into account the predicted probability of each class. Hence in order to truly measure sensitivity it would be important to work with the logits instead of the softmax probabilities. |
iclr_2020_HJgK0h4Ywr | We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation. First, we characterize the concept "disentangled representations" used in supervised and unsupervised methods along three dimensionsinformativeness, separability and interpretability-which can be expressed and quantified explicitly using information-theoretic constructs. This helps explain the behaviors of several well-known disentanglement learning models. We then propose robust metrics for measuring informativeness, separability, and interpretability. Through a comprehensive suite of experiments, we show that our metrics correctly characterize the representations learned by different methods and are consistent with qualitative (visual) results. Thus, the metrics allow disentanglement learning methods to be compared on a fair ground. We also empirically uncovered new interesting properties of VAE-based methods and interpreted them with our formulation. These findings are promising and hopefully will encourage the design of more theoretically driven models for learning disentangled representations. | Summary:
The paper presents a new set of metrics for evaluating disentangled representations in both supervised and unsupervised settings. Disentangled representations are evaluated along three dimensions: informativeness, separability, and interpretability. While previous work offers metrics for similar dimensions (e.g., (Eastwood & Williams, 2018)), the paper suggests that the metrics of the submission are superior to (Eastwood & Williams, 2018), based on a comparison between FactorVAE and Beta-VAE.
Strengths:
(1) The metrics of the paper are well motivated from an information-theoretic standpoint. While, in some cases, the metrics themselves are straight applications of information theory (e.g., informativeness metric == mutual information), the authors came up with new metrics when existing information-theoretic definitions could be fooled by, e.g., introducing noisy and independent latent representations z.
(2) Experimental results show that these metrics are better than previous metrics at discriminating between FactorVAE and Beta-VAE (in an experimental setting where the former is clearly superior to the latter).
(3) As opposed to much of prior work, these metrics do not require any training, which can be considered a plus as they do not require adaptation for each (sub)domain.
(4) The paper is overall well written and clear. I very much enjoyed reading it.
Weaknesses:
(1) My main concern with the paper, which makes me vote for “weak accept” instead of “accept”: The metrics of the paper are compared against previous metrics (Eastwood & Williams, 2018) on *only two* types of disentangled representations, namely FactorVAE and Beta-VAE, and in one experimental condition. (There is plenty of other experimental material in the appendix which also introduces AAE, but none of it seems to compare against previous *metrics*). I think comparing metrics on only two systems is somewhat problematic, and we don’t know how general the results are. I would have preferred to see FactorVAE and Beta-VAE evaluated with fewer hyperparameter choices to allow introducing more variational approaches. Could you (the authors) at least evaluate AAE against Eastwood & Williams too? This concern is not just mere quibbling, as the authors have shown (e.g., in Section 3.2) that specific edge cases can fool naïve metrics (e.g., by introducing noisy and independent latent variables), and there may be other edge cases that the authors have not considered. I think evaluating new metrics against previous work (i.e., metrics) on only two underlying systems is not very convincing. I acknowledge that the paper offers *lots* of experiments – the full pdf is 30 pages (!) – but I think some of the existing ones could go to leave space for evaluations using more underlying VAE/baselines/edge-case systems.
(2) The paper presents six metrics (MI, MISJED, WSEPIN, WINDIN, RMIG, JEMMIG) along three dimensions. While each individual metric makes sense, I feel the paper lacks a discussion section that ties these pieces together and suggests a way of using these metrics conjointly across the three dimensions towards building better-disentangled representations (the paper has a “discussion” section, but it is very short and actually more of a conclusion). The more metrics we have, the more chances each underlying VAE model can “win” on one of the metrics, which is not particularly enlightening.
(3) The paper is not self-contained, and some parts are almost impossible to understand without familiarity with (Kim & Mnih, 2018) and (Higgins et al., 2017a). It uses technical terms of these papers without explanations (e.g., TC is not even spelled out it seems). Hyperparameters of these papers (e.g., beta, gamma) are used without explanation.
(4) There is no related work section. Such a section could, e.g., make what is borrowed from (Kim & Mnih, 2018; Higgins et al., 2017a) more understandable.
Overall, I think it is a nice paper that makes significant contributions to the problem of building better disentangles representations thanks to better evaluation metrics. The empirical support for some of the claims (e.g., superiority to (Eastwood & Williams, 2018)) is a bit weak, but other strengths mentioned above largely make up for that.
Minor comments:
The definitions of SEPIN@k and INDIN@k don’t seem quite right. The summation iterates over z_0, …, z_k-1, leaving off z_k, …, z_L-1, but the latter variables might contain some z’s with lowest mutual information with x. The way the ‘sorted’ function is written only has the effect of reordering z_0, …, z_{k-1} among themselves, never considering any of the z_k and above whatever their MI’s are, which is probably not what the authors meant. Perhaps ‘sorted’ is intended to both sort and rename z’s, but if z’s are indeed renamed I think this should be indicated mathematically otherwise the equation is wrong (e.g., (z’_1, …, z’_{L-1}) = sorted_by_MI(z_1, …, z_{L_1})).
Typos in WSEPIN and WINDIN definitions: “0 = 1” -> “i = 1”?
Figure 1: “Image”? The paper is written in general terms and doesn’t seem to assume otherwise x is an image representation (or does it?).
Section 3.4: “Table. 1” -> “Table 1”
Table 1 vs. title of Section 3.1: The author’s informativeness metric doesn’t have a name, but the section title contains “informativeness,” which could be confusing vs. “informativeness” in Table 1 which is a completely different metric.
Page 8, interpretability: “TC=10”. Do you mean “TC loss” here? I presume 10 is the value of a hyperparameter of the FactorVAE paper (gamma?), and not the actual value of the TC term. Same question for Figure 9 and later figures of the appendix. |
iclr_2020_Hkee1JBKwB | Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations. A potential solution is to extend to higher-order recurrent models. However, such a model requires a large number of parameters and operations, making it intractable to learn and prone to overfitting in practice. In this work, we propose Convolutional TensorTrain LSTM (Conv-TT-LSTM), which learns higher-order Convolutional Long Short-Term Memory (ConvLSTM) efficiently using Convolutional Tensor-Train Decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information with low memory and computational requirements by efficient low-rank tensor-train representations. We evaluate our model on Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM and other ConvLSTM-based approaches, but with much fewer parameters. | This paper proposes a convolutional tensor-train (CTT) format-based high-order convolutional LSTM approach for long-term video prediction. This paper is well-motivated. Video data usually have high-dimensional inputs, and the proposed method aims to explicitly take into account more than one hidden representation of previous frames - both lead to a huge number of parameters. Therefore, some sort of parameter reduction is needed. This paper considers two different types of operations - convolution and tensor-train (TT) decomposition - in an interleaved way. The basic model considered in this paper is a high-order variant of convolutional LSTM (convLSTM).
There exist several works using tensor decomposition methods, including TT, to compress a fully connected layer or a convolutional layer in neural nets, in order to break the memory bottleneck and accelerate computation. This paper takes a different direction - it further embeds convolution into the TT decomposition and thus defines a new type of tensor decomposition, termed convolutional tensor-train (CTT) decomposition. CTT is used to represent the huge weight matrices that arise in the high-order convLSTM. To the best of my knowledge, this combination of convolution and TT decomposition is new.
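For reference, here is the standard (non-convolutional) tensor-train format that the paper builds on, written in generic notation (mine, not the paper's):

```latex
\[
  \mathcal{T}(i_1, i_2, \ldots, i_d)
  = G_1[i_1]\, G_2[i_2] \cdots G_d[i_d],
  \qquad
  G_k[i_k] \in \mathbb{R}^{r_{k-1} \times r_k},\; r_0 = r_d = 1,
\]
% so a d-way weight tensor is stored via d small core tensors whose size is
% governed by the TT ranks r_k. My understanding is that the paper's CTT
% variant replaces some of these core contractions with convolutions.
```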
The paper is well-written and the literature review is well done. Experimental results demonstrate improved performance over the convolutional LSTM baseline with fewer parameters, and the qualitative results show sharp and clean digits. This improvement could be attributed to multiple causes: the high-order modeling, the tensor decomposition-based compression, or the CTT. The authors also provide an ablation study, but it mainly concerns comparisons with ConvLSTM.
Despite the promising results, this paper is not ready for ICLR yet. Below is a list of suggested points that need to be addressed:
(1) Yang et al. (2017) claim that TT-RNN without convolution can also capture spatial and temporal dependence patterns in video modeling. This is an important baseline, but it is missing from the current version of the paper.
(2) The justification of high-order modeling in long-term prediction: the first-order model also implicitly aggregates multiple past steps. It would be good to add more experimental evidence to support the necessity of the high-order modeling.
(3) There is some unjustified complexity in the CTT approach. How does it compare to TT for the high-order ConvLSTM?
Perhaps, a more complete ablation study should include:
(1) LSTM with TT but without high-order and convolution
(2) LSTM with high-order and TT but without convolution
(3) ConvLSTM with TT
(4) ConvLSTM with CTT
(5) ConvLSTM with high-order and TT
(6) ConvLSTM with high-order and CTT
Question:
• How is the backpropagation done for the CTT core tensors?
• What is the error propagation issue of first-order methods, and why is the high-order one not prone to it?
iclr_2020_rylmoxrFDH | The training of stochastic neural network models with binary (±1) weights and activations via continuous surrogate networks is investigated. We derive new surrogates using a novel derivation based on writing the stochastic neural network as a Markov chain. This derivation also encompasses existing variants of the surrogates presented in the literature. Following this, we theoretically study the surrogates at initialisation. We derive, using mean field theory, a set of scalar equations describing how input signals propagate through the randomly initialised networks. The equations reveal whether so-called critical initialisations exist for each surrogate network, where the network can be trained to arbitrary depth. Moreover, we predict theoretically and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance. This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to ±1, for deeper networks to be trainable. | The paper provides an in-depth exploration of stochastic binary networks, continuous surrogates, and their training dynamics with some potentially actionable insights on how to initialize weights for best performance. This topic is relevant and the paper would have more impact if its structure, presentation and formalism could be improved. Overall it lacks clarity in the presentation of the results, the assumptions made are not always clearly stated and the split between previous work and original derivations should be improved.
In particular in section 2.1, the author should state what exactly the mean-field approximation is and at which step it is required (e.g. independence is assumed to apply the CLT but this is not clearly stated). Section 3 should also clearly state the assumptions made. That section just follows the “background” section where different works treating different cases are mentioned and it is important to restate here which cases this paper specifically considers. Aside from making assumptions clearer, it would be helpful to highlight the specific contributions of the paper so we can easily see the distinctions between straightforward adaptations of previous work and new contributions.
Specific questions:
It might be worth double-checking the equation between eq. (2) and eq. (3); the boundary case (l=0) does not make sense to me. In particular, what is S^0?
What does the term hat{p}(x^l) mean in the left hand side of eq.(3)?
In eq. (7) (8) why use the definition symbol := ?
At the beginning of section 3.1, please indicate what “mathcal(M)” precisely refers to. Using the term P(mathcal(M) = M_ij) does not make much sense if the intent is to use a continuous distribution for the means.
Just after eq. (9), please explain what Xi_{c*} means.
Small typo: Eq. (10) is introduced as “can be read from the vector equation 31”, what is eq. (31)?
In section 5.2, why reduce the training set size to 25% of MNIST?
iclr_2020_BJxwPJHFwS | Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis. | Summary:
This paper builds upon the CROWN framework (Zhang et al., 2018) to provide robustness verification for Transformers. The CROWN framework is based upon the idea of propagating linear bounds and has been applied to architectures like MLPs, CNNs and RNNs. However, in Transformers, the presence of cross-nonlinearities and cross-position dependencies makes the backward propagation of bounds in CROWN computationally intensive. A major contribution of this paper is to use forward propagation of bounds in self-attention layers along with the usual backward propagation of bounds in all other layers. The proposed method provides an overall reduction in computational complexity by a factor of O(n). Although fully forward propagation leads to loose bounds, the mixed (forward-backward) approach presented in this work provides bounds that are as tight as those of the fully backward method.
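For context, the simplest form of bound propagation is the IBP baseline that the paper compares against. Below is a minimal sketch of the IBP step through a single linear layer; the function name and shapes are my own illustration, not the paper's code:

```python
import numpy as np

def ibp_linear(lower, upper, W, b):
    """Propagate an axis-aligned input box through y = W x + b.

    Splitting W into positive and negative parts selects, for each output
    coordinate, the corner of the input box that minimizes or maximizes it.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

# Example: a word embedding perturbed by at most eps in each coordinate.
x, eps = np.array([0.5, -0.2]), 0.1
W, b = np.array([[1.0, -2.0], [0.5, 0.3]]), np.zeros(2)
lo, hi = ibp_linear(x - eps, x + eps, W, b)
```

Linear bound propagation (as in CROWN) keeps linear functions of the input rather than plain intervals, which is why it yields tighter certificates than this baseline.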
Strength:
Use of forward propagation to reduce computational complexity is non-trivial
Strong results on two text classification datasets:
Lower bounds obtained are significantly tighter than IBP
The proposed method is an order of magnitude faster than fully backward propagation, while still maintaining the bounds tight.
Weakness:
Experiments only cover the task of text classification. Experiments on other tasks utilizing transformers would have made the results stronger.
The paper makes the simplifying assumption that only a single position of an input sentence will be perturbed. They claim that generalizing to multiple positions is easy in their setup but that is not supported. The paper needs to declare this assumption early on in the paper (abstract and intro). As far as I could tell, even during the experiments they perturb a single word at a time.
The paper is more technical than insightful. I am not at all convinced, from a practitioner's viewpoint, that such bounds are useful. However, given the hotness of this topic, someone or other has to work out the details. If the math is correct, then this paper can be it.
The presentation requires improvement. Some parts, example, the Discussion section cannot be understood.
Questions:
The set-up in the paper assumes only one position in the input sequence is perturbed for simplicity. Does the analysis remain the same when multiple positions are perturbed?
Suggestions:
A diagram to describe the forward and backward process would significantly improve the understanding of the reader.
In Table 3, I am surprised that none of the sentiment-bearing words were selected as the top word by any of the methods. Among the ‘best’ words, those chosen by their method do not seem better than those selected by the grad method.
Several typos in the paper: spelling mistakes in “obtain a safty guarantee”, poor sentence construction in “and independent on embedding distributions”, subject verb disagreement in “Upper bounds are discrete and rely on the distribution of words in the embedding space and thus it cannot well verify”.
I have not verified the math to see if they indeed compute a lower bound. |
iclr_2020_r1lF_CEYwS | ML algorithms or models, especially deep neural networks (DNNs), have shown significant promise in several areas. However, recently researchers have demonstrated that ML algorithms, especially DNNs, are vulnerable to adversarial examples (slightly perturbed samples that cause mis-classification). Existence of adversarial examples has hindered deployment of ML algorithms in safety-critical sectors, such as security. Several defenses for adversarial examples exist in the literature. One of the important classes of defenses are manifold-based defenses, where a sample is "pulled back" into the data manifold before classifying. These defenses rely on the manifold assumption (data lie in a manifold of lower dimension than the input space). These defenses use a generative model to approximate the input distribution. This paper asks the following question: do the generative models used in manifold-based defenses need to be topology-aware? Our paper suggests the answer is yes. We provide theoretical and empirical evidence to support our claim. | I. Summary of the Paper
This paper studies robustness to adversarial examples from the
perspective of having 'topology-aware' generative models. Next
to some experiments on data sets with a manifold structure, the
main contribution of the paper is a tandem of theorems that state
the conditions under which models can recover the topology---or
the number of connected components---of a data set correctly,
thereby making them more robust to adversarial examples.
II. Summary of the Review
This paper provides a novel perspective on adversarial examples through
the lens of Riemannian Geometry and topology. I appreciate novel
research that employs topology-based methods, but at present, I cannot
fully endorse accepting the paper. Specifically, I see the following
issues:
- Missing clarity: while the appendix is very comprehensive, which
I appreciate, the main text could be improved; some statements appear
redundant, while others need to be re-formulated to build intuition
- This paper appears to span both theory and applications. I appreciate
this attempt, knowing full well that this is no easy feat to
accomplish. However, the main theoretical result on the number of
connected components only applies to mixtures of Gaussian
distributions, but the purported scope of the paper is the analysis of
manifold-based defences in general. I would expect a more in-depth
discussion of the limitations of the theorem. Can we expect this to
generalise? Moreover, 'topology' is reduced to 'connected components'
in this paper. While this is perfectly adequate in the sense of
connected components being a particular concept from topology, I would
expect this to be clarified much earlier in the paper. In addition,
connected components are a very basic and coarse concept, so I am
wondering to what extent it is sufficient to describe models purely
based on that information.
- As a sort of corollary to the previous point, the experiments could be
improved. I like the idea of employing known data sets with a simple
manifold structure, but the setup is somewhat preliminary; I would
prefer to see an analysis of border cases or limit cases in which the
theorem _almost_ applies (or not); plus, a more in-depth analysis of
stochastic effects during training: do _all_ models end up being
robust if their number of connected components is sufficiently large?
Is there a dependency between the number of connected components and
vulnerabilities---are models with a very small number of connected
components more vulnerable than models with a very larger number of
connected components? The present experimental section is lacking this
depth.
- The same statements apply to the INC example. I found this super
instructive, but it is only _one_ case on _one_ manifold---I would
like to see more details here; maybe some of the experiments in the
appendix could be moved to the front? I have some suggestions for
shortening the paper (see below).
Despite these issues, I think this paper can be a strong contribution if
properly revised; since I am positive that at least some of these
suggestions could be performed within a revision cycle, I want to be
upfront and state that I will definitely consider raising my score,
provided that my concerns are addressed appropriately!
I have to state that I am _not_ an expert in adversarial examples, but
an expert in topology-based methods; I consider this paper to belong to
the latter field given its theoretical contributions about 'recovering'
the correct density distribution.
III. Clarity
The paper provides an extensive background to Riemannian geometry, which
I appreciated as a reference. Nevertheless, there are improvements to
the main text that I would suggest:
- Please consider changing the title to 'On the need...'
- The manifold assumption is that data lie _on_ a manifold or _close to_
a manifold whose intrinsic dimension is much lower than that of the
ambient space. This is not stated in sufficient precision in the
paper; please correct the usage on p. 1 and p. 2
- In terms of notation, why use $p_M$ to denote the density on the whole
of $\mathds{R}^n$? I would expect $p_M$ to refer to the density on $M$
rather than the density of the whole space.
- If $M$ is a disjoint union of manifolds, please consider using
a '\cupdot' operator to make this more clear.
- If the manifolds are pairwise disjoint, how can the resulting data
distribution still contain any ambiguities? I find this hard to
harmonise with the statement in Section 3.4 about the existence of
a classifier that separates the manifolds. Please clarify the meaning
behind the term 'ambiguities' here.
- The '(R0)' requirement definition and the discussion in Section 3.2
strike me as needlessly complex. Would it be possible to shorten this
or move some content to the appendix? I think it would be sufficient
to have Eq. 1 and mention how it could be solved.
- Section 3.3 could be shortened as well, if I am not mistaken; while it
is good to know how such models look, the 'change of variable formula'
is not used directly any more in the paper; I think it might be easier
to write down a generic form of the density for each model.
- The salient points of Section 3.4 seem to be the projection point;
maybe this could also be shortened somewhat in the interest of having
more space for experiments. The relevant information of this section
was to learn how projections work for different models, but it would
be sufficient to keep Eq. 3 and discuss Eq. 4 in the appendix
- The 'topological differences' mentioned in Section 4 could be
clarified: the paper talks about differences in connected components.
- The $\lambda$-density set appears to be a superlevel set, if I am not
mistaken: the level set would be defined for a single threshold only,
while this paper introduces the pre-image of an interval.
- I would expect the pre-image to be defined as $p^{-1}$, not $p_{-1}$;
the latter strikes me as somewhat non-standard usage
- The statements preceding Definition 1 require some more intuition;
what is the purpose of these assumptions?
- The notation for the Euclidean balls should be briefly introduced
before Definition 1. I recognise this as a standard notation, but
since this is the first appearance of the symbol, it should be
mentioned at least briefly.
- Definition 1 could also be phrased more intuitively; additional
sentence behind each definition would be useful, such as:
$\delta_\lambda$ is the largest $\delta$ such that the full
superlevel set is contained in a ball of radius $\delta_\lambda$.
Figure 1 is already helpful in that regard; it should ideally precede
the definition and/or be made larger to be more illustrative
- Maybe the results of Theorem 2 could already be stated earlier; it
could probably be explained reasonably well when describing the number
of connected components as a sort of 'baseline' topological complexity
that a model has to satisfy.
On a more abstract level, could it also be summarised as 'the
inclusion of prior knowledge is a necessary condition for robustness'?
- I would not state that the main goals of the experiments are to
'check the correctness of Theorem 2'---the proofs should be
responsible for this! I would rather say that the main goals are to
provide empirical evidence for the *relevance* or *applicability* of
the Theorem.
- The paragraph on 'Latent vector distributions' contains the most
relevant information, viz. the knowledge of what the paper considers
to be a 'topology-aware' and 'topology-ignorant' model; this should be
highlighted more; maybe the figures could be extended to contain
information about $n_X$?
Overall, I like the idea of having one overarching question in a paper
that is subsequently answered or discussed under different aspects. I
very much commend the authors of the paper for choosing this sort of
writing style!
IV. Experimental setup
As mentioned above, the experiments require more depth. I would propose
adding more repetitions of the training process for different data sets
and analysing the impact of the 'parameters' used in the theorems.
In particular, the 'INC' experiments show great promise for multiple
repetitions. Why not choose more data sets and more starting positions
and visualise the trajectories of _multiple_ draws, as shown for
a single draw in Figure 4 a,b,c?
Also, please consider moving the additional experiments from the
appendix to the main paper.
More compelling examples would also be helpful. Why not generate data
sets consisting of more than two manifolds? At present, the largest
issue I see in this section is that the conceptual 'leap of faith'
between the theory and the applications is simply too large. Would it
not be possible to perform the same experiments on a simple digits data
set, say MNIST?
V. Minor issues
The paper is well-written overall. There are only a few typos and minor
style issues that I would recommend fixing:
- please check the usage of quotes; it should be ``pulled back'', not ''pulled back'' in LaTeX
- please check the usage of citations; if '\citet' is used, citations
can be used as nouns directly (for example: 'in Pennec (1999)' instead
of 'in (Pennec, 1999)'.
- etc.. --> etc.
- it approximates posterior distribution --> it approximates a posterior distribution
- for compound distribution --> for a compound distribution
- a relatively simpler distribution --> a simpler distribution
- near manifold --> near a/the manifold
- we simply the minimum --> we denote (?) the minimum
- satisfies the followings --> satisfies the following properties
- number of connected component --> number of connected components
- Experimental result --> Experimental results
VI. Update after the rebuttal
The authors managed to address the majority of my comments. Overall, I still would like to see a more detailed/in-depth experimental setup, but I realise that this not directly possible within the timeframe allotted during the rebuttal period. I am thus raising my score. |
iclr_2020_SJlPOCEKvH | Universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. A common paradigm is to pre-train a feature extractor on large amounts of data then fine-tune it as part of a deep learning model on some downstream task (i.e. transfer learning). While effective, feature extractors like BERT may be prohibitively large for some deployment scenarios. We explore weight pruning for BERT and ask: how does compression during pre-training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40%) do not affect pre-training loss or transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. High levels of pruning additionally prevent models from fitting downstream datasets, leading to further degradation. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance. | This work is an empirical study of how pruning at the pre-training stage affects the subsequent transfer learning (fine-tuning) stage. The main idea is to carefully control the amount of sparsity injected into BERT through weight magnitude pruning and study the impact on accuracy. The experimental setup is mostly well done, especially the part that disentangles complexity restriction from information deletion. During the exploration, the authors made several interesting observations, such as that 30-40% of the model weights do not encode any useful inductive bias, which could help shed some light for future work on both training and compressing BERT-like models.
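For concreteness, here is a minimal sketch of the unstructured magnitude pruning discussed in this review (my own illustrative code, not the authors' implementation):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of entries with the smallest magnitude."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(768, 768))
W_pruned = magnitude_prune(W, sparsity=0.4)  # roughly 40% of entries set to zero
```

Because the surviving nonzeros land wherever the large-magnitude weights happen to be, the resulting sparsity pattern is unstructured, which is exactly why the hardware concern raised below applies.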
Overall, the paper is well written and clearly explained. The goal is meaningful, and this is a sensible contribution to the ongoing interest in compressing large BERT-like models for efficient training and inference.
My major concern is with its novelty and how directly it can benefit computation. First, although the findings are interesting, the methods used in this paper are not new. Various pruning techniques have been explored in prior work, which makes the novelty of this paper's contribution somewhat limited.
Furthermore, the study has mostly focused on the impact of random sparsity on accuracy. However, it is known that it is really difficult for modern hardware to benefit from random sparsity, because it leads to irregular memory accesses, which negatively impact performance. It has been observed that speedups are very limited or can even be negative when the random sparsity is >95% [1]. Therefore, it is hard to judge how inference or training can benefit from 30-40% weight sparsity. Going forward, the authors are encouraged to choose pruning methods that lead to regular memory access, to avoid adversely impacting practical acceleration on modern hardware platforms.
[1] Learning Structured Sparsity in Deep Neural Networks. Wen et al. NeurIPS 2016 |
iclr_2020_Skl6peHFwS | How well can hate speech concept be abstracted in order to inform automatic classification in codeswitched texts by machine learning classifiers? We explore different representations and empirically evaluate their predictiveness using both conventional and deep learning algorithms in identifying hate speech in a ~48k human-annotated dataset that contain mixed languages, a phenomenon common among multilingual speakers. This paper espouses a novel approach to handle this challenge by introducing a hierarchical approach that employs Latent Dirichlet Allocation to generate topic models that feed into another high-level feature set that we acronym PDC. PDC groups similar meaning words in word families during the preprocessing stage for supervised learning models. The high-level PDC features generated are based on Ombui et al, (2019) hate speech annotation framework that is informed by the triangular theory of hate (Stanberg,2003). Results obtained from frequency-based models using the PDC feature on the annotated dataset of ~48k short messages comprising of tweets generated during the 2012 and 2017 Kenyan presidential elections indicate an improvement on classification accuracy in identifying hate speech as compared to the baseline | Comments:
-This paper considers codeswitched hate speech texts from an NLP perspective. The dataset considers mixed languages.
-Focuses on the Kenyan presidential election.
-Paper has severe formatting issues as well as simple issues like capitalization. Additionally many plots are rather unattractive (seem to be produced using Excel or Google Sheets, whereas generally something like matplotlib or seaborn is preferred).
-The paper puts a lot of examples into the main text, whereas these are usually put into the appendix, or only a few examples are placed in the main text. Usually the main text focuses more on higher level analysis.
-Figure 2 is formatted incorrectly (the caption runs on to the next page)
-I appreciate the effort that went into data annotation as well as the disclosure of the demographics of annotators.
-Table 1 should make the metric much clearer (it's mentioned in the main text, but it should be in the caption too; also, the best performance should usually be bolded)! Generally, TF-IDF features or PDC features seem to have the best performance. The performance of the CNN does not seem very strong. I think a simple RNN-based approach might also be worth considering. It would also be worth analyzing whether the differences between the methods are attributable to underfitting or overfitting.
-If the paper is proposing a new task with many baselines, it's also important to release the dataset and code in my opinion (I believe ICLR allows this to be done in a de-anonymized way).
Review:
This paper deals with an important problem in social media analysis. With the spread of hate speech and hate crimes by rioting separatists in Hong Kong as well as equally hateful attacks on Chinese people in the west, I think that this is an issue that deserves more attention in our community. Unfortunately this paper needs more polish to be appropriate for ICLR. It also might be better suited to an NLP focused conference (such as ACL, EMNLP, or NAACL) although I think if the technical contribution is clear enough it could be suitable for ICLR as well.
I think the big things to focus on would be including more baselines, improving polish in the paper, and providing a clearer high-level analysis of the dataset (with specific examples mostly left for the appendix). |
iclr_2020_Bkel6ertwS | The recent expansion of Machine Learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular. Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C). Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure. However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models. In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster. The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks. The bidirectional LSTM RNN model outperformed all the models and gained the best prediction scores. This demonstrates the utilization of complex models and the importance of memory of sequential DNA states for the chromatin folding. We identify informative epigenetic features that lead to the further conclusion of their biological significance. | The authors compared a bidirectional LSTM to traditional machine learning models for genomic contact prediction. Although the paper presents an interesting application of ML, I vote for rejection since i) the paper is similar to previously published works and lacks methodological novelty, ii) the description about the data and methods is not written clearly enough, and iii) the evaluation needs to be strengthened by additional baseline models and evaluation metrics.
Major comments
=============
1. As pointed out by Maria Anna Rapsomaniki, the paper is similar to previously published papers, which are not cited.
2. It is unclear which features were used as inputs. How were the features that are described in section 3.1 represented? Are these binary values or the mean of the Hi-C signal over the genomic region? What does 20kb mean? How were genomic segments of different chromosomes treated?
3. The motivation for using a biLSTM in section 4.3 is unclear. What does the choice of a biLSTM have to do with the fact that DNA is double stranded? The DNA sequence is not used as input to the model, and genomic contacts can be formed between non-adjacent segments. Please compare recurrent models to non-recurrent models such as fully connected or convolutional networks.
4. The authors used the weighted MSE as the evaluation metric, which was also used as the training loss of the biLSTM, but it is unclear if the same loss was used for the linear and gradient boosting regressions. To understand whether the performance differences are due to the loss function or the model architecture, the authors should use the same loss function for training all models, and report additional evaluation metrics such as the MSE and the R^2 score.
5. The authors used linear regression weights to quantify feature importance (figure 4). It is unclear if the biLSTM assigned the same importance to features. To quantify the importance of features learned by the biLSTM, the authors could consider correlating the activations of LSTM units with input features, and also analyzing whether importances depend on the position of features.
6. Which hyper-parameters of the biLSTM and baseline method did the authors tune and how?
7. The authors should compare the described weighted MSE to the mean squared logarithmic error loss, which is commonly used for penalizing large errors less.
8. Why did the authors use the activation of the center LSTM hidden state instead of concatenating the last hidden states of the forward and reverse LSTMs? By using only the center hidden state, half of the forward and reverse LSTM activations are ignored (a short sketch of the two options is given below).
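To make the two read-out options in point 8 concrete, a minimal PyTorch sketch (my own illustration with made-up dimensions, not the authors' code):
import torch
import torch.nn as nn

T, B, F, H = 100, 8, 10, 64   # hypothetical sequence length, batch size, input features, hidden size
lstm = nn.LSTM(input_size=F, hidden_size=H, bidirectional=True, batch_first=True)
x = torch.randn(B, T, F)
out, (h_n, c_n) = lstm(x)     # out: (B, T, 2H); h_n: (2, B, H)

center = out[:, T // 2, :]                         # read-out at the central time step (as I understand the paper)
concat_last = torch.cat([h_n[0], h_n[1]], dim=1)   # alternative: last forward state + last backward state
The second option summarizes the full sequence in both directions, which is what point 8 asks about.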
iclr_2020_HygpthEtvr | In this paper, we consider the problem of training structured neural networks (NN) with nonsmooth regularization (e.g. 1 -norm) and constraints (e.g. interval constraints). We formulate training as a constrained nonsmooth nonconvex optimization problem, and propose a convergent proximal-type stochastic gradient descent (Prox-SGD) algorithm. We show that under properly selected learning rates, with probability 1, every limit point of the sequence generated by the proposed Prox-SGD algorithm is a stationary point. Finally, to support the theoretical analysis and demonstrate the flexibility of Prox-SGD, we show by extensive numerical tests how Prox-SGD can be used to train either sparse or binary neural networks through an adequate selection of the regularization function and constraint set. | [Summary]
This paper proposes Prox-SGD, a theoretical framework for stochastic optimization algorithms that (1) incorporates momentum and coordinate-wise scaling as in Adam, and (2) can handle constraints and (non-smooth) regularizers through the proximal operator. With proper choices of hyperparameters, the algorithm is shown to converge asymptotically to stationarity for a smooth non-convex loss + convex constraint/regularizer. The algorithm is empirically tested on training binary and sparse neural nets on MNIST and CIFAR-10.
[Pros]
A theoretical framework that incorporates most of the commonly used tweaks in stochastic optimization for deep learning, and a convergence result that broadly establishes asymptotic convergence to stationarity.
[Cons]
The result of Theorem 1 sounds like a rather straightforward application of classical results; important sub-cases such as Adam violate the assumption (Adam has \rho_t = \rho so \sum \rho_t^2 = \infty; see the step-size example after these points) and thus are not covered. From this it seems to me that Theorem 1 says things mostly about the "easier" sub-cases, and is thus perhaps not very surprising and a bit limited in bringing in new messages.
The experiments are mostly done on simple problems --- 3 of the 4 figures are on MNIST. The specific tasks (training sparse / binary neural nets with MLP / vanilla CNN architectures) considered in the experiments are all very extensively studied in prior work, and the results in this paper say at most that the proposed Prox-SGD works for these tasks.
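To make the step-size point above concrete, the standard Robbins-Monro-type conditions I have in mind are (assuming, as the comment suggests, that Theorem 1 needs square-summable step sizes):
\sum_t \rho_t = \infty and \sum_t \rho_t^2 < \infty,
which are satisfied by, e.g., \rho_t = \rho_0 / t^a with a in (1/2, 1], but not by a constant \rho_t = \rho (for which \sum_t \rho_t^2 = \infty). This is why Adam-style constant scaling falls outside the analyzed regime.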
Overall, I like the idea in this paper that we can put together a unified framework for stochastic optimization algorithms and incorporate things like momentum and regularization that were previously treated separately. However, beyond proposing such a framework, it seems that the contributions on both the theoretical and empirical sides are a bit limited at this point.
***
I thank the authors for the response and the effort put into revising the paper. I am glad to see the additional experiments on training sparse networks on CIFAR-100 (a much harder task than MNIST and also CIFAR-10), in which the proposed method works well. This largely resolved my concerns on the experimental side.
However, I'd still like to hold my evaluation on the theoretical side, in that approaches for handling constraints / non-smoothness / momentum are fairly well understood in the optimization literature. The present result (Theorem 1) conveys the message that these approaches can be combined and still work (i.e., any limit point of the iterates is stationary), but it is not really a result that gives us new understanding / novel proof techniques beyond that.
I have slightly improved my rating to reflect my updated evaluation. |
iclr_2020_S1lHfxBFDH | This paper proposes a novel per-task routing method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, routing networks can be applied to learn to share each group of parameters with a different subset of tasks to better leverage tasks relatedness. However, this use of routing methods requires to address the challenge of learning the routing jointly with the parameters of a modular multi-task neural network. We propose the Gumbel-Matrix routing, a novel multi-task routing method based on the Gumbel-Softmax, that is designed to learn fine-grained parameter sharing. When applied to the Omniglot benchmark, the proposed method improves the state-of-the-art error rate by 17%. | In many ways this work is well presented. However, I have major concerns regarding the novelty of the proposed method and the theoretical rationale for the key design choices. Although the authors do cite and discuss (Rosenbaum et al., 2019), what is very much not clear to me is how the Gumbel-Matrix Routing proposed in this work differs from past work using the Gumbel Softmax within routing networks. It seems like past work even focused on using only the task for routing, so it is not clear to me how the approach here is really novel in comparison. Even if there is some distinction I am missing, the high level idea is clearly not that new. Additionally, there is not much theoretical discussion about what the Gumbel Softmax adds to routing networks.
The bias/variance tradeoff of Gumbel Softmax / RELAX / REINFORCE was already highlighted in (Rosenbaum et al., 2019). Can the performance of the model on the settings tested be attributed to this tradeoff? If so, would a RELAX model perform even better? Moreover, there is not much discussion of the important implications of using the Gumbel Softmax trick in the context of routing. First, as the authors acknowledge but don't really elaborate on, using the Gumbel Softmax means we must backprop through every possible routing choice in each layer. As a result, the computation of the Gumbel approach scales steeply with the number of modules, limiting its applicability to more ambitious settings. Moreover, while a clear motivation of this work is eliminating interference between tasks, it is not really explained how the Gumbel Softmax does this and how it compares to hard routing decisions in this respect. During backprop, the computation is very similar to mixture-of-experts models, and should contain more interference than hard routing. Can you explicitly show that the shape of the Gumbel distribution results in less interference between modules during learning than the standard mixture-of-experts softmax approach? (A small sketch of the soft-vs-hard computation I am referring to is given below.)
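To make the soft-vs-hard comparison concrete, a minimal PyTorch sketch of what I have in mind (my own illustration, not the authors' architecture; the module count and shapes are made up):
import torch
import torch.nn.functional as F

num_modules, d = 4, 32
modules = torch.nn.ModuleList([torch.nn.Linear(d, d) for _ in range(num_modules)])
logits = torch.nn.Parameter(torch.zeros(num_modules))   # per-task routing logits
x = torch.randn(8, d)

# Gumbel-Softmax routing: every module is evaluated and every module receives (weighted) gradients,
# which is where both the computational scaling and the potential cross-task interference come from.
w = F.gumbel_softmax(logits, tau=1.0, hard=False)
y_soft = sum(w[i] * modules[i](x) for i in range(num_modules))

# Hard (e.g. RL-trained) routing: only the sampled module is executed and updated;
# the routing distribution itself is then trained with REINFORCE/RELAX instead of reparameterization.
idx = int(torch.distributions.Categorical(logits=logits).sample())
y_hard = modules[idx](x)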
Furthermore, (Rosenbaum et al., 2019) found that a number of RL-based models outperform the Gumbel Softmax when routing on multi-task settings of CIFAR-100 and the Stanford Corpus of Implicatives. The authors do not provide any explanation for why this approach did not succeed in their settings. This also leads me to doubt how impressive the results presented here are, as there is really not any apples-to-apples comparison with the same architecture and different routing decisions. In Tables 1 and 2 the best baseline is full sharing. This indicates to me that the performance difference with other cited baselines has to do with different architecture choices and not with changes in the routing policy itself. The experiments can be much improved by discussing why past approaches to Gumbel-based routing have failed and by thoroughly comparing against other methods for just the routing decisions, with the same base architecture, as done in prior work. Unfortunately, in its current form, the submitted draft does not provide enough context for the community to understand the implications of the proposed approach, even though it achieves good performance.
iclr_2020_BylWYC4KwH | Deep neural networks (DNNs) build high-level intelligence on low-level raw features. Understanding of this high-level intelligence can be enabled by deciphering the concepts they base their decisions on, as human-level thinking. In this paper, we study concept-based explainability for DNNs in a systematic framework. First, we define the notion of completeness, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior. Based on performance and variability motivations, we propose two definitions to quantify completeness. under degenerate conditions, our method is equivalent to Principal Component Analysis. Next, we propose a concept discovery method that considers two additional constraints to encourage the interpretability of the discovered concepts. We use game-theoretic notions to aggregate over sets to define an importance score for each discovered concept, which we call ConceptSHAP. On specifically-designed synthetic datasets and real-world text and image datasets, we validate the effectiveness of our framework in finding concepts that are complete in explaining the decision, and interpretable. | Summary
The paper proposes metrics for evaluating concept-based explanations in terms of 'completeness' -- characterized by (1) whether the set of presented concepts is sufficient to retain the predictive performance of the original model, and (2) how performance is affected when all information useful to a complete set of concepts (as per (1)) is removed from the features at a specific layer. Assuming concept vectors lie in linear sub-spaces of the activations of the network at a specific layer, the underlying assumption is that if a given set of 'concept' vectors is complete, then projecting the intermediate features of an input onto the sub-space spanned by the concepts should not reduce predictive performance. Based on these characterizations, the paper proposes an objective for discovering a complete and interpretable set of concepts given candidate concept clusters. Furthermore, the paper proposes metrics to quantify the importance of each concept (using Shapley values) and the per-class importance of concepts. The authors conduct experiments on toy data and on image and text classification datasets, and show that their proposed approach can discover concepts that are complete and interpretable.
Strengths
- The paper is well-written and generally easy to follow. The authors do a good job of motivating the need for the completeness metric, characterizing it specifically in the case of concepts spanning sub-spaces of activations and subsequently utilizing the same to motivate an effective concept discovery method.
- The proposed approach and the angle being looked at in the paper are novel in the sense that prior work has mostly focused on characterizing concepts which are salient. Ensuring that concepts are sufficient for predictive performance ensures the fidelity of the interpretability approach.
- I like the fact that the authors decided to capture both aspects of the completeness criterion -- (1) projection to concept-space should not hurt performance and (2) how removing concept-projected information from the features affects performance. Capturing both provides a holistic viewpoint of the features of the concerned layer -- (1) can explain features/decisions with an associated metric based on the 'imperfect' set of concepts and (2) captures the effectiveness of the information present in the features if we remove all concept-useful information.
- The choice of using Shapley values in ConceptSHAP as a metric provides a whole range of desirable properties in the metrics used to indicate the quality of concepts at different levels of granularity -- the per-class importances of a concept across classes add up to the overall importance of that concept. Furthermore, the observation that, under certain conditions, the top-k PCA vectors maximize the defined completeness scores is interesting as well.
Weaknesses
Having said that, I'm interested to hear the thoughts of the authors on the two points below. My primary (not major) concern is the fact that the proposed approach is centered only around ensuring and evaluating the fidelity of the discovered concepts to the original model.
- While the proposed approach and evaluation metrics are novel, and the results generally support the claims of the paper -- toy experiments result in recovery of the ground-truth concepts, and concepts identified for text and image classification offer feasible takeaways -- there is still no proper assessment of the human-interpretability of the discovered concepts. The proposed approach to discovering complete concepts mostly acts like a pruning approach on top of a candidate set of concepts, based solely on fidelity to the original model. The obtained concepts are mostly explained via plausible hypotheses. One possible experiment that could capture the reliability of the discovered concepts is the following -- "Given the set of concepts (and representative patches) and SHAP values across all (or the most relevant) classes, are humans able to predict the output of the model?" Is it possible to set up an experiment of this sort? I believe it might help in understanding the utility of the SHAP values in this context (beyond the advantages in terms of manipulation and characterized completeness).
- The experimental results for image classification are presented on the Animals with Attributes (AwA) dataset. AwA is a fine-grained dataset with only one class present per image. I'm curious about what happens when the same approach is applied to datasets where images have multiple classes (and potentially distractor classes) present. Is it possible that it becomes harder to discover 'complete' concepts (subject to the availability of a decent approach to provide an initial set of candidate clusters)? Do the authors have any thoughts on this and any potential experiments that might address this?
Reasons for rating
Beyond the above points of discussion, I don’t have major weaknesses to point out. I generally like the paper. The authors do a good job of identifying the sliver in which they make their contribution and motivate the same appropriately. The proposed evaluation metrics and discovery objectives offer several advantages and can therefore generally serve as useful quantifiers of concept-based explanation approaches. The strengths and weaknesses highlighted above form the basis of my rating. |
iclr_2020_B1eXvyHKwS | It has widely shown that adversarial training (Madry et al., 2018) is effective in defending adversarial attack empirically. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for linear classification problem for a full range of adversarial radius ε. Specifically, we show that if the data themselves are "ε-strongly linearly separable", adversarial training with radius smaller than ε converges to the hard margin solution of SVM with a faster rate than standard training. If the data themselves are not "ε-strongly linearly separable", we show that adversarial training with radius ε is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius ε has low confidence in each data point. Experiments corroborate our theoretical finding well. | TL;DR: The paper gives interesting, theoretical results to adversarial training. The paper only uses linear classifiers, which are hardly the same problem as deep networks where adversarial attacks are problematic. Some conclusions from theorems can be vague or informal, and therefore are not very convincing. I vote for rejecting this paper since it is hard to claim it informs deep learning research (the motivating reason for doing adversarial training). However, I am not familiar with theoretical analysis of adversarial attack/defense, so I am open to counter-arguments.
=================
1. What is the specific question/problem tackled by the paper?
The paper gives a theoretical analysis of the theoretically less-studied procedure of adversarial training, and shows properties of adversarial training in comparison to regular training, for both linearly separable and inseparable data. The paper sheds light on some empirical behavior of adversarially trained networks, namely that they are more robust to outliers and lower in performance.
2. Is the approach well motivated, including being well-placed in the literature?
I am not an expert of adversarial samples, so I am ill-equipped to judge the novelty of the paper. The research direction itself is well motivated, and in the realm of deep learning, it is posed as a first paper to theoretically analyze adversarial training.
However, the authors only analyzed linear classifiers. This makes the results of the paper ill-suited for deep networks, whose non-linearity is arguably the reason why adversarial samples are such a problem. The motivation of the paper is thus greatly diminished. For linear classifiers, I do not know if there is existing work on their robustness when perturbed samples are trained on, but to be well placed in the literature, the authors must either claim there is none or cite those papers.
3. Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.
Claims and novelties in this paper include:
(1) Adversarial training converges faster than regular training if samples are ε-strongly linearly separable,
(2) If samples are not "ε-strongly linearly separable", adversarial training is robust to outliers, while regular training is not,
(3) Confidence is low for all (training) samples if ε is large.
Only (1) seems to be sufficiently proved. I am not certain that this is a very useful result, and I am open to counter-arguments.
(2) and (3) have steps that are vague and informal:
(2) That regular training is susceptible to outliers is shown only by example, and people already know that. Also, the claim relies on the assumption that p_i^1 and p_j^2 are on the same scale, while samples that violate the decision boundary can have arbitrarily large p_i^1 or p_j^2. Since outliers are often the violators, and inliers often are not, a small number of carefully placed outliers can make ||p^2|| quite large while each p_i^1 can be very small. It is also worth noting that the logic seems to boil down to "N_1 >> N_2, so inliers should overwhelm outliers, making the training robust"; this claim is not guaranteed to hold.
(3) I am not very sure if I understand it correctly, but the logic seems to be that the logits are bounded, so they cannot be too large, and so the confidence is low. However, the bound also involves the magnitude of w and x from the other class, so the final step of the proof is either unclear or the bound can indeed be quite large. Note that |w^T x| does not need to be very large for the confidence to be high (e.g. if the logit is 5, the confidence is 1/(1+e^-5)=99.3%; see the short calculation below). The claim also relies on the assumption that epsilon is larger than the distance between the farthest points in the dataset, which is extreme since you can find an adversarial sample that can be considered to be simultaneously "close enough" to the two most dissimilar samples in the dataset.
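To spell out the confidence point, a quick calculation (my own illustration with the standard logistic model; I am not using the paper's exact notation): the confidence is p = \sigma(w^T x) = 1/(1 + e^{-w^T x}), so \sigma(5) \approx 0.993, and even forcing p <= 0.6 already requires |w^T x| <= \ln(0.6/0.4) \approx 0.41. In other words, the bound on the logit would have to be extremely tight for the "low confidence" conclusion to carry much meaning.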
In the end, the results are not very convincing or useful for informing deep learning research.
=============
To improve the paper:
- Clarify the motivation and how this would inform adversarial training of highly non-linear classifiers;
- Add related work on the robustness of linear models to sample perturbations, or state that none exists;
- Clarify weaknesses in the claims.
Editorial changes:
Definition 2: "logit" <--- people usually call this a probability estimate; logits are w^T x
Sec. 4.1.2 needs to clarify what the k in x_i^k means -- it is carried over from Proposition 1, which at first glance seems irrelevant
=================
Post rebuttal
Apologies for not interacting earlier due to deadlines.
The rebuttal does not address my major concern (motivation), nor does it discuss its relationship with related work. The math questions are not answered very clearly; I do not see how section 4.1.2 proves p_i^1 has the same scale as p_j^2, except maybe that they are all smaller than some constant for certain samples. And the other discussions in the rebuttal simply confirm my concerns.
In summary, I think this paper is not yet ready for publication. |
iclr_2020_rylDzTEKwr | Hashing-based collaborative filtering learns binary vector representations (hash codes) of users and items, such that recommendations can be computed very efficiently using the Hamming distance, which is simply the sum of differing bits between two hash codes. A problem with hashing-based collaborative filtering using the Hamming distance, is that each bit is equally weighted in the distance computation, but in practice some bits might encode more important properties than other bits, where the importance depends on the user. To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items (using the Boolean AND operation), such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent. This allows a binary user-level importance weighting of each item without the need to store additional weights for each user. We experimentally evaluate our approach against state-of-the-art baselines on 4 datasets, and obtain significant gains of up to 12% in NDCG. We also make available an efficient implementation of self-masking, which experimentally yields <4% runtime overhead compared to the standard Hamming distance. | The author proposes a variational encoder (VAE) based approach to perform hashing-based collaborative filtering, which focuses on the weighting problem of each hash-code bit by applying the "self-masking" technique. This proposed technique modifies the encoded information in hash codes and avoids the additional storage requirements and floating point computation, while preserving the efficiency of bit manipulations thanks to hardware-based acceleration. An end-to-end training is achieved by resorting the well-known discrete gradient estimator, straight-through (ST) estimator.
Strength:
The idea of employing the discrete VAE framework to perform hashing-based collaborative filtering is interesting. From the perspective of applications, I think it is somewhat novel.
The experimental results demonstrate the performance advantages of the self-masking hashing-based collaborative filtering method.
Weakness:
I have concerns over using a VAE to model the rating of each user-item pair here. Essentially, you are building a VAE for every rating value R_{u,i}. Although the parameters are shared, the VAEs have their own priors. Generative models like VAEs need to learn from a lot of data, not just one data point. From the perspective of generating data that looks similar to the training data, I don't think your model has learned anything. Only the reconstruction part is important to your model.
The idea of using a discrete VAE for hashing tasks has been explored before; see "NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing". It employed a very similar idea, although it was applied not to CF but to hashing directly. The novelty of the model is therefore limited.
Since ratings are essentially ordinal data, using a Gaussian distribution to model the rating data may not be appropriate.
The whole paper, and the model in particular, is not presented well: the model is not described in a rigorous way, and some sentences are difficult to follow.
The runtime analysis is not sufficient. In addition to comparing with other methods that use the Hamming distance, we also want to see the advantage of the Hamming-based method over real-valued methods in terms of speed.
Other question:
To realize the self-masking role, the paper proposes to use the function f(z_u, z_i) = g(Hamming_self-mask(z_u, z_i)) (Eq. 12) and demonstrates the effectiveness of this function experimentally. Since the necessity of self-masking is not made very convincing, I doubt whether this particular self-masking function f(z_u, z_i) is indispensable. Maybe, if some other functions that take z_u and z_i as input were used, better results would be observed (a sketch of the kind of comparison I have in mind is given below).
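A minimal numpy sketch of the kind of comparison I mean (this is entirely my own guess at one possible masked variant, purely for illustration -- I am not claiming it matches the paper's Eq. 12 -- the point is only that several cheap bitwise functions of z_u and z_i could be compared under the same training setup):
import numpy as np

rng = np.random.default_rng(0)
z_u = rng.integers(0, 2, size=64).astype(bool)   # hypothetical 64-bit user code
z_i = rng.integers(0, 2, size=64).astype(bool)   # hypothetical 64-bit item code

plain_hamming = int(np.sum(z_u ^ z_i))           # standard Hamming distance
masked = z_u & z_i                               # Boolean AND mask, as described in the abstract
masked_hamming = int(np.sum(z_u ^ masked))       # a guessed "masked" variant (illustration only)
shared_ones = int(np.sum(masked))                # another simple alternative: count of shared 1-bits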
iclr_2020_S1xsG0VYvB | One of the most fundamental organizational principles of the brain is the separation of excitatory (E) and inhibitory (I) neurons. In addition to their opposing effects on post-synaptic neurons, E and I cells tend to differ in their selectivity and connectivity. Although many such differences have been characterized experimentally, it is not clear why they exist in the first place. We studied this question in deep networks equipped with E and I cells. We found that salient distinctions between E and I neurons emerge across various deep convolutional recurrent networks trained to perform standard object classification tasks, and explored the necessary conditions for the networks to develop distinct selectivity and connectivity across cell types. We found that neurons that project to higher-order areas will have greater stimulus selectivity, regardless of whether they are excitatory or not. Sparser connectivity is required for higher selectivity, but only when the recurrent connections are excitatory. These findings demonstrate that the functional and structural differences observed across E and I neurons are not independent, and can be explained using a smaller number of factors.
Deep neural networks have become powerful tools to model the brain (Yamins & DiCarlo, 2016;Kriegeskorte, 2015). They have been used to successfully model various aspects of the sensory (Yamins et al., 2014;Kell et al., 2018), cognitive (Mante et al., 2013;Wang et al., 2018;Yang et al., 2019), and motor system (Sussillo et al., 2015). Deep networks have been particularly effective at capturing neural representations in higher-order visual areas that remain challenging to model with other methods (Yamins et al., 2014;Khaligh-Razavi & Kriegeskorte, 2014).
Demonstrating that these models can mimic the brain in some respects is only a starting point in the process of advancing neuroscience with deep learning. Once some relationship between artificial and biological networks is established, the model can be dissected to better understand how certain biological computations are mechanically implemented (Sussillo & Barak, 2013). Although notoriously difficult to interpret, deep networks are still much more accessible than the brain itself. This is how deep networks can help address the "how" question. On top of that, deep networks can be used in a normative approach to answer the "why" question. Suppose we found that a certain architecture, objective function (dataset), and training algorithm could together lead to a neural network matching some features of the brain. Then we can ask why the brain evolved such features by testing which element of the architecture, dataset, and training is essential for such features to evolve in the artificial networks. For example, Lindsey et al. (2019) showed early layers of a convolutional neural network can recapitulate the center-surround receptive field observed in retina, but only when followed by an information bottleneck similar to the one that exists from retina to cortex. This finding demonstrates how realistic biological constraints can be used to explain the emergence of known properties of the brain.
Despite the many successes of applying deep networks to model the brain, standard architectures deviate from the brain in many important ways. Recently, neural networks that incorporate fundamental structures of the brain are becoming increasingly common (Kar et al., 2019;Miconi et al., 2018;Song et al., 2016). One fundamental structural principle of the brain is the abundance of recurrent connections within any cortical area. Cortical neurons receive a substantial proportion of their inputs from other neurons in the same area (Harris & Shepherd, 2015). Convolutional recurrent networks trained on object classification tasks can perform comparably to purely feedforward networks with similar numbers of parameters, while being able to better explain temporal responses in higher visual areas (Nayebi et al., 2018;Kar et al., 2019).
One of the most fundamental organizational principles of the cortex is the separation of excitatory (E) and inhibitory (I) neurons (Dale, 1935;Eccles et al., 1954). Dale's law states that each neuron expresses a fixed set of neurotransmitters, which results in either excitatory or inhibitory downstream effects. In addition to the difference in their immediate impacts on post-synaptic neurons, E and I neurons differ in several other important ways. There are several times (4-10x) more E neurons than I neurons (Hendry et al., 1987). Neurons that project to other areas, the so-called "principal neurons", are all excitatory in the cortex (Bear et al., 2007). In the mammalian sensory cortex, E neurons are overall more selective to stimuli than I neurons in the same area as extensively reported in mice (Kerlin et al., 2010;Znamenskiy et al., 2018) and to a lesser extent in other species (Wilson et al., 2017). Finally, E neurons are more sparsely connected with each other, with a connection probability of around 10% (Harris & Shepherd, 2015), compared to I neurons, which target almost all neighboring E neurons (Pfeffer et al., 2013).
Although many of these differences across E and I neurons are well-known and routinely incorporated in computational models (Somers et al., 1995;McLaughlin et al., 2000), it remains unclear whether they can emerge in deep neural networks from training. It is also unknown whether these differences are all independent properties, each contributing to the computation, or if some differences are natural results of the others. To answer these questions, we first trained many variants of deep convolutional recurrent neural networks on classical object classification tasks. Across all variants, three structural principles are built in: Dale's law, the abundance of excitatory neurons, and the role of excitatory units as principal neurons. However, other observed properties of biological E-I circuits are not hardwired, and therefore could only evolve under the pressure of performing the task. We found that functional and structural differences across E and I neurons emerged through training. The development of these characteristics allows us to address the "why" question by removing the built-in differences across E and I neurons and monitoring whether specific functional and structural properties still emerge. Our anonymized code is available at https://anonymous.4open.science/repository/d0ae905f-4171-42b0-94b5-abf03d6414aa. | [Update after rebuttal period]
I would like to thank the authors for the detailed response and for addressing the concerns on such a tight schedule.
Overall my two concerns appear to have been right: 1. The object classification task is not really relevant to elicit the observed behavior and 2. Inhibitory neurons are not essential (at least when training with batch norm).
Do the conclusions of the paper about understanding of E/I cells in the brain still hold? I am not really sure. I would like to urge the authors to carefully assess their conclusions again in light of the new evidence and perhaps rephrase abstract, intro and discussion where necessary.
Having said that, I still like the paper and its approach. I think it provides valuable insights to the field (e.g. neuro-inspired architecture design, concerns regarding training objectives, etc.) and important steps in the right direction and could be a valuable contribution even if the original claims are not entirely substantiated.
[Original review]
This paper seeks to understand why excitatory neurons in cortex are more stimulus selective than inhibitory cells and why excitatory cells are more sparsely connected to each other. The authors train a recurrent neural network model with a number of biological constraints on CIFAR-10. These constraints include Dale's law, an unequal number of E vs. I cells and only excitatory connections between layers (areas) of the network. I would summarize the main findings as follows:
- Excitatory principal neurons are more selective and more sparsely connected to each other than local inhibitory neurons, consistent with biology
- Stimulus selectivity and performance depend only on the number of excitatory cells, but neither on the E/I ratio nor the number of I cells
- High selectivity appears to be a general property of principal cells, but does not depend on the sign of local connections, while sparse connectivity is mainly linked to the excitatory local connections
Strengths:
+ Architectural decisions are well motivated from a neuroscientific perspective
+ Provides a hypothesis why certain connectivity and selectivity patterns emerge in the brain by incorporating biological knowledge into neural network models trained on a specific task
+ Well written and clear, logical development of the arguments
Weaknesses:
- Unclear whether classification task is necessary to elicit the authors' observations
- Interneurons seem unnecessary, raising concerns about relevance of results
- Also trains on ImageNet, but only some results are shown; most from CIFAR
Overall I like the paper from the perspective of a neuroscientist, as it provides a kind of normative account of why things in the brain might look the way they do. My enthusiasm is somewhat limited, however, because I am not convinced the classification task is actually what drives the authors' observations. I am willing to increase my score and argue more strongly for the paper if the authors can address this concern, detailed in the following:
A few observations lead me to believe the classification task the networks are trained may not be important to elicit the observed behavior.
1.) The emergent properties in stimulus selectivity show up almost immediately after training starts according to Fig. 5A+B. Performance, on the other hand, takes a few more epochs to pick up according to Fig. 10.
2.) The connectivity preferences that emerge in Figure 5C appear at already 50 epochs where CIFAR-10 performance is at 50%. To put that into perspective, a linear SVM already achieves 40% on this task, and a one-layer multilayer perceptron with only 100 hidden neurons reaches that performance.
3.) The results of Fig. 11 suggest that the number of inhibitory channels is not important. Did the authors try a stripped down version with only excitatory cells?
I suggest the authors train on CIFAR-10 with shuffled labels on the training set [Zhang et al. 2016; https://arxiv.org/abs/1611.03530]. Although performance on the test set would be at chance (no object recognition capabilities), it would be interesting to see whether the selectivity and connectivity properties still emerge.
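For concreteness, the label-shuffling control is only a few lines (a sketch of my own, not the authors' code; the exact attribute name depends on the torchvision version):
import numpy as np
import torchvision

trainset = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
rng = np.random.default_rng(0)
# Randomly permuting the targets destroys the image-label association while keeping the input statistics.
trainset.targets = list(rng.permutation(trainset.targets))   # older torchvision versions use .train_labels
Training the same architectures on this shuffled set and re-plotting Fig. 5A-C would show whether the selectivity/connectivity signatures really require object recognition.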
Finally, (related to 3.) I wonder whether it makes sense to draw conclusions about differences between E and I cells from a model trained on a task where the number of inhibitory cells seems irrelevant. Wouldn't both types of neurons need to be required for the task in order to draw such conclusions?
Minor Comments:
- I sometimes found the nomenclature of the multiple models you tried a bit confusing and hard to follow — especially in parts of the Appendix
- The type of operations (convolution, element-wise) in equation 1, together with the meaning of nonlinearities like \sigma_c are only defined one page after, making that section a bit hard to follow. |
iclr_2020_H1gdF34FvS | In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters. (Video 1 ) | This paper proposes an off-policy reinforcement learning method in which the model parameters have been updated using the regression style loss function. Specifically, this method uses two regression update steps: one update value function and another one update policy using weighted regression. To compare the proposed method with others [main comprasion], 6 MuJoCo tasks are used for continuous control and LunarLander-v2 for discrete space.
-- Even though this paper has done a good job in terms of running different experiments, the selection of some of the benchmarks seems arbitrary. For example, for the discrete action space, this paper uses LunarLander, which is rarely used in other papers, so it is very difficult to draw a conclusion based on these results. The common set of 49 Atari-2600 games should have been used for comparison. The same is true of the experiments in section 5.3, as those tasks are not that well known.
-- The proposed method doesn't outperform previous off-policy methods on the MuJoCo tasks (Table 1). Since the main claim of this paper is a new off-policy method, outperforming the previous off-policy methods is a fair expectation. The current results are not convincing enough.
-- There are significant overlaps between this paper and "Fitted Q-iteration by Advantage Weighted Regression", "Model-Free Preference-Based Reinforcement Learning", and "Reinforcement learning by reward-weighted regression for operational space control", which makes the contribution of this paper very incremental.
-- The authors used only 5 seeds to run the MuJoCo experiments. Given the sensitivity of MuJoCo results to different starting points, the experiments should have been run with at least 10 different seeds.
Questions:
1) Shouldn't there be an importance sampling ratio between \pi and \mu in the equations, starting from eq. 5? (See the identity written out after these questions.)
2) Does the algorithm optimize with respect to $w$ as well (eq. 15)? If yes, why is it not mentioned in Algorithm 1? Also, since $d(s)$ is a uniform distribution (at least this is assumed for the implementation), eqs. 14, 15, and 16 (wherever there is d(s)) can be simplified, e.g. \hat{V} = \sum(w_i V_i); wouldn't it be better to just introduce the simplified versions rather than the current ones? (referring to the only equation above section 3.3)
3) Is this the same code that was used to produce the results reported in this paper? If yes, I didn't see any seed assignment in the code. Also, what is "action_std" in the code?
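For question 1, the identity I would expect to be used (written generically in the notation of the questions above; I am not claiming this is exactly how eq. 5 should read):
E_{a ~ \pi(.|s)}[ f(s,a) ] = E_{a ~ \mu(.|s)}[ (\pi(a|s) / \mu(a|s)) f(s,a) ],
so when the policy objective is written as an expectation over replay data collected under \mu, either the ratio \pi/\mu should appear explicitly or the paper should explain why it can be dropped.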
There are a couple of recent works on merging on-policy with off-policy updates which you might want to cite.
iclr_2020_H1lK5kBKvr | Our network decomposes an input image into shape, lighting, and albedo with four disentangled representations: identity, expression, pose, and lighting, which allows expression transfer between different face images.
Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, which is also a typical ill-posed problem. In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models (3DMM) learned from limited scan data are often adopted to the reconstruction process. However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicity, expressions, poses, and lightings. Recent methods aim to learn a nonlinear parametric model using convolutional neural networks (CNN) to regress the face shape and texture directly. However, the models were only trained on a dataset that is generated from a linear 3DMM. Moreover, the identity and expression representations are entangled in these models, which hurdles many facial editing applications. In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections. A novel center loss is introduced to make sure that different facial images from the same person have the same identity shape and albedo. Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer. Comprehensive experiments demonstrate that our model produces high-quality reconstruction compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions. | Overview:
This paper introduces a model for image-based facial 3D reconstruction. The proposed model is an encoder-decoder architecture that is trained in a semi-supervised way to map images to sets of vectors representing identity (which encodes albedo and geometry), pose, expression and lighting. The encoder is a standard image CNN, whereas the decoders for geometry and albedo rely on spectral graph CNNs (similar to e.g. COMA, Ranjan'18).
The main contribution of the work with respect to the existing methods is the use of additional loss terms that enable semi-supervised training and learning somewhat more disentangled representations. Authors report quantitative results on MICC Florence, with marginal improvements over the baselines (the choice of the baselines is reasonable).
Decision:
The overall architecture is very similar to existing works such as COMA (Ranjan’18) and (Tran’19), including the specific architecture for geometry decoders, and thus the contributions are primarily in the newly added loss terms.
I also find the promise of “disentangled” representation a bit over-stated, as the albedo and base geometry still seem to be encoded in the same “identity” vector (see related question below).
The numerical improvements seem fairly modest with respect to (Tran’19). In addition, there is no numerical ablation study that would demonstrate the actual utility of the main contributions (such as adversarial loss): there are qualitative results but they are not very convincing.
Thus, the final rating “weak reject”.
Additional comments / typos:
* I am not fully following the argument about sharing identity for albedo and shape on p2: "albedo and face shape are decoded ...". Would it not be more beneficial to have a fully decoupled representation between the albedo and the facial geometry? I do not see how albedo information would be useful for encoding face geometry and vice versa.
* Authors claim that one of the main drawbacks of, e.g., (Tran'19) is the fact that it trains on data generated from a linear 3DMM. This is indeed the case, but it does not seem like the authors here fully overcome this issue: they do have additional weakly-supervised data, but they still strongly rely on linear 3DMM supervision (p6, "pairwise shape loss", "adversarial loss"), and do not seem to provide experimental evidence that the model will work without it.
* In particular, the "adversarial training" actually corresponds to learning the distribution of the linear 3DMM. Would it not mean that ultimately the model will be limited to learning only the linear 3DMM distribution? Could you please elaborate on this?
* p3: “allows ene-to-end … training”
* p3: “framework to exact … representations“.
* p8: “evaluation matric”
Update:
Authors did not provide any response, thus I keep my rating. |
iclr_2020_SklgfkSFPH | We investigate PAC-Bayes bounds for deep neural networks through the closely related IB-Lagrangian objective. We first propose to approximate the IBLagrangian through a second order Taylor expansion of the randomized loss around the minimum. For the case of Gaussian priors and posteriors with fixed means and diagonal covariance, we are able to derive a lower bound to this approximation that corresponds to an "invalid" PAC-Bayes prior (a prior that is training set dependent). Our lower bound depends only on the flatness of the minimum, and the distance between the prior and posterior means. Through a number of experiments our lower bound implies easy and and hard cases where one can or cannot prove generalization even through "cheating". Motivated by this result, we see that for the easy cases, a simple baseline closely matches our lower bound and is sufficient to achieve a non-vacuous PAC-Bayes bound. Crucially the baseline prior is centered on the random deep neural network initialization. This suggests that a good prior mean choice is the main innovation in recent non-vacuous bounds with further optimization of the PAC-Bayes bound through SGD having a complementary role. | The authors replace the empirical risk term in a PAC-Bayes bound by its second-order Taylor series approximation, obtaining an approximate (?) PAC-Bayes bound that depends on the Hessian. Note that the bound is likely overoptimistic unless the minimum is quadratic. They purpose to study SGD by centering the posterior at the weights learned by SGD. The posterior variance that minimizes this approximate PAC-Bayes bound can then be found analytically. They also solve for the optimal prior variance (assuming diagonal Gaussian priors/posteriors), producing a hypothetical "best possible bound" (at least under the particular choices of priors/posteriors, and under this approximation of the empirical risk term). The authors evaluate their approximate bound and "best bound possible" empirically on MNIST and CIFAR. This requires computing approximations of the Hessian for small fully connected neural networks trained on MNIST and CIFAR10. There are some nice visualizations (indeed, these may be one of the most interesting contributions.)
The direction taken by the authors is potentially interesting. However, there are a few issues that would have to be addressed carefully for me to recommend acceptance. First, the comparison to (some very) related work is insufficient, and so the actual novelty is misrepresented (see detailed comments below). Further, the paper is full of questionable vague claims and miscites/attributes other work. At the moment, I think the paper is below the acceptance threshold: the authors need to read and understand (!) related work, and expand their theoretical and/or empirical results to produce a contribution of sufficient novelty/impact.
DETAILED FEEDBACK.
I believe the authors missed some related work by Tsuzuki, Sato and Sugiyama (2019), where a PAC-Bayes bound was derived in terms of the Hessian, via a second-order approximation. How do the results presented in this submission relate to the approach of Tsuzuki et al.?
When the posterior, Q, is a Gaussian (or any other symmetric distribution), \eta^T H \eta is the so-called Skilling-Hutchinson trace estimator. Thus E(\eta^T H \eta) is the Trace(H) scaled by the variance of \eta. The authors seem to have completely missed this connection, which simplifies the final expression considerably.
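A quick numerical check of this identity (my own sketch; H below is just an arbitrary symmetric matrix standing in for the Hessian, and the posterior noise is isotropic Gaussian):
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 50, 0.1
A = rng.standard_normal((d, d))
H = (A + A.T) / 2                              # arbitrary symmetric stand-in for the Hessian
eta = rng.normal(0.0, np.sqrt(sigma2), size=(100000, d))
quad = np.einsum("nd,de,ne->n", eta, H, eta)   # eta^T H eta for each posterior sample
print(quad.mean(), sigma2 * np.trace(H))       # the two values agree up to Monte Carlo error
The same holds coordinate-wise for a diagonal covariance, so the expectation reduces to a weighted trace rather than something requiring the full Hessian.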
Why is the assumption that the higher order terms are negligible reasonable? Citation or experiments required.
Regarding the off-diagonal Hessian approximation: how does the proposed layer-wise approximation relate to k-FAC (Martens and Grosse 2015)?
IB Lagrangian: I am not sure why the authors state the result in Thm 4.2 as a lower bound on the IB Lagrangian. What’s the significance of having a lower bound on IB Lagrangian?
Other comments:
Introduction: “At the same time neither the non-convex optimization problem solved in .. nor the compression schemes employed in … are guaranteed to converge to a global minimum.”. This is true but it is really not clear what the point being made is. Essentially, so what? Note that PAC-Bayes bounds hold for all posteriors, even ones not centered at the global minimum (of any objective). The claims made in the rest of the paragraph are also questionable and their purposes are equally unclear. I would be grateful if the authors could clarify.
First sentence of Section 3.1: “As the analytical solution for the KL term in 1 obviously underestimates the noise robustness of the deep neural network around the minimum...”. I have no idea what is being claimed here. The statement needs to be made much less vague. Please explain.
Section 4: “..while we will be minimizing an upper bound on our objective we will be referring with a slight abuse of terminology to our results as a lower bound.”. I would appreciate if the authors could clarify what they mean here.
Section 4.1 beginning: “We make the following model assumptions...”. Choosing a Gaussian prior and posterior is not an assumption. It's simply a choice. The PAC-Bayes bound is valid for any choices of Gibbs classifiers. On the other hand, it is an assumption that such distributions will yield "tight" bounds, related to the work of Alquier et al.
Section 4.1 “In practice we perform a grid search over the parameters..”. The authors should mention that such a search should be accounted for via a union bound (or otherwise). The "cost" of such a union bound should be discussed.
The empirical risk of Q is computed using 5 MCMC samples. This seems like a very low number, as it would not even give you one decimal point of accuracy with reasonable confidence! The authors should either use more samples, or account for the error in the upper bound using a confidence interval derived from a Chernoff bound.
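To make the "one decimal point" remark concrete, a quick Hoeffding-type calculation (one instance of the Chernoff bound mentioned above; I am assuming the per-sample loss is bounded in [0, 1]): with n = 5 i.i.d. samples and confidence 1 - \delta = 0.95, the deviation bound is \sqrt{ \ln(2/\delta) / (2n) } = \sqrt{ \ln(40) / 10 } \approx 0.61, i.e. the Monte Carlo error bar is wider than half the range of the loss. So either many more samples or an explicit confidence term in the reported bound seems necessary.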
Section 4.2: “The concept of a valid prior has been formalized under the differential privacy setting...”. I am not sure what the authors mean by that.
Section 5: “There is ambiguity about the size of the Hessians that can be computed exactly.” What kind of ambiguity?
Same paragraph in Section 5 discusses why there are few articles on Hessian computation. The authors claim that “the main problem seems to be that the relevant computations are not well supported...”. This is followed by another comment that is supposed to contrast the previous claim, saying that storing the Hessian is infeasible due to memory requirements. I am not sure how this claim about memory requirements shows a contrast with the claim on computation not being supported.
First sentence in Section 5.1: I believe this is only true under some conditions.
Section 5.1: The authors should explain why they add a damping term, alpha, to the Hessian, and how alpha affects the results.
***
Additional citation issues:
The connections between variational inference, PAC-Bayes and IB Lagrangian have been pointed out in previous work (e.g. Germain, Bach, Lacoste, Lacoste-Julien (2016); Achille and Soatto 2017).
In the introduction, the authors say “...have been motivated simply by empirical correlations with generalization error; an argument which has been criticized …” (followed by a few citations). Note, that this was first criticized in Dziugaite and Roy (2017).
“Both objectives in … are however difficult to optimize for anything but small scale experiments.”. It seems peculiar to highlight this, since the approach that the authors are presenting is actually more computationally demanding.
Citations for MNIST and CIFAR10 are missing.
***
Minor:
Theorem 3.1 “For any data distribution over..”, I think it was meant to be \mathcal{X} \times (and not \in )
Theorem 4.2: “For our choice of Gaussian prior and posterior, the following is a lower bound on the IB-Lagrangian under any Gaussian prior covariance”. I assume only the mean of the Gaussian prior is fixed.
Citations are misplaced (breaking the sentences, unclear when the paper of the authors are cited).
There are many (!) missing commas, which makes some sentences hard to follow.
***
Positive feedback: I thought the visualizations in Figure 2 and 3 were quite nice. |
iclr_2020_BylB4kBtwB | Recent advances have made it possible to create deep complex-valued neural networks. Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems. Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain. As a case study, we perform audio source separation in the Fourier domain. Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation. We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram. Using the Wall Street Journal Dataset, we compare our phaseaware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss. When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters. Our proposed mechanism improves significantly deep complex-valued networks' performance and we demonstrate the usefulness of its regularizing effect. | This paper proposes a new method for source separation, by using deep learning UNets, complex-valued representations and the Fourier domain. Concretely, their contribution is : i) a complex-valued convolutional version of the Feature-Wise Linear Modulation, able to optimise the parameters needed to create multiple separated candidates for each of signal sources that are then combined using signal averaging; ii) the design of a loss that takes into account magnitude and phase while being scale and time invariant. It was then tested and compared with real-valued versions, and also some state-of-the-art methods.
Overall, I think this paper is of good quality and proposes an interesting method for this crucial task of source separation. However, I found the paper too dense and difficult to read (even if well written), and it looks like a re-submission of a journal paper of more than 8 pages. I would suggest the authors shrink the paper so it *really* fits into the 8 pages (without important figures or important implementation details relegated to the appendices), maybe at the cost of leaving some parts (such as older related work) out of the paper. The experiments are important here, and it is too bad that the comparison with the state of the art comes only in the last paragraph, while the results do not seem to show any improvement compared to the other methods. The computational time might be very important here, as the claim is that the FFT reduces computation time, but I did not have time to go through all the appendices.
Positive aspects:
- The work is well documented and motivated, and I found that the reasoning leading to the method is of good quality.
- Concretely, I found it interesting that FiLM, originally designed for another application, is used here for minimizing the SNR of the signal sources (a sketch of what such a complex-valued FiLM modulation might look like is given just after this list). The motivation/proof is quite clear too.
- Equally, the motivation for the design of the new loss is clear and interesting.
- I also found the experiments important; they show in the same table the difference between the method without the complex-valued part and the method with different parameter values.
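To make the FiLM point above concrete: the operation being adapted is, at its core, a feature-wise affine modulation whose parameters are predicted from a conditioning input. Below is a minimal sketch of a complex-valued variant; this is my own illustration of the general idea, not the authors' implementation, and the tensor names and shapes are assumptions.

```python
import torch

def complex_film(h_re, h_im, g_re, g_im, b_re, b_im):
    """FiLM-style modulation of a complex feature map h = h_re + i*h_im.

    g (gamma) and b (beta) are complex, feature-wise modulation parameters that
    would be predicted by a small conditioning network (not shown). The complex
    multiplication g * h is written out explicitly on the real/imaginary parts.
    """
    out_re = g_re * h_re - g_im * h_im + b_re
    out_im = g_re * h_im + g_im * h_re + b_im
    return out_re, out_im

# toy usage on random tensors shaped (batch, channels, time, freq)
h_re, h_im = torch.randn(2, 16, 10, 10), torch.randn(2, 16, 10, 10)
g_re, g_im = torch.randn(2, 16, 1, 1), torch.randn(2, 16, 1, 1)
b_re, b_im = torch.randn(2, 16, 1, 1), torch.randn(2, 16, 1, 1)
y_re, y_im = complex_film(h_re, h_im, g_re, g_im, b_re, b_im)
```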
Questions and remarks:
- I have to note that I am not an expert on source separation and complex-valued deep learning. Still, I had difficulties in understanding the structure of the method, even if the different parts were clearly explained. Figure 1 is very useful, but I found it not clear enough and too small. I went to find some information in the appendices, but there is too much crucial information there and I did not have time to go through all of it.
- The use of the U-net architecture is not explained (just some citations are given). What is supposed to be the output of it?
- When you say 'to be more rigorous, we can assume in the case of speech separation that, for each speaker, there exists an impulse response such that when it is convolved with the clean speech of the speaker, it allows to reconstruct the mix': why can we be sure that this is always possible, and why is it more rigorous?
- Why are the additive noises epsilon_i supposed to have the same E(|epsilon_i|^2)? Even if they are uncorrelated, what is the hypothesis behind that?
- In the CSimLoss, why (i) is the real part negative and the imaginary part positive; (ii) is the imaginary part squared?
- Have you tested with a higher lambda_imag (as the largest value tried is currently the best)?
- It appears from the end of the paper that the method still does not achieve better results than the state of the art. I agree with the authors that this might not be the scope of the paper, but then what is? If it is computation time, it is not shown in the paper. If it is just a methodology, what would be required in the future to beat the best method?
- In the results, Table 1, in the last 4 lines: it looks from the 1st and 2nd lines that the new CSimLoss is not very different from the L2 loss (9.88 compared to 9.87). The best result, in the 4th line, cannot be compared to the 3rd line as both the loss and the number of transforms are different. I then found those values in the appendices, but it would be best to vary fewer parameters in the main paper and show results that can be easily compared.
- What is the importance of the first paragraph in 2.1? I was not aware of holographic reduced representations, but I do not understand them any better now, and I do not see why 15 lines are spent explaining them.
Small remarks:
- 'deep complex valued models have *just* started to gain momentum'... with citations beginning in 2014, I would not say 'just'.
- 'in the frequncy domain is then, ...' --> frequency + no comma
- Figure 2 is in the appendix, but the text does not say so; I was lost. This figure should not be in the appendix, as the appendix should not contain key elements, only details that are not essential for understanding the paper. |
iclr_2020_ByxT7TNFvH | Self-supervised learning is showing great promise for monocular depth estimation, using geometry as the only source of supervision. Depth networks are indeed capable of learning representations that relate visual appearance to 3D properties by implicitly leveraging category-level patterns. In this work we investigate how to leverage more directly this semantic structure to guide geometric representation learning, while remaining in the self-supervised regime. Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions. Furthermore, we propose a two-stage training process to overcome a common semantic bias on dynamic objects via resampling. Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic categories. | This work proposes to leverage a pre-trained semantic segmentation network to learn semantically adaptive filters for self-supervised monocular depth estimation. Additionally, a simple two-stage training heuristic is proposed to improve depth estimation performance for dynamic objects that move in a way that induces small apparent motion and thus are projected to infinite depth values when used in an SfM-based supervision framework. Experimental results are shown on the KITTI benchmark, where the approach improves upon the state-of-the-art.
Overview:
+ Good results
+ Doesn't require semantic segmentation ground truth in the monodepth training set
- Not clear if semantic segmentation is needed
- Specific to street scenes
- Experiments only on KITTI
The qualitative results look great and the experiments show that semantic guidance improves quantitative performance by a non-trivial factor. The qualitative results suggest that the results produced with semantic guidance are sharper and more detailed. However, it is not clear that using features from a pre-trained semantic segmentation network is necessary. The proposed technical approach is to use the pixel-adaptive convolutions by Su et al. to learn content-adaptive filters that are conditioned on the features of the pre-trained semantic segmentation network. These filters could in principle be directly learned from the input images, without needing to first train a semantic segmentation network. The original work by Su et al. achieved higher detail compared to their baseline by just training the guidance network jointly. Alternatively, the guidance network could in principle be pre-trained for any other task. The main advantage of the proposed scheme is that the guidance path doesn't need to be trained together with the depth network. On the other hand, unless shown otherwise, we have to assume that the network needs to be pre-trained on some data that is sufficiently close to the intended application domain. This would limit the approach to situations where a reasonable pre-trained semantic segmentation network is available.
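For context, the pixel-adaptive convolution of Su et al. referred to above has (roughly, in my notation) the form

v'_i = \sum_{j \in \Omega(i)} K(f_i, f_j) \, W[p_i - p_j] \, v_j + b,

where v are the input features, W is an ordinary spatially invariant kernel over the window \Omega(i) around pixel i, and K is a fixed kernel (e.g. a Gaussian) on the guidance features f. The question raised here is only about where f comes from: a fixed pretrained segmentation network (this paper) versus a guidance branch that is trained jointly or pre-trained on some other task.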
The proposed heuristic to filter some dynamic objects is very specific to street scenes and to some degree even to the KITTI dataset. It requires a dominant ground plane and is only able to detect a small subset of dynamic motion (e.g. apparent motion close to zero and objects below the horizon). It is also not clear what the actual impact of this procedure is. Section 5.4.2 mentions that Abs. Rel decreases from 0.121 to 0.119, but it is not clear what this should be compared to, as there is no baseline in any of the other tables with an Abs. Rel of 0.121. Additionally, while the authors call this a minor decrease, the order of magnitude is comparable to the decrease in error that this method shows over the state-of-the-art (which the authors call statistically significant) and also over the baselines (c.f. Table 2). Can the authors clarify this?
Related to being specific to street scenes: The paper shows experiments only on the KITTI dataset. The apparent requirement to have a reasonable semantic segmentation model available makes it important to also evaluate in other settings (for example on an indoor dataset like NYU) to show that the approach works beyond street scenes (which is, in practice, one of the less interesting settings for monocular depth estimation, since it is rather easy to just equip cars with additional cameras to solve the depth estimation problem).
Need for a reasonable segmentation model: It is not clear to what extent the quality of the segmentation network impacts the quality of the depth estimation. What about domain shift, where the segmentation model doesn't do so well? Even if the segmentation result is not used directly, the features will still shift. How much would depth performance suffer?
Summary:
While the results look good on a single dataset, I have doubts both about the generality of the proposed approach as well as the need for the specific technical contribution.
=== Post rebuttal update ===
The authors have addressed many of my initial concerns and provided valuable additional experimental evaluations. While I'd like to upgrade my recommendation to weak accept, I strongly encourage the authors to provide additional experiments on different datasets (at least NYU). |
iclr_2020_r1lZ7AEKvB | The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC 2 , a well-studied fragment of first order logic. FOC 2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC 2 classifiers, and provide a syntactic characterization of the largest subclass of FOC 2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC 2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC 2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training. | The paper utilizes recent insights into the relationship between the Weisfeiler-Lehman (WL) test for checking graph isomorphism and Graph Neural Networks (GNNs) in order to characterize the class of node classifiers that can be captured by a specific GNN architecture, called aggregate-combine GNN (AC-GNN).
The primary contribution of this work is the identification of the logical classifiers that can be represented within an AC-GNN, a fragment of first order logic called graded modal logic, as well as an extension of AC-GNNs, called ACR-GNN, with the ability to capture a strictly more expressive fragment of first order logic. Both of these results are supported by formal proofs and derivations, adding not only theoretical value to the presented work, but also intuitive insights into the reasons for the AC-GNN limitations. In addition, the presented experiments demonstrate the practical implications of the results, with the exception of some datasets where no significant difference in performance between AC-GNNs and ACR-GNNs was found.
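To make the architectural distinction concrete, the two node-update rules under discussion can be written schematically (in my notation) as

x_v^{(i)} = COM( x_v^{(i-1)}, AGG({ x_u^{(i-1)} : u \in N(v) }) )   (AC-GNN)

x_v^{(i)} = COM( x_v^{(i-1)}, AGG({ x_u^{(i-1)} : u \in N(v) }), READ({ x_u^{(i-1)} : u \in V }) )   (ACR-GNN)

i.e. the ACR variant additionally feeds a global readout over all nodes into every node update, which is what the paper shows is enough to capture all of FOC_2.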
I believe that the contributions of this paper are significant. On the one hand, studying theoretical properties of GNNs facilitates their transparency and highlights the range of applications they can be used in. On the other hand, although the GNN variant introduced by the author is a special case of an existing class of GNNs, the motivation behind its introduction is different and follows naturally from the discussion within the paper.
The fact that no actual difference in performance between AC-GNNs and ACR-GNNs was noticed in the only non-synthetic dataset used in the experiment should prompt the author to run experiments with more real life datasets, in order to empirically verify the results, but this is a minor point. |
iclr_2020_H1l-02VKPB | Pooling operations have shown to be effective on various tasks in computer vision and natural language processing. One challenge of performing pooling operations on graph data is the lack of locality that is not well-defined on graphs. Previous studies used global ranking methods to sample some of the important nodes, but most of them are not able to incorporate graph topology information in computing ranking scores. In this work, we propose the topology-aware pooling (TAP) layer that uses attention operators to generate ranking scores for each node by attending each node to its neighboring nodes. The ranking scores are generated locally while the selection is performed globally, which enables the pooling operation to consider topology information. To encourage better graph connectivity in the sampled graph, we propose to add a graph connectivity term to the computation of ranking scores in the TAP layer. Based on our TAP layer, we develop a network on graph data, known as the topology-aware pooling network. Experimental results on graph classification tasks demonstrate that our methods achieve consistently better performance than previous models. | This paper presented a new pooling method for learning graph-level embeddings. The key idea is to use the initial node attributes to compute all-pair attention scores for each node pair and then use these attention scores to formulate a new graph adjacency matrix beyond the original raw graph adjacency matrix. As the result, each node can average these attention score edges to compute the overall importance. Based on these scores, the method chooses top-k nodes to perform graph coarsening operation. In addition, a graph connectivity term is proposed to address the problems of isolated nodes. Experiments are performed to validate the effectiveness of the proposed method.
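To make the summary above concrete, the scoring-and-selection step as I understand it boils down to something like the following (my own numpy simplification, not the authors' code; the graph connectivity term and any learned attention parameters are omitted):

```python
import numpy as np

def tap_style_pooling(X, A, k):
    """X: (N, d) node features, A: (N, N) {0,1} adjacency, k: number of nodes kept.

    Pairwise attention-like scores are restricted to existing edges, averaged per
    node (local ranking), and the top-k nodes are then chosen globally.
    """
    sim = X @ X.T                                  # all-pair attention logits
    deg = np.maximum(A.sum(axis=1), 1.0)
    scores = (sim * A).sum(axis=1) / deg           # average score over each node's neighbors
    keep = np.argsort(-scores)[:k]                 # global top-k selection
    return scores, keep
```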
Although I found this paper generally well written and well motivated, there are several concerns about the novelty of the paper, the computational expense, and the experimental results, as listed below:
1) The idea of using attention scores is simple yet seems effective. However, using attention scores to identify the importance of nodes was first presented in GAT [Veličković et al., 2019] and has been further studied by a lot of subsequent work.
2) In this paper, the authors consider using raw node features to obtain attention scores between each node pair. However, the effectiveness of this will heavily depend on how informative these original node features are. In the extreme case where the original node features are completely random, the proposed method will definitely fail since there are no meaningful similarity scores to learn from. Surprisingly, I did not see any discussion or ablation study on this aspect. Also, why use only the initial node attributes? What about using the computed node embeddings to compute the similarity scores?
3) The pair-wise similarity scores (the similarity or attention matrix) are very expensive to compute and store, which easily leads to quadratic complexity in the number of nodes, O(N^2), for both computation and memory. This makes it hard to scale to really large graphs. In this sense, the proposed scheme is not promising for real applications.
4) The experiments are really problematic. There is no description of how the graph data are split into train/val/test. Following the traditional graph kernel settings, it should be 9/1/1. Also, there is no information about how many runs are performed on each dataset. I noticed that the authors basically collected all performance results from published related works except for several baselines. Different data splits and numbers of runs could result in quite different performance reports, leading to apples-and-oranges comparisons.
5) The experimental results are also problematic. For example, the WL baseline has extremely large variance on all of the datasets, which is completely different from the numbers reported in other literature [Zhang et al., NeurIPS'18] and from my personal experience. In general, the variance of WL could be about 1/10 of what is reported in this paper. Also, there is no clear explanation of why the proposed method achieves quite a large margin over G-UNet and GIN as shown in Table 1.
RetGK: Graph Kernels based on Return Probabilities of Random Walks, [Zhang et al, neurIPS'18]
6) In Table 3, the authors try to show the performance difference between different pooling methods. I was quite surprised by the results after looking back at Table 1. For instance, Net_top-k (G-Unet) achieves 71.5 on PTC, while GUnet achieves 64.7 on PTC in Table 2. It is really hard to believe that this almost 7-point difference is just due to slightly different model choices in parts other than the main contribution, the new pooling component. I feel the whole set of experimental results is not convincing, or at least not well explained. Of course, given the very large variances shown in Tables 1 and 2, Tables 3 and 4 are hard to interpret without seeing the variances for each ablation study. |
iclr_2020_r1eyceSYPr | The contrastive divergence algorithm is a popular approach to training energybased latent variable models, which has been widely used in many machine learning models such as the restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances on unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models. | The paper proposes an algorithmic improvement that significantly simplifies training of energy-based models, such as the Restricted Boltzmann Machine. The key issue in training such models is computing the gradient of the log partition function, which can be framed as computing the expected value of f(x) = dE(x; theta) / d theta over the model distribution p(x). The canonical algorithm for this problem is Contrastive Divergence which approximates x ~ p(x) with k steps of Gibbs sampling, resulting in biased gradients. In this paper, the authors apply the recently introduced unbiased MCMC framework of Jacob et al. to completely remove the bias. The key idea is to (1) rewrite the expectation as a limit of a telescopic sum: E f(x_0) + \sum_t E f(x_t) - E f(x_{t-1}); (2) run two coupled MCMC chains, one for the “positive” part of the telescopic sum and one for the “negative” part until they converge. After convergence, all remaining terms of the sum are zero and we can stop iterating. However, the number of time steps until convergence is now random.
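For readers unfamiliar with the Jacob et al. construction, the resulting estimator can be sketched as follows (a generic pseudo-implementation of the simplest, burn-in-free case; `kernel` and `coupled_kernel` are assumed interfaces, and the RBM-specific coupling used in the paper is not shown):

```python
def unbiased_mcmc_expectation(f, x0, y0, kernel, coupled_kernel, max_iter=10**6):
    """Unbiased estimate of E_p[f(X)] via the telescoping sum and coupled chains.

    x0, y0              : independent draws from the same initial distribution
    kernel(x)           : one ordinary MCMC transition targeting p
    coupled_kernel(x, y): one joint transition whose marginals are `kernel` and
                          under which the two chains meet exactly in finite time
    States are assumed comparable with `!=` (e.g. tuples).
    """
    est = f(x0)
    x, y = kernel(x0), y0          # the x-chain leads the y-chain by one step
    t = 1
    while x != y and t < max_iter:
        est = est + (f(x) - f(y))  # bias-correction term f(X_t) - f(Y_{t-1})
        x, y = coupled_kernel(x, y)
        t += 1
    return est
```

Once the chains meet, all remaining telescoping terms are zero, which is why the (random) number of steps is finite whenever the coupling succeeds.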
Other contributions of the paper are:
1. Proof that Bernoulli RBMs and other models satisfying certain conditions have finite expected number of steps and finite variance of the unbiased gradient estimator.
2. A shared random variables method for the coupled Gibbs chains that should result in faster convergence of the chains.
3. Verification of the proposed method on two synthetic datasets and a subset of MNIST, demonstrating more stable training compared to contrastive divergence and persistent contrastive divergence.
I am very excited about this paper and strongly support its acceptance, since the proposed method should revitalize research in energy-based models. While I find the experiments to be somewhat lacking, this is sufficiently offset by the theoretical contributions of the paper.
Pros
1. The paper reads well and introduces all the necessary preliminaries to understand the method. This is important, since I expect many readers to be unfamiliar with the technique.
2. The proposed method solves an important problem which, as far as I understand, has been the roadblock in large-scale training of RBMs and related models. It is also elegant and fairly straightforward to implement.
3. The proof of finite computation time and variance is very nice to have. This is because in some cases removing the bias leads to infinite variance, e.g. a parallel submission on SUMO (https://openreview.net/forum?id=SylkYeHtwr).
Cons
1. I don't think Corollary 1 (convergence of gradient descent to the global optimum) is true for RBMs, as stated on Page 6. This is because the log-likelihood of an RBM, or indeed of any latent-variable model with permutation-invariant latents, is non-convex. I would suggest removing this corollary and simplifying Algorithm 2 to be regular SGD, as used in the experiments.
2. There is no experimental comparison of Algorithm 1 (the general version) and Algorithm 3 (the specialized RBM version). It seems intuitive that the specialized version should have lower computation time, but this must be confirmed.
3. The experimental section may be significantly improved.
* It is unclear what value of k (number of initial Gibbs steps) from Algorithm 2 is used.
* The experiments on just the “0” digits of MNIST seem a bit simplistic for the year 2019. It is also not clear what binarization protocol is used.
* It would be very helpful to provide estimates of the gradient (not log-likelihood) variance of each method to better understand the trade-off between the bias and the variance.
* I would also like to see the wall-clock time comparison of the methods.
Minor comments
* Page 1. Of this kind -> of this class. The data distribution p_v (v; theta) -> The model distribution
* Page 2. Property -> properties. CD-\tau -- I don’t think you can correctly refer to your method in this way, since it has at least double the computation time of CD for the same number of iterations.
* Page 3. Provides -> provide. Likelihood gradient -> log-likelihood gradient
* Algorithm 1 is an infinite loop with no break clause. It would be good to add a break statement after line 5. This would also simplify the discussion of the method.
* Page 7. I wouldn’t call the fact that CD doesn’t converge on the BAS dataset remarkable, given that it’s been reported by Fischer & Igel 2014.
* Page 9. The last paragraph stating that the proposed method is not a replacement for CD is confusing. Can you add a short experiment to demonstrate that this combination makes sense? |
iclr_2020_rkeeoeHYvr | While there has been great interest on generating imperceptible adversarial examples in continuous data domain (e.g. image and audio) to explore the model vulnerabilities, generating adversarial text in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework AdvCodec for adversarial text generation which addresses the challenge of discrete input space and is easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree based autoencoder to encode discrete text data into continuous vector space, upon which we optimize the adversarial perturbation. A tree based decoder is then applied to ensure the grammar correctness of the generated text. It also enables the flexibility of making manipulations on different levels of text, such as sentence (AdvCodec(Sent)) and word (AdvCodec(Word)) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary targeted attack. To demonstrate the effectiveness of the proposed method, we consider two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results and human studies show that AdvCodec generated adversarial text can successfully attack the neural models without misleading the human. In particular, our attack causes a BERT-based sentiment classifier accuracy to drop from 0.703 to 0.006, and a BERT-based QA model's F1 score to drop from 88.62 to 33.21 (with best targeted attack F1 score as 46.54). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models. | Motivated by recent development of attack/defense methods addressing the vulnerability of deep CNN classifiers for images, this paper proposes an attack framework for adversarial text generation, in which an autoencoder is employed to map discrete text to a high-dimensional continuous latent space, standard iterative optimization based attack method is performed in the continuous latent space to generate adversarial latent embeddings, and a decoder generates adversarial text from the adversarial embeddings. Different generation strategies of perturbing latent embeddings at sentence level or masked word level are both explored. Adversarial text generation can take either a form of appending an adversarial sentence or a form of scattering adversarial words into different specified positions. Experiments on both sentiment classification and question answering show that the proposed attack framework outperforms some baselines. Human evaluations are also conducted.
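Mechanically, the attack as summarized above is ordinary continuous optimization inside the autoencoder's latent space. A heavily simplified sketch is below; this is my own illustration, the encoder/decoder/victim interfaces and the tree-structured details are assumptions, and in practice the decode step must be kept differentiable during the attack (e.g. by feeding soft token embeddings).

```python
import torch
import torch.nn.functional as F

def latent_space_attack(encoder, decoder, victim, text, target, steps=50, lr=0.1):
    """Optimize a perturbation delta on the latent code z so that the decoded text
    pushes the victim model toward the target label. Illustrative only."""
    z = encoder(text).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = decoder(z + delta)              # must stay differentiable here
        loss = F.cross_entropy(victim(adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z + delta)
```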
Pros:
This paper is well-written overall. Extensive experiments are performed.
Many human studies comparing different adversarial text generation strategies and evaluating adversarial text for sentiment classification/question answering are conducted.
Cons:
1) Although the studied problem in this paper is interesting, the technical innovation is very limited. All the techniques are standard or known.
2) There are two major issues: the lack of a rigorous metric of human unnoticeability, and the lack of justification for the advantage of the tree-based autoencoder. I think the first issue is a major problem that renders all the claims in this paper questionable. The metrics used to define adversarial images for deep CNN classifiers are indeed valid and produce unnoticeable images for human observers. But in this paper, the adversarial attack is performed in the latent embedding space, and there is no explicit constraint enforced on the output text. It's unconvincing that this approach will generate adversarial text that is unnoticeable to humans. Therefore, the studied problem in this paper has a completely different nature from the one for CNN image classifiers, and it is hard to convince readers that the proposed framework generates adversarial text that looks legitimate to human readers.
3) It is unclear why a tree-structured LSTM should be chosen instead of a standard LSTM/GRU in this framework for adversarial text generation. If this architecture is preferred, sufficient ablation studies should be conducted.
4) In section 3.3, the description of adversarial attacks at the word level is unclear. More detailed loss functions and algorithms, along with equations, should be provided.
5) In section 5.2, it is unclear whether the majority answers on the adversarial text will match the respective majority answers on the original text. Moreover, it seems that there is a large performance drop from original to adversarial text. Therefore, it is valid to question whether the proposed framework can generate adversarial text that is legitimate to human readers.
6) It’s better to include many examples of generated adversarial text in the appendix.
7) Missing training details: It is unclear how the model architectures are chosen, and learning rate, optimizer, training epochs etc. are also missing. All these training details should be included in the appendix.
8) Minor: Figure 1: "Append an initial sentence...", section 3: "map discrete text into a high dimensional...", section 3.2.2: "Different from attacking sentiment analysis..." ....
In summary, the research direction of adversarial text generation studied in this paper is interesting and promising. However, some technical details are questionable, and the produced results without rigorous metrics seem to be unconvincing. |
iclr_2020_HJghoa4YDB | We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRP) with nonlinear functions trained with the Temporal-Difference (TD) learning algorithm. We consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises naturally, implicit in the initialization of their parameters. Both in the under-and over-parametrized frameworks, we prove exponential convergence to local, respectively global minimizers of the above algorithm in the lazy training regime. We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks. | The paper discusses the policy evaluation problem using temporal-difference (TD) learning with nonlinear function approximation. The authors show that in the “lazy training” regime both over- and under-parametrized approximators converge exponentially fast, the former to the global minimum of the projected TD error and the latter to a local minimizer of the same error surface. Simply put, the lazy regime refers to the approximator behaving as if it had been linearized around its initialization point. This can happen if the approximation is rescaled, but can also occur as a side-effect of its initialization. The authors present simple numerical examples illustrating their claims.
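For readers less familiar with the lazy regime, the intuition can be summarized by the first-order expansion (standard in this literature, written here in my own notation)

f_\theta(x) \approx f_{\theta_0}(x) + \nabla_\theta f_{\theta_0}(x)^\top (\theta - \theta_0).

The scaling studied in the paper multiplies the model by a large factor \alpha, so an arbitrarily small displacement of \theta from \theta_0 already changes the scaled output; training therefore never leaves the region where this linearization is accurate, which is why the TD dynamics behave essentially like those of a linear model.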
Although I did not carefully check the math, this seems like a solid contribution on the technical side. My main concern about the paper is that it falls short in providing intuition and contextualizing its technical content. Regarding the presentation, I believe it is possible to have less dry prose without sacrificing mathematical rigor. If some of the technical material (e.g. auxiliary results, discussion of proof techniques) is moved to the appendix, the additional space could be used to discuss the implications of the theoretical results in more accessible terms.
For example, a subject that ought to be discussed more clearly is the nature of the approximation induced by the lazy training regime. As far as I understand, this regime can be thought of as a sort of regularization that severely limits the capacity of the approximator. Although the authors mention in the conclusion that “...convergence of lazy models may come at the expense of their expressivity”, after reading the paper I do not have a clear sense of how expressive such models actually are. In their experiments, Chizat et al. (2018) observed that the performance of commonly used neural networks degrades when trained in the lazy regime --to a point that they consider it unlikely that the successes of deep learning can be credited to this regime of training. It seems to me that this subject should be more explicitly discussed in a paper that sets out to provide theoretical support for deep reinforcement learning.
Still regarding the behavior of lazy approximators, my intuitive understanding is that they work as a linear model using random features. If this interpretation is correct, this makes the theoretical results a bit less surprising. They are still interesting, though, for they can be seen as relying on a “smoother” version of the linearity assumption often made in the related literature. Maybe this is also something worth discussing? Still on this subject, it seems to me that one potential disadvantage of lazy models with respect to their linear counterparts is that it is less clear how to enforce the lazy regime in practice. In Section 4.2 the authors discuss how this can naturally happen as a side-effect of the initialization, but it is unclear how applicable the particular strategy used to illustrate this phenomenon, with the “doubling trick”, is in practice. This is another example of a less technical discussion that would make the paper a stronger contribution. |
iclr_2020_H1e-X64FDB | We present fast implementations of linear interpolation operators for piecewise linear functions and multi-dimensional look-up tables. We use a compiler-based solution using MLIR to accelerate this family of workloads. On two-layer and deep lattice networks, we show these strategies run 5 − 10× faster on a standard CPU compared to a C++ interpreter implementation that uses prior techniques, producing runtimes that are 1000s of times faster than TensorFlow 2.0 for single evaluations. | This paper proposes several low-level code optimizations aimed at speeding up evaluation time for linearly interpolated look-up tables, a method often used in situations where fast evaluation times are required. The paper is very practically oriented in the sense that it does not aim for improving asymptotic speed-ups, but rather real speed-ups expressible in terms of CPU cycles that are achievable by exploiting compiler optimizations such as branch prediction and loop unrolling. The focus is on unbatched computations which can arise in many real-time situations. The proposed implementation techniques are as follows:
- A method for fast index mapping, which first transforms inputs using a monotonic function such as log_2 or 2^x, and then applies a branch-free linear search over a uniformly-spaced auxiliary LUT (see the sketch after this list)
- A memory-efficient bit-packing technique to store both integer and floating point representations together.
- Speed-up for multilinear interpolation using latency hiding
- Branch-free implementation of sorting permutation, needed for simplex interpolation
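As an illustration of what the first and last bullets mean by "branch-free" (the sketch referred to above), a scalar piecewise-linear evaluation can be written without data-dependent branches roughly as follows. This is my own Python rendering for readability; the paper's implementation is compiled code, and the log_2 / 2^x input transform and the multi-dimensional cases are omitted.

```python
import numpy as np

def pwl_eval(x, xs, ys):
    """Evaluate a piecewise-linear function given sorted breakpoints xs and values ys.

    The index search is a comparison-and-sum ("branch-free linear search"): every
    comparison is executed regardless of x, so there is no unpredictable branching.
    """
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    i = int(np.sum(x >= xs[1:-1]))            # left breakpoint index, 0 <= i <= len(xs) - 2
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    t = min(max(t, 0.0), 1.0)                 # clamp queries outside the table range
    return ys[i] + t * (ys[i + 1] - ys[i])
```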
The proposed implementation is then evaluated on 4 different benchmarks, with considerable speed gains over interpreter-based baselines, while batching and single-vs-double precision do not have a major impact on speed.
While the paper is relevant and interesting, and the proposed techniques are reasonable and probably the result of a considerable amount of work, more effort is needed to improve the clarity and preciseness of the explanations, and (most importantly) the experimental evaluation. Detailed strengths and weaknesses are outlined below.
Strengths
- The paper is well-motivated and relevant to the ML community
- Low-level speed optimizations are needed but overlooked in the community
- Reasonable choice of experimental conditions (focus on unbatched CPU evaluation, testing on a selection of 4 different tasks)
- Proposed techniques are sensible
Weaknesses (roughly in order of decreasing significance)
- Gains over TensorFlow performance are advertised in the intro, but only mentioned anecdotally in the experiments. Also, the TensorFlow implementation should be briefly explained to make clear where these gains come from.
- The experiments put much focus on speed performance over different batch sizes, but this is (1) not the focus of the paper (unbatched CPU operation is the focus), and (2) not very informative, because the introduced methods (which do not benefit much from batching) are not compared against TensorFlow (which does benefit from batching).
- No ablation study is presented. The method description mentions informally how much speed-up is to be expected from different parts of the proposed method, but does not clarify how much each part contributes to the overall speed gains.
- The description in 4.1 is rather hard to follow, even though the ideas behind it are relatively simple. An illustrative figure might be of help for readers.
- The method description in 4.1 lacks formal precision. For example, in 4.1, alpha, P, and supp are not defined (or in some cases introduced well after they are first used), and the “+4” at the beginning of page 5 appears out of nowhere.
- The proposed bit-packing is not well motivated. It promises to save half of the memory at a minor loss in precision (at least for the double-precision case), but it is unclear how much of a bottleneck this memory consumption is in the first place. In addition, it remains unclear to me why this is relevant particularly in the shared index case.
- While the topic is relevant for this conference, readers are likely not familiar with some of the concepts used in the paper, and a bit more explanation is needed. An example of this is "branch prediction", which is a key concept; readers unfamiliar with this compiler concept will likely not understand the paper, and a brief explanation is needed. Another example is "loop-carry dependency", a term that could be explained in a short footnote. A third example is FPGA, which is mentioned in the very last sentence without further explanation/justification.
- The introduction could be a bit more concrete on describing tasks where PWLs are relevant |
iclr_2020_H1lxeRNYvB | Existing neural architecture search (NAS) methods explore a limited featuretransformation-only search space, ignoring other advanced feature operations such as feature self-calibration by attention and dynamic convolutions. This disables the NAS algorithms to discover more advanced network architectures. We address this limitation by additionally exploiting feature self-calibration operations, resulting in a heterogeneous search space. To solve the challenges of operation heterogeneity and significantly larger search space, we formulate a neural operator search (NOS) method. NOS presents a novel heterogeneous residual block for integrating the heterogeneous operations in a unified structure, and an attention guided search strategy for facilitating the search process over a vast space. Extensive experiments show that NOS can search novel cell architectures with highly competitive performance on the CIFAR and ImageNet benchmarks. | Summary:
Often in neural architecture search (NAS) papers on vision datasets, only feature-transform operations (e.g. convolution, pooling, etc.) are considered, while operations like attention and dynamic convolution are left out. These operations have shown increasing performance improvements across many domains and datasets. This paper attempts to incorporate these operations into the search space of NAS.
Specifically, they design a new cell (a residual block in a ResNet backbone) which contains these operations (traditionally used feature transforms, attention, and dynamic convolutions). This new cell contains two tiers. The first tier separately computes the three kinds of operations and combines the results via elementwise summation to form an 'intermediate calibrated tensor'. This is then fed to the second tier, where the three kinds of operations are applied again and the net output is again formed via elementwise summation.
For the search algorithm the authors use the bilevel optimization of DARTS. The one difference to aid the search is that an 'attention-guided' transfer mechanism is used, where a teacher network trained on a larger dataset (like ImageNet) is used to align the intermediate activations of the proxy network found during search by adding an alignment loss function.
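My reading of this alignment term is an attention-transfer-style loss between intermediate activations; something along the lines of the sketch below (illustrative only, not necessarily the authors' exact formulation; matching spatial sizes between student and teacher maps are assumed).

```python
import torch
import torch.nn.functional as F

def attention_alignment_loss(student_feats, teacher_feats):
    """L2 distance between normalized spatial attention maps of paired (B, C, H, W) tensors."""
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        a_s = F.normalize(s.pow(2).sum(dim=1).flatten(1), dim=1)   # (B, H*W) attention map
        a_t = F.normalize(t.pow(2).sum(dim=1).flatten(1), dim=1)
        loss = loss + (a_s - a_t).pow(2).sum(dim=1).mean()
    return loss
```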
Results of the search with this new search space on CIFAR-10, and of transfer to CIFAR-100 and ImageNet, are presented. It is found via manual ablation in a ResNet backbone that dilated dynamic convolutions don't help, and they are dropped from the search space.
The numerical results are near state-of-the-art, although, as is pervasive in the NAS field, truly fair comparisons between methods are hard to obtain due to differences in search space, hardware, stochasticity in training, etc. But the authors make a best-effort attempt and have included random search as well as best and average performances, so that is good.
Comments:
- The paper is generally well-written and easy to understand (thanks!)
- Experiments are generally thorough.
- The main novelty is the design of the cell such that attention and dynamic convolution operations can be incorporated. I was hoping that the results in terms of error would be much better due to the more expressive search space but they are not at the moment.
- I am curious how the attention-guided alignment loss has been ablated. How do the numbers look without the alignment loss, keeping GPU training time constant? Basically, I am trying to figure out how much of the contribution comes from the new search space vs. the addition of attention-guided search. Do we have a negative result (an important one though) that attention and dynamic convolutions don't really help? |
iclr_2020_rklVOnNtwH | In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples, in classification. Many previous studies have attempted to solve this problem by regarding samples with low classification confidence as OOD examples using deep neural networks (DNNs). However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples. This problem arises because their approaches use only the features close to the output layer and disregard the uncertainty of the features. Therefore, we propose a method that extracts the uncertainties of features in each layer of DNNs using a reparameterization trick and combines them. In experiments, our method outperforms the existing methods by a large margin, achieving state-of-the-art detection performance on several datasets and classification models. For example, our method increases the AUROC score of prior work (83.8%) to 99.8% in DenseNet on the CIFAR-100 and Tiny-ImageNet datasets. | The paper considers the problem of out-of-distribution (OOD) sample detection while solving a classification task. The authors tackle the problem of OOD detection with exploiting uncertainty while passing a test sample through the neural network. They treat outputs of (some) layers in a NN as random Gaussian-distributed variables and measure uncertainty as variance of these Gaussians. Then when uncertainty is high, OOD is detected.
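Concretely, my understanding of the mechanism is that a deterministic feature map is replaced by a Gaussian one, and the predicted variance is read off as the uncertainty signal; a minimal sketch (the names and shapes are my own assumptions) would be:

```python
import torch
import torch.nn as nn

class GaussianFeature(nn.Module):
    """Stochastic feature z = mu + sigma * eps via the reparameterization trick."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.log_sigma = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        mu, sigma = self.mu(h), self.log_sigma(h).exp()
        z = mu + sigma * torch.randn_like(mu)      # sample passed to the next layer
        uncertainty = sigma.pow(2).mean(dim=-1)    # per-example variance, combined across layers
        return z, uncertainty
```

Note that nothing in this construction by itself prevents sigma from collapsing to zero, which is precisely the issue with eq. (2) raised below.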
The overall idea behind the paper could be interesting, but its realisation in the current form is questionable.
The paper seems to be totally misusing the reparametrisation trick and stochastic outputs of layers in NNs. Eq. (2) is not the variational inference objective that seems to be required for stochastic outputs and the reparametrisation trick as presented before the equation. The objective misses the KL-divergence term! Without it, what would stop a neural net from setting the sigmas to 0 and forgetting about the stochasticity altogether? Not to mention that the current objective is not mathematically justified.
If there is no mix-up or error in eq. (2) and the networks were trained using this loss (and based on the provided code they were), my wild guess at explaining why this may give the best results in the experiments is that the models were trained for a surprisingly small number of epochs. Therefore, a hypothesis would be that this small number of epochs did not allow the networks to push the sigmas to 0.
Until the authors can clarify and justify the objective, I will vote for rejection only based on this ground.
However, there are other issues in the paper as well. First of all, its clarity. It seems that the paper requires a lot of polishing. The first paragraph of this review is based on my assumptions from the paper since I am not completely sure I understand it correctly. More about the clarity issues below
For a strong evaluation, comparison with the work of Malinin & Gales (2018) seems important, since it was the only work in the related work section that also uses uncertainties for OOD detection. Also, the related work section does not look like an exhaustive overview.
Some of the detailed comments:
1. “In other words, in-distribution samples possess more features that convolutional filters react to than OOD samples” – first of all, this sentence is not easy to parse. Secondly, it is unclear why this should be true. If OOD samples are still natural images, they would contain edges just the same as in-distribution samples. That is the power of deep learning that enables transfer learning: low-level features are transferable across different data and tasks. Therefore, the claim that “Therefore, the uncertainties of the features will be larger when the inputs are in-distribution samples” requires more elaboration and arguments.
2. The arguments of the next paragraph, regarding why the uncertainty of deeper layers should be larger for OOD samples, are not very convincing either. It either requires a definition of what the authors mean by uncertainty here, or it is not necessarily true that the absence of fixed regions for embeddings leads to higher uncertainty.
3. 3rd and 4th paragraphs in Introduction have too many repetitions of phrases between each other. Compare, e.g. the first sentences of the paragraphs or the last sentences.
4. “One cause of the abovementioned problem is that their approaches” and similarly the next paragraph: “their approaches” stylistically sound wrong. It is appropriate in the previous paragraph since there is a link to “previous studies”. It seems that “these approaches” or “the existing approaches” would be a better choice for this and the next paragraph.
5. “Each uncertainty is easily estimated after training the discriminative model by computing the mean and the variance of their features using a reparameterization trick” – conventional discriminative models do not estimate the means and variances of the features. The issue of estimating uncertainty is addressed by several special methods, such as the Bayesian variational methods used in the referenced papers. Therefore, in order to use a reparametrisation trick one needs to first choose a special class of models, which is not obvious from the text.
6. “Moreover, UFEL is robust to hyperparameters such as the number of in-distribution classes and the validation dataset.” – the size of the validation dataset? In any case, neither the size of the validation dataset nor the validation dataset itself is a hyperparameter (and they should not be hyperparameters for out-of-distribution detection). The number of classes can hardly be called a hyperparameter either.
7. “depends on the difference in the Dirichlet distribution of the categorical parameter <…> In our work, the distribution of the logit of the categorical parameters” – what is/are this/these categorical parameter(s)?
8. “Further, they estimate the parameter of the Dirichlet distribution using a DNN and train the model with in-distribution and OOD datasets” – this sentence may mislead the reader into thinking that the proposed method does not need an OOD dataset for training, which does not seem to be the case, since \lambda and \theta are trained based on OOD samples.
9. “because they will not be relevant to the classification accuracy” – who are they?
10. “and \epsilon is the Gaussian noise” –> the standard Gaussian noise
11. “where z^0 = x” – it seems this should be placed somewhere earlier when z^l is introduced since z^0 is not used in eq.(2) after which this text is placed
12. It is unclear how \lambda^l and CNN \theta are learnt
13. It is unclear how the values of features d(x) are used to detect OOD samples
14. “comparison methods, and models” – not clear what models mean here
15. Missing references to datasets in the main text. At least reference to Appendix A.2 is required
16. “We used 5,000 validation images split from each training dataset and chose the parameter that can obtain” – which parameter?
17. “All the hyperparameters of ODIN” – a reader does not know yet that ODIN is used for comparison
18. “which consists of 100 OOD images from the test dataset and 1,000 images from the in-distribution validation set” – it is a bit confusing to call OOD dataset as a test dataset in this context
19. “We tuned the parameters of the CNN in Equation 4 using 50 validation training images taken from the 100 validation images. The best parameters were chosen by validating the performance using the rest of 50 validation images.” – this is confusing. What parameters do the authors talk about in the second sentence if not the parameters of the CNN?
20. “We used TNR at 95% TPR, AUROC, AUPR, and accuracy (ACC),” – Some elaboration is required, at least the reference to Appendix A.1. What is the changing threshold for AUROC and AUPR? Why AUPR-In and AUPR-Out are considered and only a single AUROC is considered. What is the positive class for AUROC?
23. “For LeNet5, we increased the number of channels of the original LeNet5 to improve accuracy” – do the authors mean that they allowed RGB images as input rather than greyscale? If yes, this explicit explanation would be preferable
24. “We inserted the reparameterization trick” – not the best word choice. Reparametrisation trick is a computational/implementation trick/method and it is hard to say that it can be inserted into a network. I believe what the authors mean is that they inserted mean/std outputs instead of point outputs. Conceptually, this means that the output of the corresponding layers is considered to be stochastic rather than deterministic.
Also, it is unclear when the authors say they insert it to the softmax layer. According to Section 3 the softmax layer is never considered to output means and stds.
25. The numbers of epochs for training NNs are very small for LeNet and WideResNet in the experiments. Did the models manage to converge during this short training?
Minor:
1. “These data were also used” -> “this data” |
iclr_2020_Byg_vREtvB | In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as "Bayesian Dark Knowledge." Our generalized framework applies to the case of classification models and takes as input the architecture of a "teacher" network, a general posterior expectation of interest, and the architecture of a "student" network. The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model. We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off. We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures. We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance. Lastly, we show that student architecture search methods can identify student models with significantly improved performance. | Contributions:
The paper considers the distillation of a Bayesian neural network as presented in [Balan et al. 2015]
The main contribution of the paper is the extension of [Balan et al. 2015] to apply to general posterior expectations instead of being restricted to predictions.
A second contribution of the paper is the finding that restricting the architecture of the student network to coincide with the teacher can lead to suboptimal performance and this can be mitigated by expanding the student's architecture using architecture search.
Originality/Significance:
I want to discuss the result regarding the generalization of the posterior expectation (section 3.1). To my knowledge this is novel, however, I am failing to see the importance of this result. The paper mentions two cases as motivations: to calculate the entropy and the variance of the marginal. The problem with these two examples is that it is unclear why these are important and they are not used in the experiments anywhere. Their use should be motivated and the performance of the distillation should be properly evaluated in the experiments.
The result regarding architecture search is interesting, but it should be expanded and explained more to be a main contribution.
Clarity:
The paper is generally understandable and well written, but it could be better organized. It should expand on the motivation and the architecture search, since these are key components of the paper. The figures are not legible: even fully zoomed in, they are difficult to read.
Overall assessment:
The paper has some interesting ideas, but it lacks motivation and significant results.
_______________________________________________________________________
Response to the rebuttal:
Thank you for the detailed reply.
> A primary motivation for generalising posterior expectations is to help quantify model uncertainty. Indeed, the expectation of the predictive entropy is an important quantity that is distinct from the entropy of the posterior predictive distribution. For instance, the difference between these two quantities is exactly the BALD score used in active learning [1]. ...
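For reference, the BALD score mentioned in the quote is usually written as

BALD(x) = H[ E_{p(\theta | D)} p(y | x, \theta) ] - E_{p(\theta | D)} [ H[ p(y | x, \theta) ] ],

i.e. the entropy of the posterior predictive minus the expected predictive entropy; the second term is a posterior expectation of exactly the kind the generalized framework is meant to distill.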
BALD would be problematic to use with this framework due to computational costs. The main benefit of distillation is the reduced computational cost at inference time. But training itself is still expensive. In BALD, the bulk of the computational cost is fitting the model after each new observation. Distillation does not provide a speedup here.
If the method indeed works well in an active learning setting, it would be interesting to see experiments showcasing this result.
> - The result regarding architecture search is interesting, but it should be expanded and explained more to be a main contribution.
I was hoping for more discussion/guidance on finding the right architecture or perhaps an algorithm that efficiently optimises the architecture. But I understand that this is more of a future work so I am not holding this against the paper.
I realise that my initial assessment was rather short so I decided to increase my rating and lower my confidence score. I think the paper is borderline, but I am slightly leaning towards rejection due to the insufficient motivation. |
iclr_2020_HylwpREtDr | Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present an investigation on active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly. | The authors propose an interesting method to actively select samples using the embeddings learned from GNNs. The proposed method combines graph embeddings and clustering to intelligently select new node samples. Theoretical analysis is provided to support the effectiveness and experimental results shows that this method can outperform many other active learning methods.
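As I understand it, the selection pipeline boils down to propagating node features over the graph and then querying the medoids of the resulting clusters; a rough numpy-only sketch (my own simplification of the described procedure, not the authors' code) is:

```python
import numpy as np

def select_query_nodes(X, A, steps, n_query, iters=20, seed=0):
    """Propagate features `steps` times over the normalized adjacency, then pick
    n_query medoid nodes with a simple alternating K-Medoids loop."""
    rng = np.random.default_rng(seed)
    A_hat = A + np.eye(A.shape[0])
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d[:, None] * A_hat * d[None, :]                      # symmetric normalization
    Z = np.linalg.matrix_power(S, steps) @ X                 # propagated node features
    dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    medoids = rng.choice(len(Z), n_query, replace=False)
    for _ in range(iters):
        assign = np.argmin(dist[:, medoids], axis=1)         # nearest medoid per node
        for c in range(n_query):
            members = np.where(assign == c)[0]
            if len(members):
                within = dist[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[np.argmin(within)]      # member minimizing intra-cluster distance
    return medoids                                           # indices of nodes to label next
```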
This paper can be improved on the following aspects:
1. The proposed method conducts clustering using node embeddings. Although these embeddings encode the graph structure to some extent, I would suggest explicitly incorporating the graph structure into the clustering, or at least comparing to a baseline that does so. The proposed method conducts embedding learning and clustering in two consecutive but separate steps. It would be interesting to see whether the clustering can also leverage the graph information.
2. It would be better to provide more details about the network settings (some hyperparameters have already been given in the paper), and more analysis would be helpful. For example, how does the number of clusters affect the performance?
3. Is it possible to create a scenario where there are more labeled data from one cluster but less from another? In this case, should we still take an equal number of samples from each cluster? |
iclr_2020_SJlxglSFPB | The detection of out of distribution samples for image classification has been widely researched. Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution. This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects. It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task. The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts. | # Summary
This paper puts forward a study of out-of-distribution (OOD) detection for semantic segmentation. OOD detection is an active area of research which has recently dealt mostly with image-level classification. For semantic segmentation the same conclusions might not apply, since decisions must be taken for each pixel individually. To this end, the authors propose to study this task over a set of architectures (PSPNet, DeepLabV3+), outlier datasets (SUN, Indian Driving Dataset, synthetic images) and multiple methods for OOD detection from recent works on image classification.
A major difficulty in OOD work is the definition of a relevant OOD dataset and evaluation setup, and the authors propose a novel setup for this task by adapting the SUN and IDD datasets as OOD sources for Cityscapes.
The experimental part is thorough, with multiple evaluation metrics and some qualitative examples and discussion.
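For reference, the simplest instance of "adapting image-level OOD detection to pixels" is the maximum-softmax-probability baseline applied independently at every pixel; a quick sketch (my own code, not the authors'):
```python
import torch
import torch.nn.functional as F

def pixelwise_msp_ood_score(logits):
    """logits: (B, C, H, W) segmentation logits.

    Returns a (B, H, W) map where high values mean the pixel looks
    out-of-distribution (i.e. low maximum softmax probability)."""
    probs = F.softmax(logits, dim=1)
    return 1.0 - probs.max(dim=1).values
```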
# Rating
Although the paper studies a meaningful and interesting problem, I would reject the paper for the following reasons (more detailed arguments are given below):
1) I find that the proposed dataset setups for OOD detection are not adequate for semantic segmentation and thus undermine the reliability of the results.
2) Other than the evaluation (which has some faults), there is no proposed method addressing this task and its challenges.
3) The proposed MaxIoU metric is not discussed and compared in enough detail against the usual metrics for this task.
# Strong points
- This work is dealing with a highly interesting and challenging problem. Indeed, there have been few studies in this area, and defining proper techniques and evaluation setups is challenging.
- The authors consider a wide array of methods for evaluation and test them across two architectures and multiple real and synthetic datasets.
# Weak points
- There are two real datasets considered for OOD evaluation, but I believe there are some flaws in their use for OOD detection. First, the SUN dataset is quite different from Cityscapes, with a significant domain gap (at least in the visual appearance and the distribution of classes) between the two datasets. Upsampling the SUN images to 5x their size in order to make them compatible with Cityscapes should increase the artefacts even further, making it easier to spot the gap between SUN and Cityscapes. This means there is a risk of the OOD detector acting merely as a domain classifier, firing whenever the domain is different from the one used for training.
This argument applies to IDD as well, although to a lesser extent since both datasets are automotive, but again there are strong visual differences between samples of the same class across the two datasets.
The authors argue that they take this concern into consideration to some degree (Figure 2) and select only the non-ID classes for evaluation. However, in both networks the scene information is mixed into the representation (via pyramid pooling in PSPNet and via atrous convolutions with different dilations in DeepLabV3+, respectively). So again, instead of an OOD detector we can end up with a domain classifier.
It would be useful to see how the classification performance changes for ID classes, e.g. how a model trained on Cityscapes scores on cars and other ID objects in IDD compared to a model trained on IDD for the same classes. A big difference between these scores would correspond to a significant domain gap, and in correlation with the OOD performance we might be able to draw some further conclusions on the matter.
Ahmed and Courville[i] propose an interesting discussion on this type of problems and propose focusing on semantic anomaly detection, i.e. detecting different classes from the same dataset, to make sure the setting has practical interest. They propose a very basic technique to detect OOD in previous classification setups showing the limitations of previous OOD methods. I encourage the authors to check the arguments stated in that work.
- In my opinion, the Fishyscapes work from Blum et al. is unfairly dismissed here by considering only the part of the benchmark in which animals from COCO and the internet are inserted over Cityscapes images. The authors argue that the lack of realism of the inserted images makes this dataset insufficient for OOD detection. However, in the first version of their paper, Blum et al. propose a mix of Foggy Driving, Foggy Zurich, WildDash and Mapillary as a dataset for OOD detection, which is similar to the setup proposed here. Furthermore, the latest version of Fishyscapes includes the Lost & Found dataset (mentioned in Fig. 1 here), which is recorded in conditions similar to Cityscapes with the addition of a few small outlier objects used as OOD. This is a relevant dataset and work, and I would temper the criticism brought against it here. That paper has the same objectives and endeavours as the current submission.
- Although there are some discussions and experiments on multiple techniques, there is no technical contribution mitigating the limitations of previous OOD methods on classification or addressing the challenges of OOD detection in semantic segmentation. Such a contribution would have greatly helped the paper.
- I find that the MaxIoU metric considered here is not sufficiently discussed and analysed to show its utility and the additional perspective it brings when evaluating along with the usual metrics.
## Other less important weak points
- The choice of dataset in Figure 1, i.e. Lost & Found, can be misleading. This dataset is not mentioned or evaluated further in the rest of the paper. The image could be replaced with a qualitative result from the rest of the evaluation.
# Suggestions for improving the paper:
1) The current evaluation setting could have some flaws. I would propose some sanity checks, looking at the classification performance on other ID classes, as suggested in the previous section.
2) Evaluate on a setting similar to Fishyscapes Lost and Found, in which the dataset does not change much, but there are some novel objects.
3) Include a trivial OOD baseline in the spirit of [i] to show the utility of the proposed datasets for this task by being robust to such baselines.
4) Consider extending the breadth of OOD methods with Deep Prior Networks[ii] which have been shown to perform well on Fishyscapes for OOD detection.
5) Add a qualitative example with OOD detection on Perlin noise images
# References
[i] F. Ahmed and A. Courville, Detecting semantic anomalies, arxiv 2019 https://arxiv.org/abs/1908.04388
[ii] A. Malinin and M. Gales, Predictive uncertainty estimation via prior networks, NeurIPS 2018 |
iclr_2020_SyxC9TEtPH | In this work, we address the task of natural image generation guided by a conditioning input. We introduce a new architecture called conditional invertible neural network (cINN). It combines the purely generative INN model with an unconstrained feed-forward network, which efficiently preprocesses the conditioning input into useful features. All parameters of a cINN are jointly optimized with a stable, maximum likelihood-based training procedure. Even though INNs and other normalizing flow models have received very little attention in the literature in contrast to GANs, we find that cINNs can achieve comparable quality, with some remarkable properties absent in cGANs, e.g. apparent immunity to mode collapse. We demonstrate these properties for the tasks of MNIST digit generation and image colorization. Furthermore, we take advantage of our bidirectional cINN architecture to explore and manipulate emergent properties of the latent space, such as changing the image style in an intuitive way. | This paper proposes conditional Invertible Neural Networks (cINN), which introduces conditioning to conventional flow-based generative models. Conditioning is injected into the model via a conditional affine coupling block, which concatenates conditioning with the input to the scaling and shifting sub-networks in the coupling block. Other small modifications are proposed to improve training stability at higher learning rates, including soft clamping of scaling coefficients, and Haar wavelet downsampling, which is proposed to replace the squeeze operation (pixel shuffle) that is often used in flow-based models. The invertibility of the cINN allows for style transfer by projecting images into the latent space, and then reconstructing the image with a different conditioning. The performance of the cINN is evaluated empirically on the task of colorization, where it is shown to outperform other techniques in terms of nearness to the true colours, as well as sample diversity.
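For readers less familiar with coupling layers, a minimal sketch of what I understand the conditional affine coupling to be (the split, the tanh-style soft clamp, and the sub-network are my own assumptions, not the authors' exact design):
```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=128, clamp=2.0):
        super().__init__()
        self.d1 = dim // 2
        self.clamp = clamp
        # The scale/shift sub-network sees one half of x concatenated with the conditioning c.
        self.subnet = nn.Sequential(
            nn.Linear(self.d1 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d1)))

    def forward(self, x, c):
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self.subnet(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
        s = self.clamp * torch.tanh(s)      # "soft clamping" of the scale coefficients
        y2 = x2 * torch.exp(s) + t          # affine transform of the other half
        log_det = s.sum(dim=1)              # exact log-determinant of the Jacobian
        return torch.cat([x1, y2], dim=1), log_det
```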
Overall, I would tend to vote for accepting this paper. The base method for integrating conditioning into the flow is simple and intuitive, and additional modifications which allow for stable training at higher learning rates, such as soft clamping and Haar wavelet downsampling, appear to be very effective. Conditional models often lend themselves to a wide variety of useful applications, so I think this work could be of interest to many.
My primary concerns with this paper are related to the comparison of image colorization methods. Specifically:
1) I would like to see a comparison to Probabilistic Image Colorization (PIC) [1], which was mentioned in the related work section but not included in the comparison of colorization models. PIC has been shown to outperform VAE-MDN in terms of diversity, so it would be good to include it. Code is available online with pretrained ImageNet models (https://github.com/ameroyer/PIC), so it should not be difficult to add.
2) Pix2pix is known to have very bad sample diversity. A more useful comparison would be to evaluate one of the newer variants of Pix2pix that emphasizes sample diversity, such as BicycleGAN [2] or MSGAN [3]. Code is also available for each of these models (https://github.com/junyanz/BicycleGAN, https://github.com/HelenMao/MSGAN), although you would need to train models from scratch.
3) Pixel-wise metrics such as MSE are bad at measuring perceptual similarity. While these pixel-wise metrics are still useful for comparison to prior work, better metrics are available for evaluating image colorization. I would recommend the use of Learned Perceptual Image Patch Similarity (LPIPS) [4] in place of pixel-wise distance measures for evaluating image similarity and diversity.
Things to improve the paper that did not impact the score:
5) I was somewhat disappointed by how little attention was spent on the Haar wavelet downsampling method. It seems like a very neat idea, but it is only briefly explored in the ablation study. It would be nice to include a more in-depth study of how it compares to the conventional pixel shuffle downsampling, perhaps in terms of stability with learning rates, and final model performance.
6) There is some potentially related work on conditional adversarial generative flows [5] that could be added to the literature review if deemed relevant enough.
References:
[1] Amelie Royer, Alexander Kolesnikov, and Christoph H. Lampert. Probabilistic image colorization. In British Machine Vision Conference (BMVC), 2017.
[2] Zhu, Jun-Yan, et al. "Toward multimodal image-to-image translation." Advances in Neural Information Processing Systems. 2017.
[3] Mao, Qi, et al. "Mode seeking generative adversarial networks for diverse image synthesis." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
[4] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[5] Liu, Rui, et al. "Conditional Adversarial Generative Flow for Controllable Image Synthesis." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
### Post-Rebuttal Comments ###
Thank you for including the PIC results. I did not realize that they would take so long to run, but I think it important to include comparison with the current SOTA methods so that we have some reference. It is also nice to see the addition of the LPIPS metric. My overall opinion of the paper is not much changed from before, so I will retain the same score. |
iclr_2020_rylkma4twr | In this paper, we study the problem of constrained robust (min-max) optimization in a black-box setting, where the desired optimizer cannot access the gradients of the objective function but may query its values. We present a principled optimization framework, integrating a zeroth-order (ZO) gradient estimator with an alternating projected stochastic gradient descent-ascent method, where the former only requires a small number of function queries and the latter needs just a one-step descent/ascent update. We show that the proposed framework, referred to as ZO-Min-Max, has a sub-linear convergence rate under mild conditions and scales gracefully with problem size. From an application side, we explore a promising connection between black-box min-max optimization and black-box evasion and poisoning attacks in adversarial machine learning (ML). Our empirical evaluations on these use cases demonstrate the effectiveness of our approach and its scalability to dimensions that prohibit using recent black-box solvers. | The paper presents an algorithm for performing min-max optimisation without gradients and analyses its convergence. The algorithm is evaluated for the min-max problems that arise in the context of adversarial attacks. The presented algorithm is a natural application of a zeroth-order gradient estimator and the authors also prove that the algorithm has a sublinear convergence rate (in a specific sense).
Considering that the algorithm merely applies a zeroth-order gradient estimator to min-max problems, the algorithm itself is only a modestly novel contribution. However, to the best of my knowledge, it has not been used in this context before and personally I find the algorithm quite appealing. In fact, due to its simplicity it is essentially something that anyone could implement from scratch.
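To make that concrete, here is the kind of from-scratch implementation I have in mind (the standard multi-point random-direction estimator plus one alternating projected step; the constants and exact smoothing scheme may differ from the paper's):
```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, q=10, rng=np.random.default_rng(0)):
    # Estimate grad f(x) from function values only, using q random unit directions.
    d, fx, g = x.size, f(x), np.zeros_like(x, dtype=float)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (d / (q * mu)) * (f(x + mu * u) - fx) * u
    return g

def zo_min_max_step(f, x, y, lr_x, lr_y, proj_x=lambda z: z, proj_y=lambda z: z):
    # One alternating projected update: descent in x, then ascent in y.
    x = proj_x(x - lr_x * zo_gradient(lambda xx: f(xx, y), x))
    y = proj_y(y + lr_y * zo_gradient(lambda yy: f(x, yy), y))
    return x, y
```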
Perhaps a more important contribution is that the authors provide a fairly extensive convergence analysis, which is an important tool in analysing the algorithm and its properties. Unfortunately, it is not trivial to understand the presented convergence results and their practical implications (if any). For instance, equation (10), which is arguably one of the key equations in the paper, contains variables zeta, nu and P1, all of which depend on a number of other variables in a fairly complicated manner. The expression in (10) also contains terms that do not depend on T and it is not obvious how large these terms might be in practice (in the event that the assumptions are at least approximately true in a local region). Even though I am somewhat sceptical about the practical relevance of this convergence analysis, I recognise that it is an interesting and fascinating achievement that the authors have managed to provide a convergence analysis of an algorithm which is based on black-box min-max optimisation.
iclr_2020_S1gmrxHFvB | Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unseen corruptions and perturbations encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AUGMIX, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unseen corruptions. AUGMIX significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. | This paper proposes a method called AugMix, which is intended to improve model robustness to data distribution shift. AugMix appears fairly simple to implement. Several new images are created by augmenting an original image through chains of sequentially applied transformations (the "Aug" part of AugMix), then the augmented images are combined together, along with the original image, via a weighted sum (the "Mix" part of AugMix). Additionally, a Jensen-Shannon Divergence consistency loss is applied during training to encourage the model to make similar predictions for all augmented variations of a single image. This technique is shown to achieve state-of-the-art performance on standard robustness benchmarks without loss of clean test accuracy, and is also shown to improve calibration of model confidence estimates.
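For reference, the mixing procedure as I understand it is very compact; a sketch (operation set, defaults, and names are my guesses, and `operations` is a placeholder list of image-to-image callables):
```python
import numpy as np

def augmix(image, operations, rng, k=3, depth=(1, 3), alpha=1.0):
    # image: float array in [0, 1]; operations: list of callables image -> image.
    w = rng.dirichlet([alpha] * k)      # convex weights over the k augmentation chains
    m = rng.beta(alpha, alpha)          # how strongly to blend the mixed image back in
    mixed = np.zeros_like(image)
    for i in range(k):
        aug = image.copy()
        for _ in range(rng.integers(depth[0], depth[1] + 1)):
            aug = operations[rng.integers(len(operations))](aug)
        mixed += w[i] * aug
    return (1.0 - m) * image + m * mixed

def jsd_consistency(p_clean, p_aug1, p_aug2, eps=1e-12):
    # Jensen-Shannon divergence among the three predicted class distributions.
    ps = [p_clean, p_aug1, p_aug2]
    mid = sum(ps) / 3.0
    return sum(np.sum(p * (np.log(p + eps) - np.log(mid + eps))) for p in ps) / 3.0
```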
Overall, I would tend to vote for accepting this paper. The method is simple yet effective, the paper is very well written and easy to follow, and experiments are extensive and, for the most part, convincing.
Questions:
1) The one main concern I have with the training procedure is the amount of time the models were trained for. It is known that models trained with aggressive data augmentation schemes often require much longer training than normal in order to fully benefit from the stronger augmentations. For example, AutoAugment trains ImageNet models for 270 epochs [1], while CutMix trains for 300 epochs [2]. However, the ImageNet experiments in this paper claim to follow the procedure outlined in [3], which only trains for 90 epochs. This is reflected in the clean test accuracy, where AutoAugment only appears to provide a 0.3% gain over standard training, while we might expect 1.3% improvement (according to [2]). The AllConvNet and WideResNet in CIFAR-10 and CIFAR-100 experiments were also trained for only 100 epochs each, whereas 200 is more conventional. Again this shows in the reported numbers: on WideResNet for CIFAR-10, Mixup only has a 0.3% gain whereas we might expect 1% improvement instead [4], and AutoAugment has 0.4% improvement, whereas we might expect 1.3% gain if trained longer [1]. My question then is, how much does training time affect results? Do AugMix, and other techniques such as Mixup, CutMix, and AutoAugment, achieve better robustness when models are trained for longer, or do they become more brittle as training time is extended?
2) For the Jensen-Shannon divergence consistency, how much worse does it perform when using JS(Porig;Paugmix1) versus JS(Porig;Paugmix1;Paugmix2)? What might cause this behaviour?
3) Patch Gaussian is changed to Patch Uniform to avoid overlap with corruptions in ImageNet-C. How does Patch Uniform compare to Patch Gaussian in terms of performance for non-Gaussian noise corruptions?
4) How does AugMix perform as an augmentation technique in terms of clean test accuracy compared to other SOTA techniques? Is there a trade-off between clean test accuracy and robustness, or does AugMix improve performance in both domains? Can AugMix be combined with other augmentation techniques or does this destroy robustness properties?
Things to improve the paper that did not impact the score:
5) It would be nice if the best result in each column could be bolded in Tables 2-4.
References:
[1] Cubuk, Ekin D., Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. "Autoaugment: Learning augmentation policies from data." CVPR (2019).
[2] Yun, Sangdoo, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. "Cutmix: Regularization strategy to train strong classifiers with localizable features." ICCV (2019).
[3] Goyal, Priya, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. "Accurate, large minibatch sgd: Training imagenet in 1 hour." arXiv preprint arXiv:1706.02677 (2017).
[4] Zhang, Hongyi, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. "mixup: Beyond empirical risk minimization." ICLR (2018). |
iclr_2020_Bye8hREtvB | Deep autoregressive models are one of the most powerful models that exist today which achieve state-of-the-art bits per dim. However, they lie at a strict disadvantage when it comes to controlled sample generation compared to latent variable models. Latent variable models such as VAEs and normalizing flows allow meaningful semantic manipulations in latent space, which autoregressive models do not have. In this paper, we propose using Fisher scores as a method to extract embeddings from an autoregressive model to use for interpolation and show that our method provides more meaningful sample manipulation compared to alternate embeddings such as network activations. | Motivated by the observation that powerful deep autoregressive models such as PixelCNNs lack the ability to produce semantically meaningful latent embeddings and generate visually appealing interpolated images by latent representation manipulations, this paper proposes using Fisher scores projected to a reasonably low-dimensional space as latent embeddings for image manipulations. A decoder based on a CNN, a Conditional RealNVP, or a Conditional Pyramid PixelCNN is used to decode high-dimensional images from these projected Fisher scores. Experiments with different autoregressive and decoder architectures are conducted on the MNIST and CelebA datasets.
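For readers unfamiliar with the term, the Fisher score of an image is just the gradient of the model's log-likelihood with respect to its parameters; a sketch of how I would extract the projected embedding (the `log_prob` interface and the random projection are my assumptions, not necessarily the authors' choices):
```python
import torch

def projected_fisher_score(model, x, proj):
    # proj: a fixed (low_dim x num_params) projection matrix, e.g. random Gaussian.
    model.zero_grad()
    log_px = model.log_prob(x).sum()   # log-likelihood of the image under the AR model
    log_px.backward()
    grad = torch.cat([p.grad.reshape(-1)
                      for p in model.parameters() if p.grad is not None])
    return proj @ grad                 # low-dimensional embedding used for manipulation
```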
Pros:
This paper is well-written overall and the method is clearly presented.
Cons:
1) It is well-known that the latent activations of deep autoregressive models don’t contain much semantically meaningful information. It is very obvious that either a CNN decoder, a conditional RealNVP decoder, or a conditional Pyramid PixelCNN decoder conditioned on projected Fisher scores will produce better images, because the Fisher scores simply contain much more information about the images than the latent activations. When $\alpha$ is small, the learned decoder will function similarly to the original PixelCNN; therefore, latent activations produce smaller FID scores than projected Fisher scores for small $\alpha$’s. These results are not surprising. Detailed explanations should be added here.
2) The comparisons to baselines are unfair. As mentioned in 1), it’s obvious that Fisher scores contain more information than latent activations for deep autoregressive models and are better suited for manipulations. Fair comparisons should be performed against other latent variable models such as flow models and VAEs with more interesting tasks, which will make the paper much stronger.
3) In Figure 3, how is the reconstruction error calculated? Is it the squared error per pixel, per image?
4) On p. 8, for the semantic manipulations, some quantitative evaluations would strengthen this part.
In summary, this paper proposes a novel method based on projected Fisher scores for performing semantically meaningful image manipulations under the framework of deep autoregressive models. However, the experiments are not well-designed and the results are unconvincing. I like the idea proposed in the paper and strongly encourage the authors to seriously address the raised questions regarding experiments and comparisons.
------------------
After Rebuttal:
I take back what I said. It's not that obvious that the "latent activations of deep autoregressive models don’t contain much semantically meaningful information". But the latent activations are indeed a weak baseline, considering that PixelCNN is such a powerful generator. If the autoregressive generator is powerful enough, the latent activations can theoretically encode nothing. I have spent a lot of time reviewing this paper and related papers; the technical explanation of how the hidden activations of PixelCNN are computed in this paper is unclear and lacking (please use equations, not just words).
Related paper: The PixelVAE paper ( https://openreview.net/pdf?id=BJKYvt5lg ) explains that PixelCNN doesn't learn a good hidden representation for downstream tasks
Another paper combining VAE and PixelCNN also mentions this point:
ECML 2018: http://www.ecmlpkdd2018.org/wp-content/uploads/2018/09/455.pdf
Please also check the related arguments about PixcelCNN (and the "Unconditional Decoder" results) in Variational Lossy Autoencoder (https://arxiv.org/pdf/1611.02731.pdf )
As I mentioned in the response to the authors' rebuttal, training a separate powerful conditional generative model from some useful condition information (Fisher scores) is feasible to capture the global information in the condition, which is obvious to me. This separate powerful decoder has nothing to do with PixelCNN, which is the major reason that I vote reject. |
iclr_2020_HJgJtT4tvB | Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models. | This paper presents a small multiple choice reading comprehension dataset drawn from LSAT and GMAT exams. I like the idea of using these standardized tests as benchmarks for machine reading, and I think this will be a valuable resource for the community.
The two major concerns I have with this paper are with its presentation and with the quality of the baselines. These concerns leave me at a weak accept, instead of a strong accept, which I would be if these issues were fixed.
Presentation:
The main contribution here is in collecting a new dataset drawn from the LSAT and GMAT. (I assume that is all that is used, though the text actually says "such as". Or is this from practice exams and not the actual exams?) The job of this paper is to convince me, a researcher focusing on reading comprehension, that I should use this dataset as a target of my research. Out of 8 pages, however, at most 2 are devoted to actually describing this contribution and why it's a good target for reading comprehension. Much of the interesting description of the phenomena in the dataset is relegated to the appendix, which I did not really look at because it far exceeds the page limit of an ICLR submission. I have a lot of questions about this dataset that could have been answered in the main text had more of the paper been given to actually describing the data. Such as:
- What are some actual examples of the different kinds of reasoning in table 2?
- How did you decide the proportions of questions listed in table 2? Was that manual?
- Where did the questions actually come from? Real exams? Practice exams? Which ones?
- What is the reasoning process that a person would actually use to answer some of these questions? Preferably answered for several of the question types that you listed. You have 8 pages; there's a lot of space that could be given to this.
- What kinds of things do you see in the distractors that make this hard?
I'm recommending a weak accept for this paper, but only because of my background knowledge about the LSAT and GMAT, and my prior understanding of reading comprehension and why this would be a really interesting dataset. I think the paper itself could do a *much* better job convincing a reader about the dataset than what is done here.
What should be cut to make room for this?
I think far too much space is given to the discussion of easy vs. hard (~2.5 pages). Given that the best performance is at ~54% on the full test set, there isn't a lot of need for this. The argument about four flips of a 25% coin giving a 0.39% chance of guessing right every time is only true if the coin flips are independently drawn; training the same pretrained model on the same data with a different random seed is clearly not independent, so this argument rings hollow. It's not really clear what to conclude from the easy vs. hard split, other than that the models you used to create it expectedly do well on the easy split and poorly on the hard split. You'd want the models that create the split to be separate from the ones you're evaluating, but even then, you're training them on the same data, so it's still not really clear what to conclude. It's sufficient in a case like this to just evaluate as a baseline a model that shouldn't have access to enough information to solve the task, and measure its performance on the test set. That would have taken a few sentences to describe, instead of 2.5 pages.
You could also recover a lot of space by evaluating fewer baselines. Describing six baseline models and including all of their performance numbers takes up a lot of space, and we really don't need to know what all of those numbers are. One or two, along with question-only and option-only baselines, would be plenty. If you really want to include all of them, that's information that should go into the appendix, instead of something that's core to your paper's contribution, like describing the dataset that you're releasing.
Quality of baselines:
With a dataset this small, you definitely want to pretrain BERT / RoBERTa on a larger QA dataset before training on ReClor. I would guess that performance would be significantly higher if you used a RACE-trained model as a starting point, instead of raw BERT / RoBERTa. And if it doesn't end up helping, then you've made a stronger argument about how your dataset is different from what has come before, and a good target for future research. What I would recommend for table 6:
- chance
- option-only
- question-only
- raw XLNet or RoBERTa
- XLNet or RoBERTa pretrained on RACE, or SQuAD, or similar
And I would pick either XLNet or RoBERTa based on which one did the best after pre-training. That way, related to my first point, you can remove the description of all other models, and table 4, and free up a bunch of space for more interesting things.
As an aside, I think it's good that the dataset is relatively small, as it gives these over-parameterized models less opportunity to overfit, and it makes us come up with other ways of obtaining supervision. But you've got to be sure to use the best baselines available, and with small datasets, that means more pretraining on other things.
EDIT Nov. 19, 2019: The authors' revision has satisfied my concerns. I think that this is a good paper and it should be accepted. |
iclr_2020_BJlQtJSKDB | Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go). However, they generally require a large number of rollouts, making their applications costly. Furthermore, it is also extremely challenging to parallelize MCTS due to its inherent sequential nature: each rollout heavily relies on the statistics (e.g., node visitation counts) estimated from previous simulations to achieve an effective exploration-exploitation tradeoff. In spite of these difficulties, we develop an algorithm, WU-UCT, to effectively parallelize MCTS, which achieves linear speedup and exhibits negligible performance loss with an increasing number of workers. The key idea in WU-UCT is a set of statistics that we introduce to track the number of on-going yet incomplete simulation queries (named as unobserved samples). These statistics are used to modify the UCT tree policy in the selection steps in a principled manner to retain effective exploration-exploitation tradeoff when we parallelize the most time-consuming expansion and simulation steps. Experiments on a proprietary benchmark and the Atari Game benchmark demonstrate the linear speedup and the superior performance of WU-UCT compared to existing techniques. | This paper introduces a new algorithm for parallelizing Monte-Carlo Tree Search (MCTS). Specifically, when expanding a new node in the search tree, the algorithm updates the visit counts of the parent nodes but not their value estimates; it is only when the expansion and simulation steps are complete that the values are updated as well. This has the effect of shrinking the UCT exploration term, making other workers less likely to explore that part of the tree even before the simulation is complete. This algorithm is evaluated in two domains: a mobile game called "Joy City" and Atari. The proposed algorithm results in large speedups compared to serial MCTS with seemingly little impact on performance, and also results in higher scores on Atari than existing parallelization methods.
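My rough reading of the modified selection rule, for concreteness (the exact form of the bonus in the paper may differ; O is the count of in-flight, "unobserved" simulations):
```python
import math

def select_child(node, c=1.0):
    # Each child tracks completed visits N, summed completed returns W, and the
    # number of simulations currently in flight O.
    total = sum(ch.N + ch.O for ch in node.children) + 1
    def ucb(ch):
        q = ch.W / ch.N if ch.N > 0 else 0.0
        return q + c * math.sqrt(math.log(total) / (ch.N + ch.O + 1))
    return max(node.children, key=ucb)

# When a worker starts a simulation from a leaf, O is incremented along the selected
# path (shrinking the exploration bonus seen by other workers); W and N are only
# updated once that simulation's return actually comes back.
```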
Scaling up algorithms like MCTS is an important aspect of machine learning research. While significant effort has been made by the RL community to scale up distributed model-free algorithms, less effort has been made for model-based algorithms, so it is exciting to see that emphasis here. Overall I thought the main ideas in the paper were clear, the proposed method for how to effectively parallelize MCTS was compelling, and the experimental results were impressive. Thus, I tend to lean towards accept. However, there were three aspects of the paper that I thought could be improved. (1) It was unclear to me how much the parallelization method differs from previous approaches (called “TreeP” in the paper) which adjust both the visit counts and the value estimate. (2) The paper is missing experiments showing the decrease in performance compared to a serial version of the algorithm. (3) The paper did not always provide enough detail and in some cases used confusing terminology. If these three things can be addressed then I would be willing to increase my score.
Note that while I am quite familiar with MCTS, I am less familiar with methods for parallelizing it, though based on a cursory Google Scholar search it seems that the paper is thorough in discussing related approaches.
1. When performing TreeP, does the traversed node also get an increased visit count (in addition to the loss which is added to the value estimate)? In particular, [1] and [2] adjust both the visit counts and the values, which makes them quite similar to the present method (which just adjusts visit counts). It’s not clear from the appendix whether TreeP means that just the values are adjusted, or both the values and the visit counts. If it is the former, then I would like to see experiments done where TreeP adjusts the visit counts as well, to be more consistent with prior work. (Relatedly, I thought the baselines could be described in significantly more detail than they currently are; pseudocode in the appendix would be great!)
2. I appreciate the discussion in Section 4 of how much one would expect the proposed parallelization method to suffer compared to perfect parallelization. However, this argument would be much more convincing if there were experiments to back it up: I want to know empirically how much worse the parallel version of MCTS does in comparison to the serial version of MCTS, controlling for the same number of simulations.
3. While the main ideas in the paper were clear, I thought certain descriptions/terminology were confusing and that some details were missing. Here are some specifics that I would like to see addressed, roughly in order of importance:
- I strongly recommend that the authors choose a different name for their algorithm than P-UCT, which is almost identical to (and pronounced the same as) PUCT, a frequently used MCTS exploration strategy that incorporates prior knowledge (see e.g. [1] and [2]). P-UCT is also not that descriptive, given that there are other existing algorithms for parallelizing MCTS.
- Generally speaking, it was not clear to me for all the experiments whether they were run for a fixed amount of wallclock time or a fixed number of simulations, and what the fixed values were in either of those cases. The fact that these details were missing made it somewhat more difficult for me to evaluate the experiments. I would appreciate it if this could be clarified in the main text for all the experiments.
- The “master-slave” phrasing is a bit jarring due to the association with slavery. I’d recommend using a more inclusive set of terms like “master-worker” or “manager-worker” instead (this shouldn’t be too much to change, since “worker” is actually used in several places throughout the paper already).
- Figure 7c-d: What are game steps? Is this the number of steps taken to pass the level? Why not indicate pass rate instead, which seems to be the main quantity of interest?
- Page 9: are these p-values adjusted for multiple comparisons? If not, please perform this adjustment and update the results in the text. Either way, please also report in the text what adjustment method is used.
- Figure 7: 3D bar charts tend to be hard to interpret (and in some cases can be visually misleading). I’d recommend turning these into heatmaps with a visually uniform colormap instead.
- Page 1, bottom: the first time I read through the paper I did not know what a “user pass-rate” was (until I got to the experiments part of the paper, which actually explained this term). I would recommend phrasing this in a clearer way, such as “estimating the rate at which users pass levels of the mobile game…”
- One suggestion just to improve the readability of the paper for readers who are not as familiar with MCTS is to reduce the number of technical terms in the first paragraph of the introduction. Readers unfamiliar with MCTS may not know what the expansion/simulation/rollout steps are, or why it’s necessary to keep the correct statistics of the search tree. I would recommend explaining the problem with parallelizing MCTS without using these specific terms, until they are later introduced when MCTS is explained.
- Page 2: states correspond to nodes, so why introduce additional notation (n) to refer to nodes? It would be easier to follow if the same variable (s) was used for both.
Some additional comments:
- Section 5: I’m not sure it’s necessary to explain so much of the detail of the user-pass rate prediction system in the main text. It’s neat that comparing the results of different search budgets of MCTS allow predicting user behavior, but this seems to be a secondary point besides the main point of the paper (which is demonstrating that the proposed parallelization method is effective). I think the right part of Figure 5, as well as Table 1 and Figure 6, could probably go in the supplemental material. As someone with a background in cognitive modeling, I think these results are interesting, but that they are not the main focus of the paper. I was actually confused during my first read through as it was unclear to me initially why the focus had shifted from demonstrating that parallel MCTS works to
- The authors may be interested in [3], which also uses a form of tree search to model human decisions in a game.
- Page 9: the citation to [2] does not seem appropriate here since AlphaGo Zero did not use a pretrained search policy, I think [1] would be correct instead.
[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. nature, 529(7587), 484.
[2] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of go without human knowledge. Nature, 550(7676), 354.
[3] van Opheusden, B., Bnaya, Z., Galbiati, G., & Ma, W. J. (2016, June). Do people think like computers?. In International conference on computers and games (pp. 212-224). Springer, Cham. |
iclr_2020_S1x6TlBtwB | Bayesian Neural Networks (BNNs) provide a mathematically grounded framework to quantify uncertainty. However, BNNs are computationally inefficient and thus are generally not employed on complicated machine learning tasks. Deep Ensembles were introduced as a Bootstrap-inspired frequentist approach to the community, as an alternative to BNNs. Ensembles of deterministic and stochastic networks are a good uncertainty estimator in various applications (although they are criticized for not being Bayesian). We show Ensembles of deterministic and stochastic Neural Networks can indeed be cast as an approximate Bayesian inference. Deep Ensembles have another weakness of having high space complexity; we provide an alternative to it by modifying the original Bayes by Backprop (BBB) algorithm to learn more general concrete mixture distributions over weights. We show our method and its variants can give better uncertainty estimates at a significantly lower parametric overhead than Deep Ensembles. We validate our hypothesis through experiments like non-linear regression, predictive uncertainty estimation, detecting adversarial images and exploration-exploitation trade-off in reinforcement learning. | Summary: This paper proposes to use either relaxed mixture distributions or relaxed mixtures of matrix Gaussians as the approximate posterior for Bayesian neural networks. Naturally, taking the mixture variance to zero allows a stretched interpretation of ensembling to become Bayesian as well. Experiments are performed on small neural networks on regression tasks, as well as uncertainty estimation experiments on deep networks. Finally, a downstream task of bandit optimization (the mushroom experiment) is performed.
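To fix ideas, here is one plausible reading of sampling from a "relaxed (concrete) mixture" posterior over a weight vector, using the Gumbel-softmax relaxation for the component choice (this is my own sketch; the paper may relax the mixture differently):
```python
import torch
import torch.nn.functional as F

def sample_relaxed_mixture(logits, mu, log_sigma, tau=0.5):
    # logits: (K,) mixture logits; mu, log_sigma: (K, D) Gaussian component parameters.
    pi = F.gumbel_softmax(logits, tau=tau, hard=False)  # relaxed one-hot over components
    eps = torch.randn_like(mu)
    comp = mu + log_sigma.exp() * eps                   # reparameterised component samples
    return (pi.unsqueeze(1) * comp).sum(dim=0)          # (D,) sampled weight vector
```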
post-rebuttal: I'd like to sincerely thank the authors of the paper for their enlightening discussions about their work. I'm inclined to think that this paper has a future, and definitely encourage the authors to resubmit - taking into account the reviews here. In particular, it would be greatly helpful if they cleaned up some of the writing and explanation, as our discussion points to.
However, my stance on the codebase remains mostly unchanged - while a notebook ended up being released with their method, it was too close to the deadline and in a bit too rough of shape to really be able to poke through their implementation.
From a quick pass, their implementation seems to be correct - although rather than attempting to base their code on the old torch/lua implementations, I'd still suggest using a pre-written version. However, without the completely trained numbers, I cannot in good conscience vote to accept the paper.
Tldr: I vote to reject this paper for several reasons. The interpretation of ensembling as variational inference is both flawed and well known. Mixture distributions have been previously proposed for variational inference. There are significant enough weaknesses in the empirical results that make me question the authors’ own implementations. I will increase my score if these issues are resolved.
Originality:
1) It is well known (and immediately obvious) that one can interpret ensembles as a Bayesian method – with a mixture of delta functions around the independently trained models. Furthermore, the Bayesian bootstrap (Rubin, 1981) is a Bayesian interpretation of the bootstrap (identical under many conditions), while re-weighted ensembles are Bayesian models (Newton & Raftery, 1994; Newton et al., 2018) that converge to the true posterior under many conditions – intriguingly, this version of re-weighting could potentially have good uncertainty quantification properties. For the specific case of stacking (which is closely related to ensembling), there are interpretations of stacking as Bayesian model combination (Yao et al., 2018; Clyde & Iversen, 2013). As a result, this view of ensembling as a Bayesian method is not novel.
2) Additionally, the derivation of dropout as Bayesian inference (which relies on a similar interpretation on taking the variance parameter in the Gaussian to zero) is known to be flawed (Hron et al, 2018); this flaw is directly applicable to the interpretation of ensembling as variational inference as described in Section 3 of this paper. The issue is fundamental: when taking the variance parameter to zero, the supports of the variational distribution and the true distributions have a mismatch, causing the KL divergence to become infinite. As a result, one cannot minimize the objective (as it is infinite by design).
Clarity:
1) The proposed methods are clearly explained in the setting under which one might be able to re-implement them, particularly in the description of the approximate posteriors.
post rebuttal: to my knowledge, this was never really addressed. In particular, "why do we want to interpret ensembling in a Bayesian manner?" is still an open question to me.
2) Why is it that we should expect (approximate) Bayesian procedures to be better calibrated than frequentist ones? It may be the case that if the integrals are accurately estimated (e.g. we have reached the true posterior and the model used for the data is in fact accurate), then we should expect Bayesian procedures to be well calibrated. However, this does not seem to be the case here. A reference or argument in the introduction for why this should hold (even in the model-incorrect & approximate-inference cases) would greatly enhance the strength of the paper.
post rebuttal: I think that this issue was addressed in a reasonable manner through the rebuttal, but will require a bit of rewriting.
3) From a quick pass through Appendix A.2, the derivation is quite unstructured and difficult to follow. It is tough to see exactly what is being meant by the probabilities in the first line.
Additionally, the phrasing of “We further lower bound…” seems to suggest that the bound being used is not the well known bound of Durrieu et al, 2012, but a looser bound. If so, what is the approximation accuracy of this bound in relation to the other bounds compared against in this setting?
4) In sum, three different methods are proposed: concrete mixtures of Gaussians, concrete ensembles, and concrete mixtures of matrix variate Gaussians. Throughout the experiments, it seems like __one__ of these methods always wins on the task at hand. However, it does not always seem to be the same method. Is there a recommendation for practitioners as to which method to use – or is this simply task dependent? Specifically, is there an understanding as to why each method performs vastly differently on each task?
A priori, I would expect the concrete mixture of matrix variate Gaussians to always perform best because it does seem to be able to model multivariate posteriors the best. However, it seems to perform worse on several of the predictive uncertainty tasks.
5) For the uncertainty experiments, it would be nice to have more quantitative numbers – for example the expected calibration error – rather than just the plots of in- and out-of-distribution examples. After all, not only do we want to be able to recognize out of distribution examples, but we want to be able to trust the probabilities that are output.
6) In Figure 4, thank you for providing the adversarial attacks used to fool the networks – the step size parameter doesn’t give a great intuition, and the images are welcome and useful.
Significance:
post rebuttal: I think that this issue was satisfactorily addressed.
1) Mixture distributions as the variational family have already been proposed – at least once – see Miller et al, 2017 for an example that additionally uses the reparameterization trick. There, updates to the posterior are performed via selectively adding mixture components when necessary. Given that they also run experiments on neural networks, it would be nice to see a comparison to that method.
Quality:
1) I am very concerned by the performance of the ResNet20 models trained on CIFAR10 in the paper. The reported results for deep ensembles (an ensemble of _three_ independently trained models) in Appendix A are 85.23% accuracy. By comparison, the number in the original paper (He et al, 2015) is 91.25% for a _single_ model. Furthermore, the first link to a PyTorch repo from a Google search (https://github.com/akamaster/pytorch_resnet_cifar10) comes up with a re-implementation that gets 91.63% (again for a single model).
__This issue alone is enough for me to vote to reject the paper until the authors’ implementations are fixed and run with deep ensembled ResNets that get similar accuracy. I hope to see the implementation issues fixed in the rebuttal period. __
2) Furthermore, in Appendix 2, I see a potential issue in using the KL divergence and would like the authors to clarify.
If the secondary lower bound is being used to approximate the KL divergence between the mixture distributions, it becomes very unclear if what is being optimized is in fact a __lower bound__ on the log probability. After all, the standard ELBO is written as:
\log p(y) \geq \mathrm{ELBO} := \mathbb{E}_q[\log p(y \mid \theta)] - \mathrm{KL}(q(\theta) \,\|\, p(\theta))
This implies that substituting a lower bound on the KL term would produce a quantity greater than the ELBO, and thus not necessarily a lower bound on the log marginal likelihood.
3) Bottom of page 7: “The posterior is used only for fully connected layers…” Why is this the case? Alternative Bayesian methods can be used on the full network.
Minor Comments:
- Please attempt to use the math_commands.tex in the ICLR style file to write math. This makes the math standardized throughout.
- Figure 1 is especially unclear and the legends and captions should be made considerably larger. Zooming in at 800% seems to be necessary to even be able to make out which method is which. Consider placing all methods on a single plot and then coloring/dashing them separately. Additionally, please compare to HMC on this task (and an equivalent RBF GP) to show the uncertainties from the “gold standard” methods.
- Please additionally make the figure legends larger for the rest of the figures.
- What is the meaning of time in Figure 2? I assume that it means training time, but it is not especially clear from the caption. With that being said, I think that it is quite interesting that the marginals do seem to look multi-modal, even at the end of training.
- Line before Eq 7: please use \citet if possible to cite Gupta & Nagar, rather than stating the clunky Gupta et al. \citep{Gupta & Nagar}.
- Eq 2: please do not use \hspace{-…} to save space in equations, so that the equation does not overwrite lines before.
- Page 2: “Deep Ensembles have lacked theoretical support …” In order to make this claim, you must first explain why one ought to be Bayesian in the first place. See above for the history of interpreting ensembling from a Bayesian perspective. There is no a priori reason why one would not just wish to have frequentist justifications of ensembling (and potentially even calibration).
- Please do not capitalize Ensembling or Variational Inference throughout.
References:
Clyde & Iversen, Bayesian model averaging in the M-open framework. In Bayesian Theory and Applications, 2013. DOI:10.1093/acprof:oso/9780199695607.003.0024
He, Zhang, Ren & Sun, Deep Residual Learning for Image Recognition, CVPR, 2016. https://arxiv.org/abs/1512.03385
Hron, Matthews & Ghahramani, Variational Bayesian dropout: pitfalls and fixes, ICML, 2018. https://arxiv.org/abs/1807.01969
Miller, Foti & Adams, Variational Boosting: Iteratively Refining Posterior Approximations, ICML, 2017. https://arxiv.org/abs/1611.06585
Newton & Raftery, Approximate Bayesian Inference with the Weighted Likelihood Bootstrap. JRSS:B, 1994. https://www.jstor.org/stable/2346025?seq=1#metadata_info_tab_contents
Newton, Polson & Xu, Weighted Bayesian Bootstrap for Scalable Bayes, 2018; https://arxiv.org/abs/1803.04559
Rubin, The Bayesian Bootstrap, 1981. Annals of Statistics. https://projecteuclid.org/euclid.aos/1176345338
Yao, Vehtari, Simpson, & Gelman, Using Stacking to Average Bayesian Predictive Distributions, Bayesian Analysis, 2018; https://projecteuclid.org/euclid.ba/1516093227 |
iclr_2020_BJl6t64tvr | A common belief in the machine learning community is that using adaptive gradient methods hurts generalization. We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years. We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings. We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning. Finally, we synthesize a "user's guide" to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings. | This paper revisits the common belief that adaptive gradient methods hurt generalization performance. The authors re-examine this in more depth and provide a new set of experiments in larger-scale, state-of-the-art settings. The authors claim that with proper tuning, adaptive optimizers can close the generalization gap with non-adaptive methods.
- It is great to have someone revisit and challenge conventional ideas in the community. However, I did not find much insightful information in this paper. As the authors mention, there are many recent works focusing on further improving the empirical generalization performance of adaptive gradient methods. The main aspects mentioned in the paper (\epsilon tuning, learning rate warmup, and the decay schedule) are not new, and many of them are mentioned or used in these recent advances. This makes the contribution of this paper look like a combination of existing tricks. The authors might want to carefully comment on the differences from the recent advances:
[1] Liu, Liyuan, et al. "On the variance of the adaptive learning rate and beyond." arXiv preprint arXiv:1908.03265 (2019).
[2] Loshchilov, Ilya, and Frank Hutter. "Decoupled weight decay regularization." ICLR 2019.
[3] Zaheer, Manzil, et al. "Adaptive methods for nonconvex optimization." Advances in Neural Information Processing Systems. 2018.
[4] Chen, Jinghui, and Quanquan Gu. "Closing the generalization gap of adaptive gradient methods in training deep neural networks." arXiv preprint arXiv:1806.06763 (2018).
[5] Luo, Liangchen, et al. "Adaptive gradient methods with dynamic bound of learning rate." arXiv preprint arXiv:1902.09843 (2019).
- In terms of tuning \epsilon, the authors mention that the default setting in TensorFlow is 0.1 for AdaGrad, which is too high. However, most papers on adaptive gradient methods set \epsilon to 10^-8, and PyTorch sets the default value to 10^-10. In fact, the Yogi paper mentioned above reaches a somewhat different conclusion: in their experiments, they found that setting \epsilon a bit larger, such as 10^-3, gives better results than 10^-8. I wonder whether the authors examined the reasons for these differing conclusions (see the sketch after these comments for where \epsilon enters the update).
- At the end of page 6, there is a missing reference for the histogram.
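Sketch referenced above, showing where \epsilon enters a standard Adam step (my own code, not taken from the paper), which is why the default value matters so much:
```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # eps sits in the denominator: with eps = 0.1, wherever sqrt(v_hat) << 0.1 the update
    # degenerates towards lr / eps * m_hat, i.e. the per-coordinate adaptivity is largely
    # switched off; with eps = 1e-8 the adaptivity is fully active.
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```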
iclr_2020_rylrI1HtPr | Single Image Super Resolution (SISR) has significantly improved with Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), often achieving order of magnitude better pixelwise accuracies (distortions) and state-of-the-art perceptual accuracy. Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem, perceptual accuracy tends to correlate inversely with pixelwise accuracy which is especially detrimental to SISR, where preservation of original content is an objective. GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss, but these features are typically derived from biased pre-trained networks. Similarly, measurements of perceptual quality such as the human Mean Opinion Score (MOS) and no-reference measures have issues with pre-trained bias. The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurrence Matrix (GLCM), which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process. In this work, the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on spatial collocation of pixel values. We compare GLCM based loss against scenarios where (1) no intermediate guiding loss function, and (2) the VGG feature function are used. Experimental validation is carried out on X-ray images of rock samples, characterised by a significant number of high frequency texture features. We find GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores. | The paper considers the problem of generating a high-resolution image from a low-resolution one. The paper introduces the Grey Level Co-occurrence Matrix Method (GLCM) for evaluating the performance of super-resolution techniques and as an auxiliary loss function for training neural networks to perform well for super-resolution. The GLCM was originally introduced in a 1973 paper and has been used in a few papers in the computer vision community. The paper trains and validates a super-resolution GAN (SRGAN) and a super-resolution CNN (SRCNN) on the DeepRock-SR dataset. Specifically, for the SRGAN, the paper uses the EDSRGAN network trained with a loss function particular to this paper: the sum of the L1 pixel-wise loss, the VGG19 perceptual objective, and the proposed GLCM loss.
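For readers who have not met the GLCM before, it can be computed in a few lines; a sketch for a single offset, usable as an evaluation metric (the training loss in the paper presumably needs a differentiable/soft quantisation, which I do not sketch here):
```python
import numpy as np

def glcm(img, levels=16, offset=(0, 1)):
    # img: grayscale array in [0, 1]. Counts how often grey level i co-occurs with
    # grey level j at the given (dy, dx) offset, normalised to a joint distribution.
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    mat = np.zeros((levels, levels))
    np.add.at(mat, (a.ravel(), b.ravel()), 1)
    return mat / mat.sum()

def glcm_l1(sr, hr, **kw):
    # The kind of texture (dis)similarity score used for comparing SR output to ground truth.
    return np.abs(glcm(sr, **kw) - glcm(hr, **kw)).sum()
```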
The paper finds that SRCNN outperforms SRGAN in terms of the PSNR metric, but SRGAN performs better in terms of the spatial texture similarity.
Next, the paper shows that when trained with the mean L1-GLCM loss function, SRGAN performs best.
In summary, the paper proposes to use the GLCM loss for training and evaluation of super-resolution methods. However, this metric is well known in the computer-vision community, along with many others. Also, the idea of using this metric in training is only evaluated for one network (albeit a very sensible one) and only for one dataset (the DeepRock one). Since the novelty provided by the paper is small, I cannot recommend acceptance of the paper.
iclr_2020_rkehoAVtvS | Partial multi-label learning (PML), which tackles the problem of learning multilabel prediction models from instances with overcomplete noisy annotations, has recently started gaining attention from the research community. In this paper, we propose a novel adversarial learning model, PML-GAN, under a generalized encoder-decoder framework for partial multi-label learning. The PML-GAN model uses a disambiguation network to identify noisy labels and uses a multi-label prediction network to map the training instances to the disambiguated label vectors, while deploying a generative adversarial network as an inverse mapping from label vectors to data samples in the input feature space. The learning of the overall model corresponds to a minimax adversarial game, which enhances the correspondence of input features with the output labels. Extensive experiments are conducted on multiple datasets, while the proposed model demonstrates the state-of-the-art performance for partial multi-label learning. | This paper proposed a new method for partial multi-label learning, based on the idea of generative adversarial networks. The partial multi-label learning is the problem that one instance is associated with several ground truth labels simultaneously, but we are given a superset of the ground truth labels for training. To solve the problem, the paper proposes a learning model composing of four deep neural networks, two of them for prediction the ground truth labels (the prediction network, and the disambiguation network), one of them to generate instances based on the disambiguated labels, and one discriminative network to discriminated generated instance versus true instances. The four network are concatenated, and the paper proposed to optimize the sum of generation loss, the classification loss, and the adversarial loss. Theoretical results like the theories in the original GAN paper are also given. Finally, the paper does thorough empirical studies compared the proposed method with four other baselines and three variants on fourteen datasets with various settings.
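Schematically, as I read the paper, training couples the four networks roughly as below; the decomposition into a classification, a generation, and an adversarial term follows the description above, but the exact parameterization and the trade-off weights \alpha, \beta are placeholders of mine rather than the paper's notation.

\[
  \min_{F,\,\tilde{D},\,G}\;\max_{D}\;\;
  \mathcal{L}_{\mathrm{cls}}(F,\tilde{D})
  \;+\; \alpha\,\mathcal{L}_{\mathrm{gen}}(G,\tilde{D})
  \;+\; \beta\,\mathcal{L}_{\mathrm{adv}}(G,D)
\]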
The paper is the first to propose a GAN-style algorithm for the partial multi-label learning problem. The empirical results also validate its superior performance compared to other baselines on the datasets used. On the other hand, the paper fails to give enough intuition on why a GAN can naturally be used to solve the partial multi-label learning problem, and why the proposed method can perform better than other baselines. There is also some inaccuracy in the discussion of related work. Overall, given the contribution of introducing GANs to the partial multi-label learning problem and the thorough empirical studies, I think this paper can be weakly accepted.
Main arguments
The paper argues that the reason existing methods may not perform well on this problem is that they generally learn a “confidence score” for every candidate label, while the learned “confidence score” may not be accurate enough and the errors may accumulate through this process. I agree with the paper on this aspect. However, the proposed method also learns something similar to a “confidence score” in the disambiguation network \tilde D, so why is it better than previous baselines? I guess the reason lies in the adversarial training in the later steps, but the paper fails to give clear intuition as to why the adversarial training part leads to better performance.
I am also wondering whether the proposed model can lead to a trivial solution. For example, one possible solution may be that both the prediction network F and the disambiguation network \tilde D output the same thing: the given partial label set. In this way, the classification error can always be zero, and we can train the generation network to reproduce a ground-truth instance x, given that the generation network G is strong enough. So I also guess the discrimination network is the key to the empirical success of the proposed method, and I am curious what the results would be if only the adversarial loss were used, without training the prediction network F. In the ablation study (Sec. 5.3), the paper does not compare against optimizing only the adversarial loss. I believe the results of such a study could help us understand what the key part of the proposed method is.
In the related work part, the argument that “weak label learning” methods cannot be used for partial multi-label learning is not accurate. In fact, “weak label learning” studies the problem where positive labels are missing, and partial multi-label learning studies the problem where negative labels are missing, so in general the methods for “weak label learning” can be used for “partial multi-label learning” if we exchange the roles of positive and negative labels in both problems. So why do we need to study partial multi-label learning? I think the reason is that, in multi-label learning, the number of negative labels is generally much larger than the number of positive labels, so “partial multi-label” supervision is actually stronger than “weak labels” because positive labels are more important. Such arguments and related empirical studies (when the number of negative labels is much larger than the number of positive labels) should be added in a future version of the paper.
Although the paper has some deficiencies, it does a good job of introducing GANs into partial multi-label learning, and it is also the first paper to use powerful deep neural networks for this problem, which may trigger some interesting studies given the current boom in neural networks. Thorough empirical studies are done, comparing not only to baselines but also to variants with different parts of its own loss. It is worth reading, especially as it may have an impact on further studies.
---------------------------------------
I am satisfied with the rebuttal and tend to increase my score. Although the paper is not perfect, it does a good job of introducing GANs into partial multi-label learning, and it is also the first paper (and thus novel) to use powerful deep neural networks for this problem, which may trigger some interesting studies given the current boom in neural networks.
Actually, I think the partial multi-label learning problem is difficult, and it may not be possible to have a perfect solution nowadays. Partial multi-class learning may be easier since you know only one label is true, but partial multi-label learning is difficult since you do not know how many true labels there are. There is no point in requiring a perfect solution for such a dirty problem.
iclr_2020_Syg6fxrKDB | We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step. The prior probability provides a heuristic for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure. Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods. | EDIT: After the authors response and update of the contributions to indicate that the main contribution of the paper is the application of GNNs and MCTS to the TSP (rather than the original claims that that the model architecture and search approach were novel contributions), I increased my score from Weak Reject to Weak Accept. However, given that the paper is now more focused on solving the TSP, and I am not an expert on that specifically I had to reduce my experience assessment, as while I am more confident now that the paper is technically correct, it is harder for me to judge if the paper should be accepted in terms of empirical strength since I am not familiar with TSP baselines.
The authors propose an MCTS-based learned approach using Graph Neural Networks to solve the traveling salesman problem (TSP).
The authors write the TSP as an MDP where the state consists of the nodes visited by the agent and the last node visited by the agent, the action consists of selecting the next node to visit, and the reward at each step is the negative cost of the travel between the last node and the next node.
The learned part of the model uses a “static-edge graph neural network” (SE-GNN). This network allows access to the full graph context, including edge features, to make node predictions. This is listed as the first paper contribution. At train time, this network is trained to predict the probability of each unvisited node being next in the optimal path. It is trained via supervised learning using optimal paths precomputed with state-of-the-art TSP solvers.
At test time, they use MCTS with a variant of PUCT, where the pre-trained SE-GNN is used as the prior policy, and a selection strategy during search balances the prior probability and the Q values estimated by MCTS, using max-based updates (i.e., during backup, new Q estimates replace old estimates if and only if they are larger than the previous ones). This is listed as the second paper contribution. The authors show that the approach beats other learned solvers on the TSP by a large margin in terms of optimality gap.
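For concreteness, a minimal sketch of the selection rule and the max-based backup as I understand them; the exploration constant, the exact PUCT formula, and the handling of the first visit are my assumptions rather than the paper's specification.

```python
import math

def select_action(node, c_puct=1.0):
    # PUCT-style selection: trade off the GNN prior against the Q values
    # accumulated so far in the search tree.
    total_visits = sum(child.visits for child in node.children.values())
    def score(action, child):
        u = c_puct * node.prior[action] * math.sqrt(total_visits + 1) / (1 + child.visits)
        return child.q + u
    return max(node.children.items(), key=lambda kv: score(*kv))[0]

def backup(path, rollout_value):
    # Max-based backup: a node's Q is only overwritten when the new rollout
    # value (e.g. negative tour length) improves on the stored one.
    for node in path:
        node.visits += 1
        node.q = rollout_value if node.visits == 1 else max(node.q, rollout_value)
```

Written this way, the first visit simply records the rollout value, which sidesteps the question about the infinite initialization raised in the typo list further below.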
While I think the work is interesting, I am not sure that what the authors cite as the main contributions of the paper are truly the main contributions. In my opinion, the main contribution would be the state-of-the-art performance at solving the TSP using learned methods. I cannot, however, recommend acceptance for the following reasons.
With respect to the first claim, “SE-GNN that has access to the full graph context and generates the prior probability of each vertex”, there are already many models that allow conditioning on edge features, including Interaction Networks, Relation Networks, and Graph Networks. This paper has a good overview of this family of methods, and most of them allow access to the full graph context too (https://arxiv.org/abs/1806.01261). Most of these models are very well known and are in principle more expressive than the one proposed in this paper, and allow generalization to different graph sizes, so the motivation for introducing a new model is not very clear, especially since these baselines are not compared against.
With respect to the MCTS contribution at test time, it seems that the changes made to the algorithm compared to AlphaGo are very specific to the TSP, and there is not much discussion about which other sorts of problems may benefit from the same modifications, so it is hard to evaluate its value as a standalone contribution independent of the TSP.
On the basis of the state-of-the-art performance at solving the TSP using learned methods:
* The model requires access to a dataset with optimal solutions to train it, and I doubt it can solve the problems faster than Gurobi in terms of wall time. For this result to be more interesting, the authors should be able to show that the model can generalize to larger problems (where the combinatorial complexity may start making approaches like Gurobi struggle). However it is not clear if the model can generalize to larger graphs.
* Beyond that I am not an expert on TSP specifically, and I don’t know the TSP literature, so I cannot give a strong recommendation.
There are some additional papers that may be relevant to this line of work:
* (MIP, NeurIPS 2019) Learning to branch in MIP problems using similar technique pretraining a GNN and use it to guide a solver at test time (no MCTS though) (https://arxiv.org/abs/1906.01629)
* (SAT, SAT Conference 2019) Learning to predict unsat cores (similar to the previous one but for SAT problems) (https://arxiv.org/abs/1903.04671)
* (Structural construction, ICML 2019) Building graphs by choosing actions over the edges of a graph solving the full RL problem end to end, and also integrating MCTS with learned prior both at train time and test time (together and independently) (http://proceedings.mlr.press/v97/bapst19a/bapst19a.pdf)
Some additional typos / feedback:
* It would be good to have a pure MCTS baseline with no learned prior as an additional ablation (e.g. taking the SE-GNN prior out of the picture).
* In the “Selection Strategy” paragraph, the action is said to be picked as argmax(Q + U), where U is proportional to the prior for each action. However, Q is said to be initialized to infinity. This would mean that at the beginning of search all actions are tied at infinite value, and my default assumption would be that in these conditions an action is chosen uniformly at random. I suspect what happens in this case is that the action with the highest prior is picked to break the tie at infinity; however, if this is the case, it should be indicated in the math.
* In the “Expansion Strategy” paragraph, the Q values are said to be initialized to infinity. However, in the Back-Propagation strategy it is said they are updated using newQ = max(oldQ, value_rollout). If this were true, the values would always remain infinite; I assume the max is not applied if the previous value is still infinite.
* In the “Play” paragraph: the action is said to be picked according to the biggest Q value at the root. I assume that in cases where the planning budget is smaller than the number of nodes, and not all actions at the root are explored, the unexplored actions are masked out.
* Non-exhaustive list of typos: “Rondom” -> “Random”, “provides a heuristics” -> “provides a heuristic”, “strcuture2vec” -> “structure2vec”; there is also a weird line break at the top of page 8.
iclr_2020_B1lda1HtvB | Feature selection problems have been extensively studied in the setting of linear estimation, for instance LASSO, but less emphasis has been placed on feature selection for non-linear functions. In this study, we propose a method for feature selection in non-linear function estimation problems. The new procedure is based on directly penalizing the 0 norm of features, or the count of the number of selected features. Our 0 based regularization relies on a continuous relaxation of the Bernoulli distribution, which allows our model to learn the parameters of the approximate Bernoulli distributions via gradient descent. The proposed framework simultaneously learns a non-linear regression or classification function while selecting a small subset of features. We provide an information-theoretic justification for incorporating Bernoulli distribution into our approach. Furthermore, we evaluate our method using synthetic and real-life data and demonstrate that our approach outperforms other embedded methods in terms of predictive performance and feature selection. | The author rebuttal sufficiently addresses my concerns, so I am upgrading my score.
***
The paper considers the problem of embedded feature selection for supervised learning with nonlinear functions. A feature subset is evaluated via the loss function in a "soft" manner: a fraction of an individual feature can be "selected". Sparsity in the feature selection is enforced via a relaxation of l0 regularization. The resulting objective function is differentiable both in the feature selection and in the learned function, making (simultaneous) gradient-based optimization possible. A variety of experiments on several supervised learning tasks demonstrates that the proposed method has superior performance to other embedded and wrapper methods.
My decision is to reject, but I'm on the fence regarding this paper. I'm not clearly seeing the motivation for an embedded feature selection method for neural network models: for the datasets considered in the paper, it would seem that training a nonlinear model that used all the features would result in performance at least as good as training the nonlinear model with a prepended STG layer. Perhaps there is evidence that filtering features, e.g., irrelevant features, results in higher accuracy and that the prepended STG layer achieves this accuracy, but that evidence is missing from the paper. Also, there could be downstream computational savings, e.g., at prediction time, if the dimension was very large, but this is not the setting tested in the experiments. I suppose interpretability could be considered motivation, but, even so, isn't there at least one simpler, deterministic approach (described below) that also "solves" the problem? Finally, it isn't clear how the method scales with increasing sample size and dimension as all the datasets tested are relatively small in these respects.
***
Questions and suggestions related to decision:
* The performance values using all features should be included in the experimental results so that the value added by STG can be assessed.
* Why not use the simpler deterministic and differentiable relaxation z = \sigma(\mu), where \sigma() is a "squashing" function from the real numbers to [0,1] applied element-by-element to the vector \mu? What specifically is/are the advantage(s) that the randomness in the definition of z at the bottom of pg. 3 provides over this deterministic alternative? (See the short sketch contrasting the two gates after this list.)
* Though well-described and methodologically rigorous, the experimental comparison is none-the-less a little disappointing: one dataset for classification and half the datasets for regression are synthetic and low-dimensional. The remaining regression datasets are real but also low-dimensional. The survival analysis dataset is also low-dimensional (as described in the supplementary material). This leaves one real classification dataset which was on the order of 20,000 examples and 2500 features. Why were larger sample-size and dimensionality datasets not tested? These should be readily available. For example, the gisette dataset from the NIPS 2003 feature selection challenge has 5000 features. See "MISSION: Ultra Large-Scale Feature Selection using Count-Sketches" by Aghazadeh & Spring et al. (2018) for other high-dimensional datasets. Even a single run for each large dataset would have provided some evidence of scalability.
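To make the question in the second bullet concrete, a minimal NumPy sketch contrasting a stochastic gate of the kind I understand the paper to use (a Gaussian perturbation of \mu passed through a hard sigmoid; the noise scale and exact form are my assumptions) with the deterministic squashing alternative:

```python
import numpy as np

def stochastic_gate(mu, sigma=0.5, rng=np.random):
    # Gaussian-perturbed hard sigmoid: the gate hits exactly 0 or 1 with
    # positive probability, which is what makes an l0-style penalty and
    # exact feature removal meaningful.
    eps = rng.normal(0.0, sigma, size=np.shape(mu))
    return np.clip(mu + eps, 0.0, 1.0)

def deterministic_gate(mu):
    # The deterministic alternative raised above: a plain squashing
    # function, which never produces exact zeros.
    return 1.0 / (1.0 + np.exp(-np.asarray(mu)))
```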
***
Other minor comments not related to decision:
* "Concrete Autoencoders for Differentiable Feature Selection and Reconstruction" by Abid et al. (2019) targets unsupervised feature selection but has enough similarities in the approach that it should be considered related work.
* [Typo?] The unnumbered equation after (5) should not have a sum over d in the second term. Perhaps a sum over k was intended? Also, in this equation, the gradient of the loss w.r.t. the z samples (an average of gradients over z samples times ...) does not seem to match what the gradient would be given the algorithmic description in the supplementary material (a gradient of the sample-average z times ...).
* The abstract states the paper is proposing a method for high-dimensional feature selection, but all of the experiments have datasets with max. dimensionality 2538.
* Some discussion of how the regularization parameter can be selected by a user of the proposed method would be good to include. |
iclr_2020_BJx3_0VKPB | There are concerns that neural language models may preserve some of the stereotypes of the underlying societies that generate the large corpora needed to train these models. For example, gender bias is a significant problem when generating text, and its unintended memorization could impact the user experience of many applications (e.g., the smart-compose feature in Gmail). In this paper, we introduce a novel architecture that decouples the representation learning of a neural model from its memory management role. This architecture allows us to update a memory module with an equal ratio across gender types addressing biased correlations directly in the latent space. We experimentally show that our approach can mitigate the gender bias amplification in the automatic generation of articles news while providing similar perplexity values when extending the Sequence2Sequence architecture. | This paper focuses on the timely and important topic of unintended social bias in language models and proposes an approach to mitigate gender bias in the automatic generation of news articles. The automatic text generation is done by a sequence-to-sequence-model using word embeddings. The pattern of these embeddings in an multi-dimensional space can be associated with the stereotypes found in the corpora the models are trained on. To mitigate this bias the authors combine the network with a memory module storing additional gender information. This approach is inspired by the memory module used by Kaiser et al. (2017) to remember rare events, replacing the vector to track the age of items with a key representing the gender.
The proposed network architecture ("Seq2Seq+FairRegion") is evaluated using the bias score presented by Zhao et al. (2017) and compared to the sequence-to-sequence models "Seq2Seq" by Sutskever, Vinyals and Le (2014) and "Seq2Seq+Attention" by Bahdanau, Cho and Bengio (2015).
Besides some typographical errors ("work embedding"), the paper is overall well written, but unfortunately the flow of the text could be improved to make it easier to follow. The authors should perhaps consider explaining the structure of the paper in the introduction. There are also some errors regarding the brackets of the in-text citations, which should be corrected.
The two figures both illustrate the memory module and could easily be combined into one, because the second figure just adds the encoding stage of the architecture. The caption of Figure 1 could be used to further explain the "Fair Region" in the text (Section 2). In Section 4 the authors explain how to compute the bias score. Because these calculations are used to evaluate their methods, this section should perhaps be a subsection of Section 5 ("Experiments").
To evaluate the proposed method, the authors use newspaper articles from Chile, Peru and Mexico and compare the bias amplification metric for the trained models. The results are shown for the datasets from Peru and Mexico (Chile is missing), with a larger bias amplification for the Peru dataset. It could be justified more clearly why the experiments compare the bias amplification scores of different countries, while the main focus of the paper is to mitigate gender bias in general.
While this is an important topic, the contribution is rather minor and not well presented. Specifically, the experiments should be extended.
iclr_2020_S1xO4xHFvB | Compressed forms of deep neural networks are essential in deploying large-scale computational models on resource-constrained devices. Contrary to analogous domains where large-scale systems are build as a hierarchical repetition of smallscale units, the current practice in Machine Learning largely relies on models with non-repetitive components. In the spirit of molecular composition with repeating atoms, we advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons. In other words, the same neurons with the same weights are stochastically re-positioned in subsequent layers of the network. Empirical evidence suggests that ACNs achieve compression rates of up to three orders of magnitudes compared to fine-tuned fully-connected neural networks (88× to 1116× reduction) with only a fractional deterioration of classification accuracy (0.15% to 5.33%). Moreover our method can yield sub-linear model complexities and permits learning deep ACNs with less parameters than a logistic regression with no decline in classification accuracy. | This paper proposes a new way to create compact neural net, named Atomic Compression Networks (ACN). An immediate related work is LayerNet, where a deep neural net is created by replicating the same layer. Here, this paper extends replication down to the neuron level.
I am leaning towards rejecting this paper because the experimental setup is not well justified and a few important details are missing before conclusions can be drawn. I would like to ask a few clarification questions. Depending on the authors’ answers, I might be willing to adjust my rating.
(1) Is a delta missing in the first half of line 6 in Algorithm 1?
(2) Throughout the experiments, for the same hyperparameter setting (e.g. Table 4 in A.2), do you run Algorithm 1 more than once and select the best sampled architecture? If the answer is yes, summarizing all masks as one parameter will not be reasonable. Given a yes answer, I would also like to ask whether the same number of samples has been considered for FC (for the same hyperparameter).
(3) Is there any intuition behind why FC does a much worse job of fitting curves than ACN with far fewer parameters? This refers to Fig. 2, if we compare FC with 41 parameters to ACN with 18 parameters. I am confused because, for curve fitting, the MSE on sampled points usually goes down when we increase the number of parameters.
(4) Convolution can be thought of as a special case of ACN, and ConvNets are the default architecture for working on image datasets. Since MNIST and CIFAR are considered, why not also compare to a ConvNet?
(5) The claim that “ACNs achieve compression rates of up to three orders of magnitudes compared to fine-tuned fully-connected neural networks with only a fractional deterioration of classification accuracy” is quite misleading, given that fully-connected neural networks achieve up to 528× compression, also with only a fractional deterioration (Sec. 4.3), presumably by having a shallower architecture.
iclr_2020_r1g1CAEKDH | A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information theoretic problems-distributed simulation and channel synthesis-in which Wyner's common information arises as the fundamental limit of the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into their common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in respective data variables by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments for joint and conditional generation with and without style control using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method.
shared common randomness. (See Fig. 1 (c,d).) In this sense, the joint distribution q(x, y) and the conditional distributions q(y|x), q(x|y) have the same common information structure characterized by the optimization problem (1), which involves learning the joint distribution in its nature. We call the joint encoder q(z|x, y) (or equivalently, the corresponding random variable Z) as the common representation of (X, Y), and the mutual information I_q(X, Y; Z) then can be viewed as a measure of the complexity of Z.
The goal of this paper is to propose a new probabilistic model based on these information theoretic observations, to achieve a good performance in joint and conditional generation tasks. We apply the idea of learning succinct common representation to design a new generative model for a pair of correlated variables: seeking a succinct representation Z in learning the underlying distribution based on its sample may also help reduce the burden on the decoder's side and thereby achieve a better generative performance.
The rest of the paper gradually develops our framework as follows. We first define a probabilistic model based on the motivating problems, which we aim to train and use for generation tasks (Section 2.1), and then establish a general principle for learning the model based on the optimization problem (1) (Section 2.2). We propose one instantiation of the principle with a standard variational technique by introducing additional encoder distributions (Section 2.3). The proposed model with its training method can be viewed as a variant of variational autoencoders (VAEs) Rezende et al., 2014), and is thus called Wyner VAE. (See Appendix A for a brief introduction on VAEs.) The new encoder components introduced in Wyner VAE allow us to decompose a data vector into the common representation and the local representation, which can be used for sampling with style manipulation (Section 2.4). We carefully compare our model and show its advantages over the existing VAE variants (Vedantam et al., 2018;Suzuki et al., 2016;Sohn et al., 2015;Wang et al., 2016) and the information bottleneck (IB) principle (Tishby et al., 1999) (Section 3), which is a well-known information theoretic principle in representation learning. In the experiments, we empirically show the utility of our model in various sampling tasks and its superiority over existing models and that learning a succinct common representation achieves better generative performance in generation tasks (Section 4). | This paper presents a method for learning latent-variable models of paired data that decomposes the latent representation into common and local representations. The approach is motivated by Wyner’s common information, and is made tractable through a variational formulation. On experiments with MoG, MNIST, SVHN, and CelebA, the paper shows that penalizing the complexity of the common representation can improve style-content disentanglement and conditional sampling.
While I found the formulation and motivation from Wyner’s common information and Cuff’s channel synthesis interesting, the resulting model and experiments were unconvincing. There are many terms in the Wyner model proposed, and there are no ablations demonstrating which terms are important and which are not (e.g. do you need both L_xx and L_xy?). On the toy MoG experiment, the common information is known but trained models do not recover the right amount of information. For representation learning, accuracy decreases as the penalty on the common information increases, and for NLL, the joint and conditional NLL is often similar to existing work (CVAE and JVAE). The main win appears to be for style-content disentanglement, but the results there are qualitative and often change the content when only style is changed. It’s also puzzling as to why CVAE overfits so severely in a subset of the experiments. Without a more thorough evaluation of what terms in the loss matter, and showing that the technique recovers something like Wyner’s common information (by extracting the right information on a toy model), I cannot recommend this paper for acceptance.
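For reference, the quantity the model is built around (standard definition; the questions about Eqn 1 below refer to the same object):

\[
  C(X;Y) \;=\; \min_{\,p(z\mid x,y)\;:\;X \,\perp\, Y \,\mid\, Z}\; I(X,Y;Z),
\]

i.e., the smallest complexity of a shared representation Z that renders X and Y conditionally independent.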
Minor comments:
* Eqn 1: the constraint “Subject to X-Z-Y” is confusing. Do you mean X-Z-Y is a Markov network (undirected) and not a Markov chain (directed), in which case X -> Z <- Y does not correspond to X-Z-Y? (This is addressed in Eqn 3-4, but should be fixed here)
* How is Wyner's common information related to the multivariate mutual information I(X; Z; Y)?
* When describing distributed simulation, please include the variables and objective like you do for common information
* s/Markovity condition/independence assumption?
* Why are both losses in Eqn 4 and Eqn 5 needed? Couldn’t you use either to train q(z|x)?
* Eqn 10 (and most of your objectives) is still intractable due to the q(x)/q(x,y) terms; you should note that these are constant and dropped from the objective.
* “Style control”: could you define what you mean by style in this context? Prior work could likely also do “style control” by e.g. interpolating subsets of dimensions.
* Related work: https://openreview.net/forum?id=rkVOXhAqY7 (CEB) that may result in a similar objective as Wyner’s common info
* When comparing to JVAE/JMVAE, it seems like the main difference is using a latent variable in the decoder, but the framework is still the same. It'd be useful to spend more time comparing/contrasting with this prior work.
* Fig 4: would be useful to have a picture of the samples (right now they’re just in appendix)
* Fig 4: in the data generating process, I believe I((X, Y); Z) = ln(5) = 1.6 nats, but none of your models converge to rates around there. Why not?
* Unlike other approaches (JVAE, CVAE), you have additional terms in your loss for conditional prediction at training time. It seems like these may be giving you gains, and it'd be useful to perform ablations over the terms in Eqn 15 to figure out what is impacting performance in Fig 4, e.g. setting \alpha_{x -> y} and \alpha_{y -> x} to 0.
* Why do the CVAE models overfit? Have you tried optimizing the variational parameters on the test set to see if it's due to the amortization gap?
* Fig 5b: what's the variance you're plotting? In representation learning, we often just care about downstream accuracy, and it looks like \lambda = 0 performs best there.
* Table 3: isn't this showing that \lambda = 0 works best for joint NLL and is close to best for conditional NLL on the MoG task? What's the discrepancy with Fig 5, where conditional NLL is highly dependent on \lambda for MNIST?
* Fig 6 (a1-f1): for this MNIST add-1 dataset, the only common info should be the label. But it looks like the labels do change for non-zero values of lambda (b1: there's a 2 -> 7; c1: a 1 -> 9).
* Table 4: would be useful to train CVAE yourself, as the small differences in numbers could just be due to tuning / experimental setup. You also should bold everything that’s within stderr (i.e. \lambda = 0 and \lambda = 0.15 are equally good)
* Fig 9: it would be useful to include CVAE and Wyner VAE in this plot; i.e., is the model only using the mode information in the latent variable when doing conditional sampling?
* CelebA results look interesting, but there’s been other work on generation from attributes that presents more visually compelling results, e.g. https://arxiv.org/abs/1706.00409 |
iclr_2020_BJlrF24twB | Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep-learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naïvely, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BACKPACK 1 , an efficient framework built on top of PYTORCH, that extends the backpropagation algorithm to extract additional information from first-and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization. | This is a good paper. The authors present a software implementation which allows one to extend PyTorch to compute quantities that are irritatingly difficult to compute in PyTorch directly, or in other automatic differentiation frameworks, particularly if efficiency is a concern. Issues of this kind have been discussed at length within the community, particularly on GitHub, and related issues with optimization of automatic differentiation code have motivated other software developments, such as Julia's Zygote package. Having wasted a large amount of my time implementing the WGAN gradient penalty from scratch - which, to implement scalably, requires one to use both forward-mode and reverse-mode automatic differentiation simultaneously - I appreciate what the authors are trying to do here to make research that requires more sophisticated automatic differentiation more accessible.
---
Detailed remarks below.
* The paper's title is not very informative about what the paper is about. The authors should choose a different title - perhaps something like "BackPACK: user-friendly higher-order automatic differentiation in deep networks" or something similar but less long.
* The authors focus on KFAC-type methods as their key illustrated use case, but software frameworks like this are also helpful for certain GAN losses - WGAN-GP in particular. These losses require one to compute a term that involves the norm of a certain gradient of the output. The gradient of this gradient can be handled efficiently with Hessian-vector products, which in principle are computable efficiently via automatic differentiation, but are in practice a huge pain because of the need to use both forward-mode and reverse-mode automatic differentiation and the lack of first-class support from automatic differentiation packages (a sketch of the usual double-backward workaround in plain PyTorch is included after this list for reference). Provided I've not misunderstood BackPACK and that it would help in making such computations less tedious (and I can't verify this myself, as my own implementation of WGAN-GP was not in PyTorch), I would highly recommend that the authors add an extra paragraph discussing this particular use case, because it would increase the paper's impact on the community by connecting it to another literature which is not mentioned in the paper.
* The entire paper could do with talking about backpropagation less, and automatic differentiation more, because it illustrates that the concerns addressed are not solely limited to deep networks, even if the package does primarily target them.
* P2: these issues are not limited to the Python community, and specialized automatic differentiation software has also been developed for Julia. The authors should cite Innes [1,2] and related papers from that literature.
* Figure 1: from the sample code, I worry about how generic BackPACK is. I think the package authors should be careful not to specialize too much to particular kinds of deep networks, particularly since a much wider variety of models and network architectures are starting to be used with automatic differentiation.
* P2: capital \Omega notation is confusing, please replace with capital \Theta.
* P2: L_2 regularization should more technically be \ell^2 instead.
* P4: please cite Baydin [3] who provides a very nice review of automatic differentiation. It may help explanation to introduce dual numbers, which make forward-mode much easier to understand.
* P6: please write out "with respect to" for "w.r.t.".
* P7: I really liked this section. The simplicity of implementing the example method using the authors' software framework feels compelling to me. However, genericness is still a concern: by analogy, every deep learning framework can do MNIST easily, but some of them make it much harder to do customized or advanced implementation than others. The latter cases are often the ones that matter to practitioners. It's hard to tell how easy it will be to implement something the authors did not foresee or consider - but this will necessarily be the case in any software paper.
* P8: "and in part driven" - missing a comma.
* P8: please spell out "Table" in "Tab. 1".
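For reference, a minimal sketch of how the WGAN-GP term mentioned in the second bullet is usually implemented in plain PyTorch via double backward; this is the standard workaround rather than anything BackPACK-specific, and the interpolation shapes assume image inputs.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Interpolate between real and fake samples, then differentiate the
    # critic's output w.r.t. the interpolates with create_graph=True so
    # that the penalty itself remains differentiable.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    score = critic(x_hat)
    grads, = torch.autograd.grad(outputs=score.sum(), inputs=x_hat,
                                 create_graph=True)
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```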
[1] M. Innes. Don't Unroll Adjoint: Differentiating SSA-Form Programs. NeurIPS, 2018.
[2] M. Innes. Flux: Elegant Machine Learning with Julia. Journal of Open Source Software, 2018.
[3] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind. Automatic differentiation in machine learning: a survey. JMLR, 2017.
iclr_2020_S1g2skStPB | Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based casual discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint. | In this paper, the authors propose an RL-based structure searching method for causal discovery. The authors reformulate the score-based causal discovery problem into an RL-format, which includes the reward function re-design, hyper-parameter choose, and graph generation. To my knowledge, it’s the first time that the RL algorithm is applied to causal discovery area for structure searching.
The authors’ contributions are:
(1) Re-design the reward function, which combines the traditional score function with the acyclicity constraint.
(2) Theoretically prove that maximizing the reward function is equivalent to maximizing the original score function under some choices of the hyper-parameters.
(3) Apply the REINFORCE gradient estimator to search over the parameters related to adjacency matrix generation.
(4) Conduct experiments on datasets that include both linear/non-linear models with Gaussian/non-Gaussian noise.
(5) Make their code public for reproducibility.
Overall, the idea of this paper is novel, and the experiments are comprehensive. I have the following concerns.
(1) In the Encoder paragraph on page 4, the authors mention that the self-attention scheme is capable of finding causal relationships. Why? In my opinion, the attention scheme only reflects correlation relationships. The authors should give more clarification to convince me of this belief.
(2) The authors first introduce the h(A) constraint in Eqn. (4) and mention that having only that constraint would result in a large penalty weight. To solve this, the authors introduce the indicator function constraint. What if we only use the indicator function constraint? In this case, the equivalence is still satisfied, so I am confused about the motivation for imposing the h(A) constraint. (The acyclicity function and its role in the reward are recalled schematically after this list.)
(3) In the last paragraph of page 5, why do the authors adjust the predefined scores to a certain range?
(4) Can acyclicity be guaranteed after minimizing the negative reward function (Eqn. (6))? I.e., after the training process, can the graph with the best reward be theoretically guaranteed to be acyclic?
(5) In Section 5.3, the authors mention that the generated graph may contain spurious edges. Are the edges that lie on cycles the spurious ones? Does the last pruning step include pruning cyclic paths?
(6) In the experiments, the authors adopt three metrics. For better comparison, the authors should clarify that the smaller the FDR/SHD, the better the performance, and the larger the TPR, the better the performance.
(7) From the experimental results, the proposed method seems superior mainly in the non-linear model case. Why? Could the authors give a few sentences of guidance on model selection in the real world, i.e., when to select the proposed RL-based method, and in which case to choose RL-BIC and in which case RL-BIC2?
(8) What is the training time, and how many samples are needed in the training process?
Minor:
1. In the decoder section on page 4, the notation enc_i and enc_j is not clarified.
2. On page 5, \Delta_1 and \Delta_2 are not explained.
3. For a better reading experience, in Tables 1-4 the authors should bold the values with the best performance.
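For reference (points (2) and (4) above refer to these quantities), the smooth acyclicity function used in this line of work is the NOTEARS characterization of Zheng et al. (2018); the way it enters the reward below is only schematic, with placeholder weights \lambda_1, \lambda_2:

\[
  h(A) \;=\; \operatorname{tr}\!\left(e^{A \circ A}\right) - d \;=\; 0
  \;\;\Longleftrightarrow\;\; A \text{ encodes a DAG},
\]
\[
  \mathrm{reward}(\mathcal{G}) \;\approx\; -\Big[\, S(\mathcal{G})
  \;+\; \lambda_1\,\mathbb{1}\!\left[\mathcal{G}\notin \mathrm{DAGs}\right]
  \;+\; \lambda_2\, h(A) \,\Big],
\]

where S is the predefined score (e.g., BIC).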
iclr_2020_rke3OxSKwr | Neural sequence-to-sequence models are at the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start the decoding process before the entire input is available, e.g. to reduce the latency in automatic speech recognition. We consider state-of-the-art "wait-k" decoders, that first read k tokens from the source and then alternate between reading tokens from the input and writing to the output. We investigate the sensitivity of such models to the value of k that is used during training and when deploying the model, and the effect of updating the hidden states in transformer models as new source tokens are read. We experiment with German-English translation on the IWSLT14 dataset and the larger WMT15 dataset. Our results significantly improve over earlier state-of-the-art results for German-English translation on the WMT15 dataset across different latency levels. | This paper extends the idea of prefix-to-prefix in STACL and proposes two different variations. The authors did some interesting experiments between caching and updating decoder.
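For reference, a minimal sketch of the wait-k read/write schedule that both the original STACL decoder and this paper build on; it only enumerates the schedule and ignores the model itself (my own illustration, not the paper's code).

```python
def wait_k_schedule(k, src_len, tgt_len):
    # Read k source tokens first, then alternate one WRITE with one READ
    # until the source is exhausted, after which only WRITEs remain.
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(k + written, src_len):
            actions.append("READ")
            read += 1
        else:
            actions.append("WRITE")
            written += 1
    return actions
```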
My questions are as follows:
1) In Section 3.1.2, the authors mention two adaptations. Is the proposed AR encoder uni-directional? If the AR encoder is uni-directional, then I would be surprised that it outperforms the bi-directional encoder of the original wait-k model. For the second bullet, I think the original wait-k did the same thing (they mention this clearly in the paper), so there is nothing new about bullet 2.
2) The idea illustrated in Fig. 1 is very similar to [1]. I suggest the authors compare with the aforementioned method.
3) Updating the hidden state of the decoder introduces more complexity at inference time. I recommend that the authors perform some analysis of decoding time on CPU and GPU.
4) It would also be interesting to show more comparison of the different models' training times against the original STACL.
[1] Zheng et al. "Simultaneous Translation with Flexible Policy via Restricted Imitation Learning" ACL 2019 |
iclr_2020_BylTta4YvB | Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for the similarity measure has been the Wasserstein metric, which can be expressed in the Kantorovich duality formulation as the optimum difference of the expected values of a potential function under the real data distribution and the model hypothesis. In practice, the potential is approximated with a neural network and is called the discriminator. Duality constraints on the function class of the discriminator are enforced approximately, and the expectations are estimated from samples. This gives at least three sources of errors: the approximated discriminator and constraints, the estimation of the expectation value, and the optimization required to find the optimal potential. In this work, we study how well the methods, that are used in generative adversarial networks to approximate the Wasserstein metric, perform. We consider, in particular, the c-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the c-transform allows for a more accurate estimation of the true Wasserstein metric from samples, but surprisingly, does not perform the best in the generative setting.
values. The discriminators are functions from the sample space to the real line, that have different interpretations in different GAN variations. The main technical issue is that the discriminators have to satisfy specific conditions, such as being 1-Lipschitz in the 1-Wasserstein case. In the original paper, this was enforced by clipping the weights of the discriminator to lie inside some small box, which, however, proved to be inefficient. The Gradient penalty WGAN (WGAN-GP) (Gulrajani et al., 2017) was more successful at this, by enforcing the constraint through a gradient norm penalization. Another notable improvement was given by the consistency term WGAN (CT-WGAN) (Wei et al., 2018), which penalizes diverging from 1-Lipschitzness directly. Other derivative work of the WGAN include different OT inspired similarity measures between distributions, such as the sliced Wasserstein distance (Deshpande et al., 2018), the Sinkhorn divergence (Genevay et al., 2018) and the Wasserstein divergence (Wu et al., 2018). Another line of work studies how to incorporate more general ground cost functions than the l 2 -metric (Adler & Lunz, 2018;Mallasto et al., 2019;Dukler et al., 2019).
Recent works have studied the convergence of estimates of the Wasserstein distance between two probability distributions, both in the case of continuous (Klein et al., 2017) and finite (Sommerfeld & Munk, 2018; Sommerfeld, 2017) sample spaces. The decay rate of the approximation error of estimating the true distance with minibatches of size N is of order O(N^{-1/d}) for the Wasserstein distances, where d is the dimension of the sample space (Weed & Bach, 2017). Entropic regularized optimal transport has more favorable sample complexity of order O(1/\sqrt{N}) for suitable choices of regularization strength (see Genevay et al. 2019 and also Feydy et al. 2018; Mena & Weed 2019). For this reason, entropic relaxation of the 1-Wasserstein distance is also considered in this work.
Contribution. In this work, we study the efficiency and stability of computing the Wasserstein metric through its dual formulation under different schemes presented in the WGAN literature. We present a detailed discussion on how the different approaches arise and differ from each other qualitatively, and finally measure the differences in performance quantitatively. This is done by quantifying how much the estimates differ from accurately computed ground truth values between subsets of commonly used datasets. Finally, we measure how well the distance is approximated during the training of a generative model. This results in a surprising observation; the method best approximating the Wasserstein distance does not produce the best looking images in the generative setting. | [Summary]
This paper provides an empirical evaluation of commonly used discriminator training strategies in estimating the Wasserstein distance between distributions. The paper finds that methods motivated from optimal transport theory, e.g. c-transform and (c,\eps)-transform, perform better in evaluating the Wasserstein distance than methods commonly used in WGAN practice such as weight clipping and gradient penalty. However, when deployed in WGANs as the discriminator training strategy, these methods do not generate images as high-quality as the gradient penalty method.
[Pros]
The question considered in this paper, i.e. how well does various discriminator strategies (as proxies of the infeasible all 1-Lipschitz discriminator) perform for *evaluating* the Wasserstein distributions, is important for strengthening our understanding of generative models. The main result that methods that are better at computing W may not be better at generating images is interesting, and agrees with the theoretical insight (e.g. Arora et al. 2017) that *non-parametric* minimization of W may not be a good explanation for the generative power of WGANs.
[Cons]
I have concerns about the specific setting of “mini-batch distance” considered in this paper, in that whether it is really a sensible task (and can say anything about computing W) given the curse of dimensionality of estimating W from samples. The paper did not really discuss this issue, and from my own thoughts I don’t think the task avoids this issue.
From my understanding, the “Approximation” experiment does the following:
(1) Set (\mu, \nu) to be a random split of a dataset and consider them to be the populations. Let’s let f_\star denote the (ground truth) optimal discriminator between (\mu, \nu).
(2) Train discriminators (f_wc, f_gp, f_c, f_ceps) (using the different algorithms) from training batches from (\mu, \nu).
(3) Evaluate <f, \mu’_l - \nu’_l> where (\mu’_l, \nu’_l) are fresh test batches from (\mu, \nu) and f is one of the above trained discriminators.
In comparison, the “ground truth” computes W(\mu’_l, \nu’_l) = <f_l, \mu’_l, \nu’_l> from the POT package (though not necessarily through explicitly computing f_l.)
The issue with this is that we expect the method to perform well if f ~= f_l, which can be achievable if all the f_l’s are similar (and hopefully they’re all approximately equal to f_\star.) However I don’t think this is true -- as (\mu’_l, \nu’_l) are samples, and because of the curse of dimensionality, we should expect the f_l’s to be quite different from each other. (Otherwise if they’re really just ~= f_\star, then we can use standard concentration to show the W(\mu’_l, \nu’_l) ~= W(\mu, \nu), which we know is not true from curse of dim.)
Given this, I don’t think the task of comparing <f, \mu’_l, \nu’_l> with the POT results really says anything about their power in computing W. I would be glad though to hear back from the authors to see if my understanding is accurate, and adjust my evaluation from there. |
iclr_2020_BJeTCAEtDB | Feature Map Transform Coding for Energy-Efficient CNN Inference
Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable length coding. Our method outperform previous approach in term of the number of bits per value with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, the reduction of 60% is achieved. A reference implementation accompanies the paper. | A lossy transform coding approach was proposed to reduce the memory bandwidth of edge devices deploying CNNs. For this purpose, the proposed method compresses highly correlated feature maps using variable length coding. In the experimental analyses, the proposed method outperforms some of the previous work in terms of the compression ratio and accuracy for training ResNet-34 and MobileNetV2.
The proposed method and initial results are promising. However, the paper and the work should be improved for a clear acceptance:
- Some parts of the method need to be explained more clearly:
– In the statement "Due to the choice of 1 × 1 × C blocks, the PCA transform essentially becomes a 1 × 1 tensor convolution kernel", what do you mean by "the PCA transform becomes a convolution kernel"? (A small numerical illustration of this identity is given after this list.)
- Could you please further explain how you compute PCA using batch data, how you update online and how you employ that in convolution weights together with BN? Please also explain the following in detail:
(I) “To avoid this, we calculate the covariance matrix layer by layer, gradually applying the quantization.” What is the quantization method you applied, and how did you apply it gradually?
(II) “The PCA matrix is calculated after quantization of the weights is performed, and is itself quantized to 8 bits.” How did you quantize the weights, how did you calculate PCA using quantized weights and how did you quantize them to 8 bits?
- Could you please explain the following settings, more precisely: direct quantization of the activations; quantization of PCA coefficients; direct quantization followed by VLC; and full encoder chain comprising PCA, quantization, and VLC? Please note that there are various methods and algorithms which can be used for these quantization steps. Therefore, please explain your proposed or employed quantization methods more clearly and precisely.
– Please clarify the statement “This projection helps to concentrate most of the data in part of the channels, and then have the ability to write less data that layers.”.
- Did you apply your methods to larger networks such as larger ResNets, VGG-like architectures, Inception, etc.?
- I also suggest you to perform experiments on different smaller and larger datasets, such as Cifar 10/100, face recognition datasets etc., to examine generalization of the proposed methods at least among different datasets. |
iclr_2020_rklnDgHtDS | Motivated by the human's ability to continually learn and gain knowledge over time, several research efforts have been pushing the limits of machines to constantly learn while alleviating catastrophic forgetting (Kirkpatrick et al., 2017b). Most of the existing methods have been focusing on continual learning of label prediction tasks, which have fixed input and output sizes. In this paper, we propose a new scenario of continual learning which handles sequence-to-sequence tasks common in language learning. We further propose an approach to use label prediction continual learning algorithm for sequence-to-sequence continual learning by leveraging compositionality (Chomsky, 1957). Experimental results show that the proposed method has significant improvement over state of the art methods. It enables knowledge transfer and prevents catastrophic forgetting, resulting in more than 85% accuracy up to 100 stages, compared with less than 50% accuracy for baselines in instruction learning task. It also shows significant improvement in a machine translation task. This is the first work to combine continual learning and compositionality for language learning, and we hope this work will make machines more helpful in various tasks. | This paper proposes an approach for continual learning for applications in sequence-to-sequence continual learning (S2S-CL). More specifically, the paper addresses the problem of growing vocabulary. The overall approach is intuitive yet quite effective. The method assumes a split between the syntax (f) and semantics (p) of the sequence -- in other words, each token is associated with two labels. Furthermore, the syntax is assumed to be shared across all the training sequences and the sequences that are not encountered during the initial training. A sequence model (LSTM) is used for learning the syntax f over the initial training and is then frozen for all the downstream tasks (i.e. continual learning). The sequence model predicts the correspondence between the output token and input tokens. Another network (a 1-layer MLP) is used to predict the semantic label of the selected token in the target domain. Barring some notational confusion, I believe the method is very reasonable and should work well. They perform experiments on 2 datasets (Instruction learning & machine translation) in 3 continuous learning paradigms with varying difficulties and demonstrate that the proposed method significantly outperforms the baselines by an impressive margin.
I am leaning towards accepting this paper since I believe the proposed approach can be a promising direction for continual learning in S2S settings and the empirical results are convincing.
However, I have some concerns that might require clarification or additional experiments:
1. The novelty of the proposed method is somewhat limited in my opinion. It seems it is a very simple adaptation of Li et al. 19 and the only addition to that work is the usage of a very simple label prediction scheme by fixing everything else. However, this might be okay since the experiments are quite compelling and the method is applied to a different setting.
2. Notations and language in section 3.2-3.4 are hard to follow:
- At the bottom of page 4, I believe some of the equations are not correct. For example, the first term of line 2 should be P(Y^f | X^f, X^p) rather than Y^p?
- In the second equation of page 6, the second term should be v_j = p’ \cdot a_j
- In Figure 1, k_r, E_r, and W are never defined (although their meaning can be inferred).
- In section 3.4, it reads that a new word embedding is appended to both the semantic and syntactic embedding matrices, but this doesn’t make sense because the syntactic network is already fixed and so shouldn’t be able to handle new symbols; therefore, I believe only a new row is added to the semantic embedding.
3. I believe that f_predict here is parameterized by \theta and that \theta is also frozen during the continual learning phase, which contradicts the claim at the end of section 3.2 that only \phi is frozen. Otherwise, it’s hard to see why f_predict does not suffer from catastrophic forgetting. From the provided source code, it seems that it is indeed the case that only the new embeddings are being updated. In other words, the only thing happening at this stage is that the newly added embeddings are optimized to adapt to the frozen f_predict.
4. Due to point 3, I have some doubts about how scalable this approach is if the other embedding is not allowed to co-adapt. However, perhaps this problem could be alleviated if f_predict has extremely high capacity at initialization? More on this in point 6.
5. Assuming my assessment above is correct, it seems that the performance of the proposed method should *not* decrease at all, so I would like to see more analysis of the decreasing performance in the instruction learning task (Figure 2, left column). Is this due to the fact that f_predict or the other embeddings are not allowed to update, and the newly added embeddings are mapped close to existing ones? In that case, can increasing the model capacity solve this problem? I understand the paper makes an argument about regularization, but I believe this warrants a thorough study for gauging the significance of this approach in more realistic settings.
6. I would like to see a discussion of the shortcomings of the approach and possible ways to overcome them. For example, freezing the syntactic network seems limiting in machine translation settings if a new language, say Italian, is added. Intuitively, knowing how to translate English-French should help in translating English to Italian, but fixing the syntax prevents this. Another example is that prior knowledge about the syntax of the language is needed for labeling the f and p; this can be expensive and cannot handle words that have more than one usage (e.g., run can be used as a noun but also as a verb).
I am willing to increase my score to accept if the revised manuscript can address the majority of, if not all of the concerns listed above.
========================================================================================================
Minor comments that did not affect my decision:
- These sections could greatly benefit from an overall flow chart of how everything fits together
- It says the experiments are repeated with 5 different random seeds so why not add error bars to the plots?
- What characteristics of the 2 tasks cause the baselines to behave so differently? |
iclr_2020_SygLu0VtPH | Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a world model, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn a model of the world without the need for a specific forward-prediction loss. | Review of “Deep Innovation Protection”
This paper builds on top of the World Models line of work, which explores in more detail the role of evolutionary computing (as opposed to the latent-modeling and planning direction, as in Hafner2018) for such model-based architectures. Risi2019 is the first work that is able to train the millions of weights of the architecture proposed in Ha2018 end-to-end, using a modular form of genetic algorithm (GA) to maximize the terminal cumulative reward.
This work expands on Risi2019 and proposes the use of multi-objective optimization (NSGA-II) to allow a world model to learn several subsystems, by virtue of the fact that the GA optimizer is not only targeting the reward function of the environment but is also optimizing for diversity, similar to the MAP-Elites line of literature (i.e. Cully2015, Mouret2015, and other more recent works cited in this paper).
They demonstrate that end-to-end multi-objective optimization is not only able to solve both the CarRacing and VizDoom tasks, but also that the learned world models have interesting features, i.e. the emergent world models can predict important survival events even though they were not trained explicitly for forward prediction.
While the paper is interesting, I feel that in its current state it is not at the level of an ICLR conference paper. Below is some feedback I want to give the authors to help them improve the work, so hopefully it can be improved either for this conference or the next. For the record, the score I wanted to give is a 4, but since I can only choose 3 or 6, I will assign a score of 3 for this review (although it should really be a 4). I will break down the feedback into minor and major:
Minor feedback:
- Figure 3 does not indicate which task it is solving. Is it CarRacing or VizDoom?
- What's the motivation for optimizing for both "age" and reward? Why not other multi-objectives?
Medium Feedback:
As the authors are learning world models that can predict some form of a future, can they train agents inside an open-loop environment generated by such a world model (perhaps even with some hacks), and transfer such a policy back to the real env, as done in Ha2018 for VizDoom?
Major feedback:
The contribution of this paper is limited compared to Risi2019, which I feel makes a much larger contribution to this line of work. I'm not convinced that the task which this approach (multi-obj + end-to-end) solves, VizDoom, cannot also be solved using Risi2019. The authors mention this in the intro, but perhaps show some convincing experiments to make sure Risi2019 will definitely fail on VizDoom? I'm not really convinced that an end-to-end algorithm will fail to find a solution for a policy (with or without a world model component in the architecture) for VizDoom Take Cover. Honestly, I don't think VizDoom Take Cover is that hard of a task, and my intuition tells me that the method outlined in Risi2019, if carefully tuned, will be able to solve VizDoom Take Cover (but I am happy to be proven wrong if there is a strong experimental argument suggesting otherwise).
I think for this paper to be convincing, the authors actually need to go beyond the CarRacing and VizDoom tasks and show the real power of multi-objective optimization for solving difficult problems that require model-based architectures. For instance, in Cully2015, the learning of several diverse sub-systems on the Pareto front allowed a robot to still be able to accomplish the walking task when the legs are disabled. I would advise the authors to look for a third task where they can clearly show that their approach can solve problems that current approaches (Ha2018, Risi2019, and possibly even DeepRL) cannot solve. Being able to do this has the potential to make this paper into an important work for the field. I can also recommend trying out some environments in the Animal Olympics (see refs), which are well suited to multi-objective optimization for animal-like survival environments where the input space is pixels.
[Ha2018] Recurrent World Models Facilitate Policy Evolution (NeurIPS2018)
[Risi2019] Deep neuroevolution of recurrent and discrete world models (GECCO2019)
[Hafner2018] Learning latent dynamics for planning from pixels (PlaNet)
[Cully2015] “Robots that can adapt like animals” https://www.nature.com/articles/nature14422
[Mouret2015] Illuminating search spaces by mapping elites
[Animal Olympics] http://animalaiolympics.com/
*** REVISED SCORE ***
Upon looking at the revision and the author response, I decided to increase the score of the paper (for the record: originally a 4, which was rounded down to a 3, to a 5, which is rounded up to a 6).
The authors explained why Risi2019's approach cannot work on VizDoom with sufficient effort, and I buy their explanation. At the same time, I feel this work can benefit from additional, convincing tasks that highlight the full power of multi-objective optimization, for it to warrant a strong accept. Regardless of the final decision, I'm looking forward to seeing the development of this interesting approach.
iclr_2020_S1gwC1StwS | We apply the canonical forms of gradient Morse complexes (barcodes) to explore topology of loss surfaces. We present a new algorithm for calculations of the objective function's barcodes of minima. Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part of the range of values of loss function of neural networks and (2) increase of the neural network's depth brings down the minima's barcodes. This has natural implications for the neural network learning and the ability to generalize. | The paper aims to study the topology of loss surfaces of neural networks using tools from algebraic topology. From what I understood, the idea is to effectively (1) take a grid over the parameters of a function (say a parameters of a neural net), (2) evaluate the function at those points, (3) compute sub-levelset persistent homology and (4) study the resulting barcode (for 0/1-dim features) (i.e., the mentioned "canonical form" invariants). Some experiments are presented on extremely simple toy data.
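To make concrete what I believe this boils down to, the 0-dimensional part of the pipeline can be written down in a few dozen lines (a rough sketch of my understanding, not the authors' algorithm; the toy 2-D loss surface and grid resolution below are made up for illustration):

```python
import numpy as np

def sublevel_0dim_barcode(values, neighbors):
    """0-dim persistence of the sublevel-set filtration of f sampled on a graph.
    values: f at each vertex; neighbors: vertex index -> adjacent vertex indices."""
    parent, birth, bars = {}, {}, []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v

    for v in map(int, np.argsort(values)):  # add vertices by increasing f
        parent[v], birth[v] = v, values[v]
        for u in neighbors[v]:
            if u not in parent:             # u has not entered the sublevel set yet
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if birth[ru] > birth[rv]:       # elder rule: the younger component dies
                ru, rv = rv, ru
            if birth[rv] < values[v]:       # drop zero-persistence pairs
                bars.append((birth[rv], values[v]))
            parent[rv] = ru
    bars += [(birth[r], np.inf) for r in {find(v) for v in parent}]
    return bars

# Toy 2-D "loss surface" with two minima separated by a saddle at height 1.
n, xs = 60, np.linspace(-2, 2, 60)
X, Y = np.meshgrid(xs, xs)
F = (X**2 - 1) ** 2 + Y**2
idx = lambda i, j: i * n + j
nbrs = {idx(i, j): [idx(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < n and 0 <= b < n]
        for i in range(n) for j in range(n)}
bars = sublevel_0dim_barcode(F.ravel(), nbrs)
print(sorted(bars, key=lambda b: b[1] - b[0], reverse=True)[:2])
# roughly [(~0, inf), (~0, ~1)]: one essential class and one bar killed at the saddle
```

Even this toy version makes the scaling concern obvious: the number of grid vertices grows exponentially with the dimension of the parameter space.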
Overall, the paper is very hard to read, as different concepts and terminology appear all over the place without a precise definition (see comments below). Given the problems in the writing of the paper, my assessment is that this idea boils down to computing persistent homology of the sub-levelset filtration of the loss surface sampled at fixed parameter realizations. I do not think that this will be feasible to do, even for small-scale real-world neural networks, simply due to the difficulty of finding a suitable grid, let alone the vast number of function evaluations involved.
The paper is also unclear in many parts. A selection is listed below:
(1) What do you mean by gradient flow? One can define a gradient flow in a linear space X and for a function F: X->R, e.g., as a smooth curve R->X, such that x'(t) = -\nabla F(x(t)); is that what is meant?
(2) What do you mean by "TDA package"? There are many TDA packages these days (maybe the CRAN TDA package?)
(3) "It was tested in dimensions up to 16 ..." What is meant by dimension here? The dimensionality of the parameter space?
(4) The authors talk about the "minima's barcode" - I have no idea what is meant by that either; the barcode is the result of sub-levelset persistent homology of a function -> it is not associated with a single minimum.
(5) Is Theorem 2.3 not just a restatement of a theorem from Barannikov '94? At least the proof in the appendix seems to be.
(6) Right before Theorem 2.3., what does the notation F_sC_* mean? This needs to be introduced somewhere.
From my perspective, the whole story revolves around how to compute persistence barcodes from the sub-levelset filtration of the loss surface, obtained from function values taken on a grid over the parameters. The paper devotes quite some time to the introduction of these concepts, but not in a very clear or understandable manner. The experiments are effectively done on toy data, which is fine, but the paper stops at that point. I do not buy the argument that "it is possible to apply it [the method] to large-scale modern neural networks". Without a clear strategy to extend this, or at least some preliminary "larger"-scale results, the paper does not meet the ICLR threshold. The more theoretical part is too convoluted and, from my perspective, just a restatement of earlier results. |
iclr_2020_H1eY00VFDB | Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures. | The paper proposes a new loss function which adds to the training objective another term that pulls the current parameters of a neural network further away from the parameters at a previous time step.
Intuitively, this aims to push the current parameters further toward the local optimum.
On a variety of benchmarks, optimizing the proposed loss function achieves better results than just optimizing the training loss.
The paper is well written and easy to follow. However, I am not entirely convinced about the intuition of the proposed method and I think further investigation are necessary.
While the method is simple and general, it also seems to be rather heuristic and requires carefully chosen hyperparameters.
Having said that, the empirical evidence shows that the proposed loss function consistently improves performance.
The following details should be addressed further:
- I am a bit confused by the definition of the loss function. In Equation 1 it seems that the term on the left represents the training objective. If that is correct, then the second case of Equation 2 contains the training objective twice?
- F in Section 3 after Equation 2 is not properly defined
- Could it happen that the proposed loss function leads to divergence, for example if the parameter from a previous time step theta^Tp is close to the optimum theta_star?
- What is the motivation for using the L1 norm? How does this choice affect convergence compared to, say, the L2 norm?
- Section 4.1 typo in first paragraph: K instead of \kappa
- Section 4.1: the results would be more convincing if all networks were trained multiple times with different random initializations and Table 1 included the mean and std.
- Why is no warm-up period used for the GAN experiments?
- Section 4.3: why is \kappa increased by 1% for the speech recognition experiments, whereas it is increased by 2% for all other experiments?
- I suggest increasing the line width of all figures, since they are somewhat hard to read in a printed version.
- Why is the momentum set to 0.5 for SGD in the ablation study? Most frameworks use a default value of 0.9.
- I would like to see the effect of the warm-up period on the performance in the ablation study.
- How does the choice of learning rate schedule, such as for example cosine annealing, affect the loss function?
post rebuttal
------------------
I thank the authors for clarifying my questions and providing additional experiments. I think that especially the additional ablation studies and reporting the mean and std of multiple trials make the contribution of the paper more convincing. Hence, I increased my score. |
iclr_2020_BJxVI04YvB | We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees-i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, and on a dynamics model the half-cheetah reinforcement learning problem. | Summary: This paper presents an approach for generating confidence set predictions from deep networks. That is, the smallest set of predictions where the true answer is included in that set. Theory is used to derive an algorithm with PAC-style bounds on the population risk.
Overall Assessment: I like the core idea of this paper---developing region-based prediction algorithms with theoretical backing by taking advantage of a small calibration dataset and simple models of uncertainty. However, there are significant clarity and evaluation issues that must be addressed before the paper is ready for publication. In its current form, I rate the paper between weak reject and strong reject overall.
Strengths:
+ Nice general direction.
+ New algorithm for confidence set prediction with theoretical guarantees.
+ Experiments show favourable performance vs a few ablations.
Weaknesses & Questions:
1. It is unclear to me why the temperature scaling component is needed. It does not actually change the ordering of the probabilities assigned to each class, so from what I can tell all it does is change the optimal T, but not the final performance.
2. The paper presents a bound on the population risk, but the experiments do not include a comparison of the expected worst-case error rates with the empirical error rates achieved on the test set. This should be corroborated.
3. The description for the Model-based reinforcement learning subsection is impossible to follow due to ambiguity and lack of detail overall.
3.1. Some specific confusions about this section: Paper overloads f to be both a deterministic transition function, and also a distribution over possible states. It also switches between using x_{t+1} / x_t and x^\prime / x to mean the same thing. The notation used to describe the multi-step setting also seems to use "t" for two different things: the current time-step, and some arbitrary time-step in the past.
3.2. It is unclear what metric is being used to evaluate the state transition models---L2 distance between what? Given that the predictions are not point estimates, I would expect something that takes uncertainties into account.
5. For both sets of experiments (classification and RL), there are no alternative methods used as points of reference. There is a multitude of other approaches incorporating uncertainty into predictions, as mentioned in Section 1. A trivial baseline that heuristically generates confidence sets by trusting the probabilities produced by the model and building a set out of the classes covering the top 1-\epsilon probability mass should be essential (a short sketch of what I mean is given after this list). This could be improved by applying different calibration approaches to make the probabilities more trustworthy. Model ensembles provide another easy baseline. Such simple baselines are a minimum expectation, before even getting to state-of-the-art alternatives.
6. There seems to be no vanilla regression experiment, only the harder-to-interpret RL experiment. E.g., since we already have a vision context: if you are doing facial age estimation, or interest point tracking, could one make a PAC prediction about the true age region / interest point location?
7. Overall, the paper introduction is missing some explanation of the motivating scenarios where such confidence set predictions are useful. One can perhaps imagine this for vision, but some help connecting the dots to how it could be useful in regression and RL would be welcome.
8. It is claimed that Theorem 1 provides a "better" bound than the one based on the VC dimension. What is meant by better?
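For clarity, the trivial baseline I have in mind in point 5 is just the following (a sketch; the eps value and the toy probability vector are made up):

```python
import numpy as np

def naive_confidence_set(probs, eps=0.05):
    """Smallest set of classes whose (trusted) softmax mass reaches 1 - eps."""
    order = np.argsort(probs)[::-1]         # classes sorted by predicted probability
    k = int(np.searchsorted(np.cumsum(probs[order]), 1.0 - eps)) + 1
    return order[:k]

p = np.array([0.55, 0.25, 0.12, 0.05, 0.03])  # e.g. a calibrated softmax output
print(naive_confidence_set(p, eps=0.1))       # -> [0 1 2], covering 0.92 >= 0.9 mass
```

Comparing the empirical coverage and set sizes of such sets (optionally after recalibration or ensembling) against the proposed method would make the evaluation far more convincing.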
Minor comments:
* There are a couple of small mistakes in the proof of theorem 1. The \tilde{X} and \tilde{Y} in the definition of \tilde{Z}_{val} are in the wrong place. The "sum of k i.i.d. random variables" should be "sum of n i.i.d. random variables".
* In general, the proof is quite hard to follow. At times it was quite unclear how you get from one step to the next, because it relies on something shown several steps earlier, which is not referenced.
* Notation, in general, is a bit of an issue in this paper. See comments above, but also: switching between \theta and T for the parameter used in the confidence set predictor, some confusion between T and \hat{T}.
* It is unclear how the neural network used in model-based RL predicts a PSD covariance matrix.
* \hat{T} is not defined when it is first referenced (Section 3.3).
* Algorithm 1 appears several pages before it is referenced.
-------------- POST REBUTTAL ------------
I modify my score to 6: Weak accept. |
iclr_2020_rJxycxHKDS | We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive to a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously. | In this paper, the authors proposed to address the information asymmetry between domains in unsupervised domain adaptation. Innovatively, they resort to a multiflow network where each domain adaptatively selects its own pipeline. I quite appreciate the idea itself, while there are many essential issues to be addressed first.
Pros:
- The way of tackling the information asymmetry, i.e., untying weights, between domains is novel and interesting.
- The proposed network/framework can be easily extended to the multi-task setting, or multi-source/multi-target domain adaptation.
- The paper is well-written and easy to follow.
Cons:
- The most critical downside of this paper is that its experiments are insufficient to support the whole idea, as we detail next.
Experimental issues:
- Comparison with other state-of-the-art UDA methods (e.g., CDAN) is a must. This paper improves UDA in terms of adaptive parameter sharing, which is completely independent of most UDA contributions (including the DANN you compared against), which improve the distribution alignment between feature representations. Therefore, it is imperative to compare against that line of SOTA methods; otherwise, why should we consider adaptive parameter sharing instead of distribution alignment? At best, the proposed multiflow network combined with a SOTA feature alignment method (e.g., CDAN rather than DANN) should be considered and expected to beat CDAN itself.
- Many ablation studies or hyperparameter sensitivity analyses are missing.
o How do you determine the number of parallel flows, i.e., K? Is it possible that 3 or 4 flows (more than 2) are better, even for UDA between two domains?
o Did you try any other ways of grouping the computational units, and how do different configurations influence the performance?
o Is there a possibility that none of the gates in the final layer is activated? Do you need some constraints?
- Since the authors mentioned the potential of the multi-flow network in adaptation between multiple domains, it is necessary to investigate multi-source or multi-target domain adaptation. Only in this case may the significance of different K values be demonstrated.
- The baseline results in Table 1 are not comparable to some reported papers, and even lower than those reported in other UDA papers. |
iclr_2020_Bkl7bREtDr | In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from natural language processing and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters. | # UPDATE after rebuttal
I have changed my score to 8 to reflect the clarifications in the new draft.
Summary:
This paper presents a family of architectural variants for the recurrent part of an RL controller. The primary claim is that simple components which compute elementwise sum/average/max over the activations seen over time are highly robust to noisy observations (as encountered in many RL environments), as detailed with various empirical and theoretical analyses. By themselves these aggregators are incapable of storing order-dependent information, but by combining an LSTM with an aggregator, pushing half the LSTM activations through the aggregator, and concatenating the other half of the activations with the aggregator output, the resulting output contains both order-dependent and order-independent content.
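To check my understanding of the core block, here is roughly what I believe it computes, using the running average as the aggregator (a sketch only, not the authors' implementation; the layer sizes are arbitrary, and it ignores the straight-through gradient treatment discussed below):

```python
import torch
import torch.nn as nn

class AvgAggregatorBlock(nn.Module):
    """Half of the LSTM output is summarised order-independently (running mean
    over all past steps) and concatenated with the untouched other half."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)

    def forward(self, x):                          # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                        # (batch, time, hidden)
        a, b = h.chunk(2, dim=-1)                  # split the activations in half
        steps = torch.arange(1, x.size(1) + 1, device=x.device).view(1, -1, 1)
        aggregated = a.cumsum(dim=1) / steps       # order-independent summary so far
        return torch.cat([aggregated, b], dim=-1)  # (batch, time, hidden)

out = AvgAggregatorBlock(in_dim=32)(torch.randn(4, 10, 32))
print(out.shape)                                   # torch.Size([4, 10, 256])
```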
The motivation is very clear (many of the most challenging modern video games used as RL environments clearly have noisy observations, and many timesteps for which no new useful information is observed) and the related work is comprehensive. An increase in training speed and stability, apparently without any major caveats, would be of great interest to any practitioner of Deep Reinforcement Learning.
The experiments provided are good, and vary nicely between actual RL runs and theoretical analysis, all of which convinces me that this could well become a standard Deep RL component. I do have a range of questions & requests for clarification (see below) but I believe the experiments as presented, plus some additions before camera ready, will make for a good paper of wide interest.
Decision: Accept. I would give this a 7 if I were able to. The idea is very simple (indeed I find it slightly hard to believe no one has tried this before, but I don't know of any references myself) and the results, particularly Figure 4, are compelling. It's nice to see a very approachable, more mathematical analysis as well; it would be good to see more papers proposing new neural components with this kind of rigour. I look forward to trying this approach myself, post publication.
Discussion:
AMRL-Avg coming out the best in Figure 8 makes a lot of sense to me, as I can see how averaging provides a stable signal of unordered information. One thing that really doesn't make sense to me is why Max and Sum would also be good - obviously their SNR / Jacobians are quite similar, but fundamentally there is a risk of huge values building up in the activations (especially with Sum), which at least has the potential to cause numerical instability and weird gradient magnitudes (which then mess up moment-tracking optimizers like Adam, etc.). There does not seem to be any mention of numerical stability, or whether any considerations need to be taken for this. Maybe we could hope that 'well behaved' internal neuron activations are zero mean, so the average aggregator will never get too big - but is this always the case, at all points in training, in every episode? I appreciate the straight through estimator might ameliorate this, but it is not made entirely clear to me in the text that this is the reason for using it. Addressing this point would increase the strength of the argument.
Given that DNC was determined to be the strongest baseline on a few of the metrics, and that AMRL combines these aggregators with an LSTM, an experiment that I'm surprised is missing would be an AMRL variant containing a DNC instead of an LSTM. Is there a reason why this wasn't attempted?
It would have been good to see a wider set of RL experiments - the Atari suite is well studied, easily available, and there are many open source implementations into which AMRL could easily be slotted (e.g. IMPALA, OpenAI Baselines, etc.).
The action space for Minecraft is not spelled out clearly - at first I assumed this was the recent project Malmo release, which I assume to have continuous actions (or at least, the ability to move in directions that are not compass points), but while Malmo is mentioned in the paper, the appendix implies that the action space is (north, east, west) etc. I'm aware that there is precedent in the literature (Oh et al 2016) for 'gridworld minecraft', but I think it would improve the paper to at least acknowledge this in the main text, as I feel even most researchers in 2019 would read the text and assume the game to be analogous to VizDoom / DeepMind Lab, when it really isn't. Note that most likely the proposed method would give an even bigger boost with "real" minecraft, as there are even more non-informative observations between the interesting ones, and furthermore I think the environment choice made in this paper is fine, as it still demonstrates the point. A single additional sentence should suffice.
Minor points:
NTM / DNC were not "designed with RL in mind" per se: the original NTM paper had no RL, and the DNC paper contained both supervised and RL results. Potentially the statement at the bottom of page 1 was supposed to refer mainly to stacked LSTMs - either way I feel it would be better to slightly soften the statement to "...but also for stacked LSTMs and DNCs (cite), which have been widely applied in RL."
In the RL problem definition, the observation function and the state transition function are stated as (presumably?) probability distributions with the range [0,1], but for continuous state / observation spaces it is entirely possible for the probability density at a point to exceed 1.
Also in the RL section - the notation of $\tau_t \in \Omega^t$ should probably also contain the sequence of $t-1$ actions taken throughout the trajectory - this is made clear in the text, but not in the set notation.
The square bracket "slicing" notation used for slicing and concatenating is neat, but even though I spend all day writing python it still took me a while to realise what this meant, as I haven't seen this used in typeset maths before. Introducing this notation (as well as the pipe for concatenation) at first usage would help avoid tripping up readers unnecessarily.
Note that this pdf caused several office printers I tried it on to choke on later pages, and I get slow scrolling in Chrome on figures such as figure 11 - possibly the graphics are too high resolution (?)
typos: A5, feed forward section: "later" should be "layer" I believe. |
iclr_2020_HJxV5yHYwB | There ubiquitously exist many single-objective tasks in the real world that are inevitably related to some other objectives and influenced by them. We call such task as the objective-constrained task, which is inherently a multi-objective problem. Due to the conflict among different objectives, a trade-off is needed. A common compromise is to design a scalar reward function through clarifying the relationship among these objectives using the prior knowledge of experts. However, reward engineering is extremely cumbersome. This will result in behaviors that optimize our reward function without actually satisfying our preferences. In this paper, we explicitly cast the objective-constrained task as preference multiobjective reinforcement learning, with the overall goal of finding a Pareto optimal policy. Combined with Trajectory Preference Domination we propose, a weight vector that reflects the agent's preference for each objective can be learned. We analyzed the feasibility of our algorithm in theory, and further proved in experiments its better performance compared to those that design the reward function by experts. | After Responses:
I understand the differences from the relevant literature that the authors pointed out. However, the paper is still lacking comparisons to these relevant methods. The proposed method has not been compared with any of the existing literature, so we have no idea how it stands against the existing approaches. Hence, I believe the empirical study is still significantly lacking, and I will stick to my decision. The main reason is as follows: I believe the idea is interesting, but it needs significant empirical work to be published. I recommend the authors improve the empirical study and re-submit.
-------
The submission proposes a method for multi-objective RL in which the preference over tasks is learned on the fly, together with the policy. The main idea is to convert the multi-objective problem into a single-objective one by scalar weighting. The weights are learned in a structured-learning fashion by enforcing them to approximate the Pareto dominance relations.
The submission is interesting; however, its novelty is not even clear, since the authors did not discuss the majority of the existing related work.
The authors can consult the AAMAS 2018 tutorial "Multi-Objective Planning and Reinforcement Learning" by Whiteson & Roijers for relevant papers. It is also important to note that there are other methods which learn the weighting; optimistic linear support is one such method. Hence, this is not the first such approach. Beyond RL, this problem is also studied extensively in supervised learning; for example, the authors can see "Multi-Task Learning as Multi-Objective Optimization" from NeurIPS 2018.
The manuscript is also very hard to parse and understand. For example, Definition 2 uses but does not define "p" in condition (2). Similarly, Lemma 1 states that something is "far greater" than something else; however, "far greater" is not really defined. I am also puzzled about the relevance of Theorem 1. It is beyond the scope of the manuscript, and also not really new.
The authors suggest a method to solve multi-objective optimization. However, there is no correctness proof: we do not know whether the algorithm would result in a Pareto optimal solution, even asymptotically. Arbitrary weights do not result in Pareto optimality.
Proposing a new toy problem is welcome. However, not providing any experiment beyond the proposed problem is problematic. The authors motivate their method using a DOOM example; why not provide experimental results on a challenging problem like DOOM?
In summary, I definitely appreciate the idea. However, it needs better literature search. Authors should position their paper properly with respect to existing literature. The theory should be revised and extended with convergence to Pareto optimality. Finally, more extensive experiments on existing problems comparing with existing baselines is needed. |
iclr_2020_SyglyANFDr | Distributionally Robust Optimization (DRO) has been proposed as an alternative to Empirical Risk Minimization (ERM) in order to account for potential biases in the training data distribution. However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO in comparison to the wide-spread Stochastic Gradient Descent (SGD) based optimizers for deep learning with ERM. In this work, we propose SGD with hardness weighted sampling, a principled and efficient optimization method for DRO in machine learning that is particularly suited in the context of deep learning. Similar to an online hard example mining strategy in essence and in practice, the proposed algorithm is straightforward to implement and computationally as efficient as SGD-based optimizers used for deep learning. It only requires adding a softmax layer and maintaining a history of the loss values for each training example to compute adaptive sampling probabilities. In contrast to typical ad hoc hard mining approaches, and exploiting recent theoretical results in deep learning optimization, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activation and finite number of layers and parameters. Preliminary results demonstrate the feasibility and usefulness of our approach. | This paper proposes a method for Distributionally Robust Optimization (DRO). DRO has recently been proposed (Namkoong and Duchi 2016 and others) as a robust learning framework compared to Empirical Risk Minimization (ERM). My analysis of this work in short is that the problem that this paper addresses is interesting and not yet solved. This paper proposes a simple and efficient method for DRO. However, convergence guarantees are weak which is reflected in their weak experimental results. And for a 10-page paper, the writing has to improve quite a lot.
Summary of contributions:
The phi-divergence variant of DRO introduces a min-max objective that minimizes a maximally reweighted empirical risk. The main contributions of this work are as follows:
- Show that for two common phi-divergences (KL-divergence and Pearson-chi2 divergence), the inner maximization has a simple solution for the weights that depends on the value of the loss on the training set.
- Use the above algorithm and modify SGD by changing the sampling of data according to the weights (a rough sketch of this sampling scheme is given right after this list).
- Convergence proofs for the above algorithm for wide networks (not necessarily infinite-width).
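To be explicit about what the sampling modification amounts to (a rough sketch of my understanding, not the authors' code; the temperature beta and the convention that higher loss means higher sampling probability are my reading of the KL case):

```python
import numpy as np

def hardness_weighted_batch(loss_history, batch_size, beta=5.0, rng=np.random):
    """Sample minibatch indices with probability softmax(beta * per-example loss)."""
    z = beta * (loss_history - loss_history.max())     # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(loss_history), size=batch_size, replace=False, p=p)

stale_losses = np.abs(np.random.randn(1000))  # loss recorded when each example was last seen
batch = hardness_weighted_batch(stale_losses, batch_size=32)
```

The appeal is that this costs essentially nothing on top of plain SGD; my main concerns below are about the dependence on beta.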
Cons:
- Fig 1: ERM should not be as bad as random prediction. This suggests a problem in the experimental setting. At least more ablation study is needed, e.g. try varying the percentage of data kept and show the final test accuracy as a function of this percentage for ERM. In the worst case, the accuracy for ERM should be ~90% on 9 classes and zero on the 10th, which gives roughly 0.9 * 9/10 ≈ 80% accuracy on a uniform test set. Another point: what is the train accuracy on cats? ERM should be overfitting to those few cats in the training set and achieve some non-zero accuracy at test time, but it seems the test accuracy is almost exactly 0.
- In Theorem 5.1, which provides the convergence proofs, the learning rate eta_exact has a dependence on 1/beta, which means much smaller learning rates should be used for the proposed method. However, in the experiments it seems that the same learning rate is used for all methods. This clearly seems to hurt convergence, as the curves start to fluctuate considerably and increase for larger betas. There are no bounds on beta, which in practice can force us to use orders of magnitude smaller learning rates.
- Section 4.3 aims at linking hard-negative mining to the proposed method. What the authors actually do is propose a new definition of hard-negative mining which is satisfied by the proposed method. There is little basis for suggesting this definition: there is a myriad of works on hard-negative mining, and suggesting a new definition requires a more thorough study of related work.
- Section A.1.1 argues we would stop ERM when it plateaus, but it doesn't look like it has plateaued in fig 2 left.
- Section A.3: I'm not sure lr=1 would be stable for a wide ResNet. Maybe there is a problem with the experimental setting.
- There are so many non-crucial details in the main body that the main text has grown to 10 pages. At least 2 pages could be saved by moving details of the theorems and convergence results to the appendix.
After rebuttal:
I'm keeping my score. The manuscript has been improved quite a lot, but my main concern remains the experiments. Both the text and the mathematical statements still need more editing.
The authors have completely removed the experiments on CIFAR-10. The paper now has only one set of experiments, on MNIST, with little ablation study. I still think my suggested ablation study is interesting. As a minimal sanity check, I'd like to see the performance of DRO on the original MNIST dataset; basically, how much do we lose by using a robust method on a balanced dataset? Even with full ablation studies, I don't think MNIST is enough for testing methods based on hard-negative mining.
iclr_2020_r1xpF0VYDS | We present an efficient quantum algorithm aiming to find the negative curvature direction for escaping the saddle point, which is a critical subroutine for many second-order non-convex optimization algorithms. We prove that our algorithm could produce the target state corresponding to the negative curvature direction with query complexityÕ(polylog(d)! −1 ), where d is the dimension of the optimization function. The quantum negative curvature finding algorithm is exponentially faster than any known classical method which takes time at least O(d! −1/2 ). Moreover, we propose an efficient algorithm to achieve the classical read-out of the target state. Our classical read-out algorithm runs exponentially faster on the degree of d than existing counterparts. | I am not familiar with the quantum algorithm literature, hence I cannot judge the novelty of the proposed algorithm. Basically, this is a quantum algorithm for computing the smallest negative eigenvalue (and the associated eigenvector) of square matrices. It is worth noting that the algorithm is built on top of the existing SVE model. The authors don't compare this method with any existing work in quantum singular value transformation and decomposition. A throughout discussion and comparison should be made in Section 1.1
Despite the title, I can't find a strong connection between the proposed algorithm and machine learning. Given a loss function f and the parameters x, the paper doesn't specify how to construct the quantum state in step 2 of Algorithm 2. Is there a cheap way to construct the quantum state from (f, x)? If not, and if one has to compute the Hessian matrix H from (f, x), then the complexity of this pre-processing step will be on the order of O(d^2), which already makes the quantum speedup meaningless.
Escaping saddle points is usually an iterative process, because the curvature varies from place to place. The paper focuses on an individual step of a single iteration (i.e. finding the negative curvature direction), but it doesn't mention what the iterative process would look like.
Finally, although the quantum complexity is poly-log in dimension d, it has a terrible dependence on the rank r and the Frobenius norm of the Hessian matrix. The potential issues of such dependencies should be discussed. |
iclr_2020_Byg1v1HKDB | Abductive reasoning is inference to the most plausible explanation. For example, if Jenny finds her house in a mess when she returns from work, and remembers that she left a window open, she can hypothesize that a thief broke into her house and caused the mess, as the most plausible explanation. While abduction has long been considered to be at the core of how people interpret and read between the lines in natural language (Hobbs et al., 1988), there has been relatively little research in support of abductive natural language inference and generation. We present the first study that investigates the viability of language-based abductive reasoning. We introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations. Based on this dataset, we conceptualize two new tasks -(i) Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and (ii) Abductive NLG: a conditional generation task for explaining given observations in natural language. On Abductive NLI, the best model achieves 68.9% accuracy, well below human performance of 91.4%. On Abductive NLG, the current best language generators struggle even more, as they lack reasoning capabilities that are trivial for humans. Our analysis leads to new insights into the types of reasoning that deep pre-trained language models fail to perform-despite their strong performance on the related but more narrowly defined task of entailment NLI-pointing to interesting avenues for future research. | This paper introduces two new natural language tasks in the area of commonsense reasoning: natural language abductive inference and natural language abductive generation. The paper also introduces a new dataset, ART, to support training and evaluating models for the introduced tasks. The paper describes the new language abductive tasks, contrasting it to the related, and recently established, natural language inference (entailment) task. They go on to describe the construction of baseline models for these tasks. These models were primarily constructed to diagnose potential unwanted biases in the dataset (e.g., are the tasks partially solved by looking at parts of the input, do existing NLI models far well on the dataset, etc.), demonstrating a significant gap with respect to human performance.
The paper, and the dataset specifically, represent an important contribution to the area of natural language commonsense reasoning. It convincingly demonstrates that the proposed tasks, while highly related to natural language entailment, are not trivially addressed by existing state-of-the-art models. I expect that teaching models to perform well on this task can lead to improvements in other tasks, although empirical evidence of this hypothesis is currently absent from the paper.
Below are a set of more specific observations about the paper. Some of these comments aim to improve the presentation or content of the paper.
1. Section “5.1 - Pre-trained Language Models” and the attendant Table 1 describe results of different baselines on the ART inference task. The results in the table confused me for quite some time; I’d appreciate some clarifications. With respect to the differences between columns 1 (GPT AF) and 2 (ART, also BERT AF), I would like the comparison to be made more clear. As far as I understand it, there are 2 parts of the dataset that can be varied: (1) the train+dev sets and (2) the test set. Furthermore, it seems that it makes sense to vary each of these one at a time, if we are to compare results across variants. For example: we can fix the test set, and vary how we generate training and dev examples. If a model does better with the same test set, we can assume the train+dev examples were better for the model (for whatever reasons, closer distribution to test, harder or more
informative training examples, etc). We can also keep the train+set constant, and vary the test set. This allows us to evaluate which test set is harder with respect to the training examples. The caption of Table 1 implies that both columns are evaluations based on the “ART” test set. If that is correct, then the train+dev set generated from the GPT adversarial examples is of better quality, generating a BERT-ft (fully connected) model that is 3% better. But the overall analysis seems to indicate that this is not what was done in the experiment. Rather, it seems that *both* the train+dev _and_ the test sets were modified concurrently. If that is the case, I would emphasize that the text needs to make this distinction clear. Furthermore, I would say that varying both train and test sets concurrently is sub-optimal, and makes it a bit harder to draw the conclusion that BERT adversarial filtering leads to a stronger overall dataset.
2. Along the lines of the argument in (1), above, I would urge the authors to publish the *entire* set of generated hypotheses (plausible and implausible) instead of only releasing the adversarially filtered pairs. Our group’s experience with training inference models is that it is often beneficial to train using “easy” examples, not only hard examples. I suspect the adversarially filtered set will focus on hard examples only. While this is fine to do in the test set, I think if the full set of annotated/generated hypotheses are released, model designers can experiment with combining pairs of hypothesis in different ways.
3. Furthering the argument of (2): in certain few-shot classification tasks, one is typically asked to identify similarity between one test example and different class representatives. Experience shows that it is often beneficial to train the model on a larger number of distractor classes than what the model is eventually evaluated on (e.g., https://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning). In the alpha-NLI setting, have you experimented with using multiple distractors, instead of only 1, during training (even if you end up evaluating over 2 hypotheses)?
4. One potential argument for introducing a new natural language task is transfer learning: learning to perform a complex natural language task should lead to better natural language models more generally, or for some other related tasks. This paper does not really touch on this aspect of the work. But, potentially, one investigation that could be conducted is a reversal of the paper's existing NLI entailment experiment. The paper shows that NLI entailment models do not perform well on alpha-NLI, but it would be interesting to see if a model trained on alpha-NLI, and fine-tuned or multi-tasked on NLI entailment, does better on NLI entailment (i.e., is there transfer from alpha-NLI to entailment NLI?).
5. Another option is to evaluate whether alpha-NLI helps with other commonsense tasks. One example is the Winograd Schema Challenge, on which current systems also perform well below human performance. It also seems that the Winograd Schema Challenge questions are not too far from abductive inference.
6. In the abstract of the paper, before the paper defines the abductive inference/generation task specifically, the claim that abductive reasoning has had much attention in research seemed awkward. Informally, most commonsense reasoning (including NLI entailment) could be cast as abductive reasoning.
7. On at least one occasion, I found an acronym whose definition was hard to find (“AF”, used in Section “5.1 - Pre-trained Language Models”; I assumed it was “adversarial filtering”).
8. In Section “5.1 - Pre-trained Language Models” it seems that the text quotes an accuracy for BERT-ft (fully connected) of 68.9%, but Table 1 indicates 69.6%.
9. In Section “5.1 - Learning Curve and Dataset Size”, there is a claim that the performance of the model plateaus at ~10,000 instances. This does not seem supported by Figure 5: there appear to be 5-7% (absolute) accuracy improvements from 10k to 100k examples. Maybe the graph needs to be enhanced for legibility?
10. It is great that the paper includes human numbers for both tasks, including all the metrics for generation.
11. Period missing in footnote 8.
12. The analysis section is interesting, it is useful to have in the paper. However, Table 3 is a bit disappointing in that ~26% of the sampled examples fit into one of the categories. It would be great if the authors could comment on the remaining ~74% of the sampled dataset. |
iclr_2020_Sygg3JHtwB | This paper proposes a new approach for step size adaptation in gradient methods. The proposed method called step size optimization (SSO) formulates the step size adaptation as an optimization problem which minimizes the loss function with respect to the step size for the given model parameters and gradients. Then, the step size is optimized based on alternating direction method of multipliers (ADMM). SSO does not require the second-order information or any probabilistic models for adapting the step size, so it is efficient and easy to implement. Furthermore, we also introduce stochastic SSO for stochastic learning environments. In the experiments, we integrated SSO to vanilla SGD and Adam, and they outperformed state-of-the-art adaptive gradient methods including RMSProp, Adam, L 4 -Adam, and AdaBound on extensive benchmark datasets. | This paper proposes a new step size adaptation in first-order gradient methods. The proposed method establishes a new optimization problem with the first-order expansion of loss function and the regularization, where the step size is treated as a variable. ADMM is adopted to solve the optimization problem.
This paper should be rejected because (1) the proposed method does not show a convergence-rate improvement over gradient methods with other step size adaptation schemes; (2) the linearization of the objective function forces the step size to be small ($0<\eta<\epsilon$), which could slow down convergence in some cases; and (3) the experiments generally do not support a significant contribution. In Table 1, the competitors' results are not obtained with their optimal step sizes, and the limited grid-search range cannot verify the empirical superiority of the proposed method.
Minor comments:
The y-axis label of panel (a) in each figure is wrong; I guess it should be "Training loss".
iclr_2020_HygrdpVKvr | Neural Architecture Search (NAS) is an exciting new field which promises to be as much of a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most algorithms are tested on the same datasets, there is no shared experimental protocol followed by all. As such, and due to the under-use of ablation studies, there is a lack of clarity regarding why certain methods are more effective than others. Our first contribution is a benchmark of 8 NAS methods on 5 datasets. To overcome the hurdle of comparing methods with different search spaces, we propose using a method's relative improvement over the randomly sampled average architecture, which effectively removes advantages arising from expertly engineered search spaces or training protocols. Surprisingly, we find that many NAS techniques struggle to significantly beat the average architecture baseline. We perform further experiments with the commonly used DARTS search space in order to understand the contribution of each component in the NAS pipeline. These experiments highlight that: (i) the use of tricks in the evaluation protocol has a predominant impact on the reported performance of architectures; (ii) the cell-based search space has a very narrow accuracy range, such that the seed has a considerable impact on architecture rankings; (iii) the hand-designed macro-structure (cells) is more important than the searched micro-structure (operations); and (iv) the depth-gap is a real phenomenon, evidenced by the change in rankings between 8 and 20 cell architectures. To conclude, we suggest best practices that we hope will prove useful for the community and help mitigate current NAS pitfalls, e.g. difficulties in reproducibility and comparison of search methods. We provide the code used for our experiments at link-to-come. | The paper scrutinizes commonly used evaluation strategies for neural architecture search.
The first contribution is to compare architectures found by 5 different search strategies from the literature against randomly sampled architectures from the same underlying search space.
The paper shows that, across different image classification datasets, most neural architecture search methods do not consistently find solutions that achieve better performance than random architectures.
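For reference, one plausible form of the random-baseline comparison used here is sketched below; the exact normalization in the paper may differ.

```python
def relative_improvement(acc_searched, acc_random_avg):
    """Relative improvement (%) of a searched architecture over the average
    accuracy of architectures sampled uniformly from the same search space.
    Values near zero indicate that the search adds little beyond what the
    hand-designed search space and training protocol already provide."""
    return 100.0 * (acc_searched - acc_random_avg) / acc_random_avg
```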
The second major contribution of the paper is to show that the state-of-the-art performance achieved by the many neural architecture search methods can be largely attributed to advanced regularization tricks.
In general the paper is well written and easy to follow.
The paper sheds a rather grim light on the current state of neural architecture search, but I think it could raise awareness of common pitfalls and help to make future work more rigorous.
While poorly designed search spaces is maybe a problem that many people in the community are aware of, this is, to the best of my knowledge, the first paper that systematically shows that for several commonly used search spaces there is not much to be optimized.
Besides that, the paper shows that, maybe not surprisingly, the training protocol seems to be more important than the actual architecture search, which I also found rather worrisome.
It would be nice if Figure 3 could include another bar that shows the performance of a manually designed architecture, for example a residual network, trained with the same regularization techniques.
A caveat of the paper is that mostly local methods with weight-sharing are considered, instead of more global methods, such as evolutionary algorithms or Bayesian optimization, which have also shown strong performance on neural architecture search problems in the past.
Furthermore, the paper doesn't mention some recent work in neural architecture search that presents more thoroughly designed search spaces, e.g. Ying et al.
It would also be helpful if the paper could elaborate, on how better search spaces could be designed.
Ying, C., Klein, A., Real, E., Christiansen, E., Murphy, K., and Hutter, F. NAS-Bench-101: Towards reproducible neural architecture search. ICML 2019.
Minor comments:
- Section 4.1 How are the hyperparameters of drop path probability, cutout length and auxiliary tower weight chosen?
post rebuttal
------------------
I thank the authors for taking the time to answer my questions and performing the additional experiments. The paper highlights two important issues in the current NAS literature: evaluating methods with different search spaces and non-standardised training protocols. I do think that the paper makes an important contribution, which will hopefully help future work overcome these flaws.
iclr_2020_HkxzNpNtDS | Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to close the gap between seen and unseen environments, we aim at learning a generalizable navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for navigation policy that are invariant among environments, thus generalizing better on unseen environments. Extensive experiments show that our environment-agnostic multitask navigation model significantly reduces the performance gap between seen and unseen environments and outperforms the baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH, establishing the new state of the art for the NDH task. | This paper addresses some challenges of following natural language instructions for navigating in visual environments. The main challenge in such tasks is the scarcity of available training data, which results in generalization problems where the agent has difficulty navigating in unseen environments. Therefore, the authors propose two key ideas to tackle this issue and incorporate them in the reinforced cross-modal matching (RCM) model (Wang et al, 2019). First, they use a generalized multitask learning model to transfer knowledge across two different tasks: Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH). This results in learning features that explain both tasks simultaneously and hence generalize better. Moreover, by training on both tasks the effective size of training data is increased significantly. Second, they propose an environment-agnostic learning technique in order to learn invariant representations that are still efficient for navigation. This prevents overfitting to specific visual features of the environment and therefore helps improving generalization. The contribution of this paper is combining and incorporating these two key ideas in the RCM framework and verifying it on VLN and NDH tasks. This approach is novel compared to prior results in tackling the VLN task. Their experimental results show that the two proposed techniques improve generalization in a complementary fashion, measured by decreased performance gap between seen and unseen environments. They demonstrate that their technique outperforms state-of-the-art methods on unseen data on some evaluation metrics.
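As a point of reference, a common way to implement "environment-agnostic" representation learning is to attach an environment classifier to the shared navigation features and train it adversarially through a gradient-reversal layer; the sketch below illustrates that generic recipe and is not necessarily the exact mechanism used in the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def environment_adversarial_loss(features, env_labels, env_classifier, lam=0.5):
    """Auxiliary loss pushing shared features to be uninformative about which
    training environment they came from (hypothetical helper for illustration)."""
    reversed_features = GradReverse.apply(features, lam)
    logits = env_classifier(reversed_features)          # e.g. an nn.Linear head
    return nn.functional.cross_entropy(logits, env_labels)
```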
Even though the two key ideas proposed in the paper have been explored in the literature in other contexts, this paper contributes to natural language guided navigation by incorporating these ideas in a unified framework and demonstrates promising results on two datasets. Therefore, I would recommend accepting this paper if some issues in delivery and clarity are addressed.
In particular, Section 3, where the authors introduce the novelty of the paper in more detail, could be better explained, and a cleaner line should be drawn between prior results from other papers and novel results proposed by this paper. In general, the new ideas deserve more emphasis, since they are somewhat lost among adaptations from prior work. Most importantly, Eq. (3) is stated without sufficient motivation and requires a more detailed explanation.
In addition to these points, I would like to offer some recommendations that might improve the paper but are not strictly part of my decision. In some cases the notation is not clear and some variables are not defined or explained. For instance, after Eq. (8), Attention(.) is used without citation or definition; in Eq. (9), Wc and Wu are not defined; and some of the notation in Eqs. (6)-(7) is not defined. Moreover, reading the decrease in performance gap from the presented table format is inconvenient, and a better visual representation might help demonstrate the improvement in generalization. Lastly, I noticed that the navigation error for the shared decoder is slightly higher than for separate encoders in Table 3, even though it outperforms the separate encoders on every other measure. Is there a particular explanation for this?
iclr_2020_r1gPoCEKvH | We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. The existing one-shot method (Bender et al., 2018), however, is hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenge in training. Our central idea is to construct a simplified supernet, where all architectures are single paths so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channel, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet. | This paper presents a new one-shot NAS approach. Parameter updating and structure updating are optimized separately. In the process of parameter optimization, different from previous methods, the network samples the structure according to a uniform probability distribution. This sampling method can avoid the coupling caused by optimizing the structural parameters at the same time. After training the supernet, a genetic algorithm is used to search for the structure.
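To make the sampling scheme concrete, a rough sketch of one supernet update with uniform single-path sampling is given below; the supernet(images, arch) interface and the per-layer choice counts are hypothetical stand-ins for the paper's choice blocks, and the channel/quantization search and FLOPs constraints are omitted.

```python
import random
import torch
import torch.nn.functional as F

def supernet_train_step(supernet, batch, optimizer, num_choices_per_layer):
    """One supernet update with uniform single-path sampling (sketch).

    A choice index is drawn independently and uniformly for every choice
    block, so only the weights on the sampled path receive gradients.
    """
    images, labels = batch
    arch = [random.randrange(n) for n in num_choices_per_layer]  # sample a path
    optimizer.zero_grad()
    logits = supernet(images, arch)      # forward only through the sampled path
    loss = F.cross_entropy(logits, labels)
    loss.backward()                      # gradients touch only the sampled path
    optimizer.step()
    return loss.item()
```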
Pros:
+ This paper is well-written and easy to follow.
+ The method proposed in this paper is very efficient in the search stage and saves memory. During the search stage, model constraints can be enforced via the genetic algorithm.
+ This method could directly search architectures on the large scale dataset.
+ The results are promising. Experiments are performed on various structures, including quantization layers.
Still, I have several minor concerns regarding the algorithm and experiments.
1. Why do the authors think that the supernet needs to be trained with single-path sampling all the time? In the architecture search space, under the same computational constraints, some models perform well and some do not. Why are the good models and the worse models sampled with the same probability? Wouldn't this require much more time to optimize the supernet? In the experiment part, the authors compare against a related work, FBNet, which uses the structure parameter \theta and temperature parameter \tau to control the sampling. When the temperature \tau is high, the distribution of the samples degenerates into the single-path sampler, and the accuracy is used to optimize the parameter \theta, which makes \theta tend to select the well-performing models. As the temperature \tau decreases, the probability of sampling well-performing architectures becomes higher than that of the worse-performing models. Therefore, what is the advantage of the single-path sampling method over the \tau-controlled sampler? It seems that gradually pruning the worse-performing parts of the search space would reduce search time. More analysis would be preferred over merely reporting superior experimental results.
2. Is the identity operation in the search space? At the end of page 4, the authors say 'our choice block does not have an identity branch', yet the architecture listed on page 13 includes identity.
3. Is training the supernet for 120 epochs necessary? Does the ranking of network structures become stable after 120 epochs of training? For the subsequent EA-based architecture search, the loss scale is not important, but the sorting (ranking) is. (This is beyond the scope of this paper, but relates to the NAS area.)
4. What is the number of parameters for all the listed models in Tab. 3 & Tab. 4? Listing the parameters makes comparisons easier for follow-up work.
5. What kind of structure do the mixed-precision networks use? The original ResNet module, or the Bi-Real ResNet module [r1]?
6. When recalculating the statistics of all the BN layers, is backpropagation required, or is it just inference without gradient backpropagation?
7. The authors use approximately the complete ImageNet training set to train the supernet. If we directly inherit the parameters from the supernet, can we accelerate the training of the searched architecture? Will the top-1 accuracy be lower than training from scratch?
[r1] Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm
iclr_2020_SylzhkBtDB | We investigate multi-task learning approaches which use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtain a 2.35% GLUE score average improvement on 5 GLUE tasks over BERT LARGE using our alignment method. We also design an SVD-based task reweighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset. | This paper analyzes the principles for successful transfer in the hard-parameter-sharing multi-task learning model. The authors analyze three key factors of multi-task learning on linear and ReLU-activated linear models: model capacity (output dimension after common transformation), task covariance (similarity between tasks) and optimization strategy (influence of the re-weighting algorithm), with theoretical guarantees. Finally, they evaluate their assumptions on state-of-the-art multi-task frameworks (e.g., GLUE, CheXNet), showing the benefits of the proposed algorithm.
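For readers unfamiliar with the setup, a generic sketch of the hard-parameter-sharing architecture under study is shown below (shared module plus one output head per task); this is an illustration of the model class, not the authors' exact implementation, and the symbols B and A_i are my labels for the shared and per-task modules.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard parameter sharing: one shared module B and one head A_i per task.

    In the linear setting this is roughly y_i ~ A_i B x_i; the ReLU on the
    shared output gives the ReLU-activated variant.
    """
    def __init__(self, in_dim, shared_dim, task_out_dims):
        super().__init__()
        self.shared = nn.Linear(in_dim, shared_dim)                 # shared module B
        self.heads = nn.ModuleList(
            [nn.Linear(shared_dim, d) for d in task_out_dims])      # task heads A_i

    def forward(self, x, task_id):
        z = torch.relu(self.shared(x))
        return self.heads[task_id](z)
```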
Main comments:
This paper is highly interesting and strong. The authors systematically analyzed the factors that ensure good multi-task learning. The findings are consistent with previous work, and the paper also brings new theoretical insights (e.g. sufficient conditions to induce a positive transfer in Theorem 2). The proofs are non-trivial and seem technically sound.
Moreover, they validated their theoretical assumptions on large-scale and diverse datasets (e.g., NLP tasks, medical tasks) with state-of-the-art baselines, which verified the correctness of the theory and indicated strong practical implications.
Minor comments:
The main message of the paper is clear but some parts still confuse me:
1. I suggest the authors merge Figure 3 and the Data generation part (page 4) for a better presentation, e.g., which of tasks 3 and 4 does “diff. covariance” refer to? And why do we use different rotation matrices Q_i?
2. In Algorithm 1 (page 5), I suggest the authors use formal equations (like Algorithm 2) instead of descriptive words.
-- Step 2: I have trouble understanding this step.
-- Step 3: how do we jointly minimize over R_1,\dots, R_k, A_1, \dots, A_k? Do we use loss (3) or other losses?
-- I suggest that the authors release the code for a better understanding.
3. For Theorem 2, can we find some “optimal” c to optimize the right-hand side? The term 6c + \frac{1}{1-3c}\frac{\epsilon}{\|X_2\theta_2\|} might be further optimized.
4. In Section 3.3 (Figure 6), for the real neural network, is the model capacity the dimension of Z or simply the dimension before the last fc layer?
5. Some parts in the appendix can be better illustrated:
(a) It is not clear to me how Proposition 4 yields Proposition 1.
(b) Page 15, proof of Fact 8: in the last line, \frac{1}{k^4}\sin(a^{\prime},b^{\prime}) should be \frac{1}{k^4}\sin^{2}(a^{\prime},b^{\prime}).
Overall I think this is good work with interesting findings for multi-task learning. I think it will potentially inspire the community to think more about transfer learning.
iclr_2020_B1e-kxSKDH | When humans observe a physical system, they can easily locate objects, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control in a task with heavily interacting objects. | This paper introduces a structured deep generative model for video frame prediction, with an object recognition model based on the Attend, Infer, Repeat (AIR) model by Eslami et al. (2016) and a graph neural network as a latent dynamics model. The model is evaluated on two synthetic physics simulation datasets (N-body gravitational systems and bouncing billiard balls) for next frame prediction and on a control task in the billiard domain. The model can produce accurate predictions for several time steps into the future and beats a variational RNN and SQAIR (sequential AIR variant) baseline, and is more sample-efficient than a model-free PPO agent in the control task.
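To give a flavour of the dynamics component, a generic pairwise-interaction (graph network) step over per-object latent states is sketched below; it only illustrates the kind of relational dynamics core being discussed, not STOVE's exact architecture or latent structure.

```python
import torch
import torch.nn as nn

class PairwiseDynamics(nn.Module):
    """Generic graph-network step over per-object latent states (sketch)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, state_dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, state_dim))

    def forward(self, states):                    # states: (num_objects, state_dim)
        k = states.shape[0]
        receivers = states.unsqueeze(1).expand(k, k, -1)   # state of object i
        senders = states.unsqueeze(0).expand(k, k, -1)     # state of object j
        messages = self.edge_mlp(torch.cat([receivers, senders], dim=-1)).sum(dim=1)
        # remove the i-i self-interaction term from the summed messages
        messages = messages - self.edge_mlp(torch.cat([states, states], dim=-1))
        # residual update of each object's state from its aggregated interactions
        return states + self.node_mlp(torch.cat([states, messages], dim=-1))
```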
Overall, the paper is well-structured, nicely written and addresses an interesting and challenging problem. The experiments use simple domains/problems, but give good insights into how the model performs.
Related work is covered to a satisfactory degree, but a discussion of some of the following closely related papers could improve the paper:
* Chang et al., A Compositional Object-Based Approach To Learning Physical Dynamics, ICLR 2017
* Greff et al., Neural Expectation Maximization, NeurIPS 2017
* Kipf et al., Neural Relational Inference for Interacting Systems, ICML 2018
* Greff et al., Multi-object representation learning with iterative variational inference, ICML 2019
* Sun et al., Actor-centric relation network, ECCV 2018
* Sun et al., Relational Action Forecasting, CVPR 2019
* Wang et al., NerveNet: Learning structured policy with graph neural networks, ICLR 2018
* Xu et al., Unsupervised discovery of parts, structure and dynamics, ICLR 2019
* Ehrhardt et al., Unsupervised intuitive physics from visual observations, ACCV 2018
In terms of clarity, the paper could be improved by making the model architecture more explicit, e.g., by adding a model figure, and by providing an introduction to the SuPAIR model (Stelzner et al., 2019) — the authors assume that the reader is more or less familiar with this particular model. It is further unclear how exactly the input data is provided to the model: Figure 2 makes it seem that inputs are colored frames, while Section 3.1 mentions that inputs are grayscale videos (do all objects have the same appearance or different shades of gray?), which is in conflict with the statement on page 5 that the model is provided with mean values of input color channels. Please clarify.
In terms of novelty, the proposed modification of SQAIR (separating object detection and latent dynamics prediction) is novel and likely leads to a speed-up in training and evaluation. Using a Graph Neural Network for modeling latent physics is reasonable and has been shown to work on related problems before (see referenced work above and related work mentioned in the paper). Similarly, using such a model for planning/control is interesting and adds to the value of the paper, but has in related settings been explored before (e.g. Wang et al. (ICLR 2018) and Sanchez-Gonzalez (ICML 2018)).
Experimentally, it would be good to provide ablation studies (e.g. a different object detection module like AIR instead of SuPAIR, not splitting the latent variables into position, velocity, size etc.) and run-time comparisons (wall-clock time), as one of the main contributions of the paper is that the proposed model is claimed to be faster than SQAIR. The overall model predictions are (to my surprise) somewhat inaccurate when looking at e.g. the billiard ball example in Figure 2. In van Steenkiste et al. (ICLR 2018), roll-outs appear to be more accurate. Maybe a quantitative experimental comparison could help?
Why does the proposed model perform worse than a model-free PPO baseline when trained to convergence on the control task? What is missing to close this gap?
Do all objects have the same appearance (color/greyscale values) or are they unique in appearance? In the second case, a simpler encoder architecture could be used such as in Jaques et al. (2019) or Xu et al. (ICLR 2019).
Overall, I think that this paper addresses an important issue and is potentially of high interest to the community. Nonetheless I think that this paper needs a bit more work and at this point I recommend a weak reject.
Other comments:
* This sentence is unclear to me: “An additional benefit of this approach is that the information learned by the dynamics model is reused for inference — […]”
* What are the failure modes of the model? Where does it break down?
* How does the model deal with partial occlusion?
---------------------
UPDATE (after reading the author response and the revised manuscript): My questions and comments are addressed and the additional ablation studies and experimental results on energy conservation are convincing and insightful. I think the revised version of the paper meets the bar for acceptance at ICLR. |
iclr_2020_r1xZAkrFPr | Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity-accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. | This paper analyzes ensembling methods in deep learning from the perspective of the loss landscapes. The authors empirically show that popular methods for learning Bayesian neural networks produce samples with limited diversity in the function space compared to modes of the loss found using different random initializations. The paper also considers the low-loss paths connecting independent local optima in the weight-space. The analysis shows that while the values of the loss and accuracy are nearly constant along the paths, the models corresponding to different points on a path define different functions with diverse predictions. The paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher.
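For readers who want a concrete handle on "diversity in the space of predictions", the simplest such measure is the disagreement rate between two ensemble members, sketched below; the paper's precise diversity metric and its normalisation are not reproduced here.

```python
import numpy as np

def prediction_disagreement(probs_a, probs_b):
    """Fraction of test inputs on which two models predict different labels.

    probs_a, probs_b: arrays of shape (num_examples, num_classes) containing
    predicted class probabilities of two ensemble members. Label disagreement
    is one simple proxy for function-space diversity; distances between the
    predicted distributions are another.
    """
    return float(np.mean(np.argmax(probs_a, axis=1) != np.argmax(probs_b, axis=1)))
```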
The paper is well-written. The experiments are described well and the results are presented clearly in highly-detailed and visually-appealing figures. There are occasional statements which are not formulated rigorously enough (see comments below).
The paper presents a thorough experimental study of different ensemble types, their performance, and function space diversity of individual members of an ensemble. In my view, the strongest contribution of the paper is the analysis of the diversity of the predictions for different sampling procedures in comparison to deep ensembles. However, the novelty and the significance of the other contributions are limited (see comments below). Therefore, I consider the paper to be below the acceptance threshold.
Comments and questions to authors:
1) The practical aspects of different ensembling techniques are not discussed in the paper. While it is known that deep ensembles generally demonstrate stronger performance [1], there is a trade-off between the ensemble performance and training time/memory consumption. The considered alternative ensembling procedures can be favorable in specialized settings (e.g. limited training time and/or memory).
2) It remains unclear to me what new insights the analysis of the low-loss connectors provides. It is expected (and in fact can be shown analytically) that if the two modes define different functions then intermediate points on a continuous path define functions which are different from those defined by the end-points of the path. This result was also analyzed before from the perspective of the performance of ensembles formed by the intermediate points on the connecting paths (see Fig. 2, right, in [2]).
Moreover, I would encourage authors to reformulate the statements on the connectivity in the function space such as:
-- “We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions.” (Abstract)
-- “the connectivity in the loss landscape does not imply connectivity in the space of functions” (Discussion)
In my opinion, these claims are somewhat misleading. What does it mean that the modes are disconnected in the function space? Neural networks define continuous functions (w.r.t. both the inputs and the weights), and a connector is a continuous path in the weight space which continuously connects the modes in the function space (i.e. a path defines a homotopy between two functions). It is true that two modes correspond to two different functions. However, it is unclear in which sense these functions can be considered to be disconnected.
[1] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.
[2] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In NeurIPS, 2018. |
iclr_2020_Hkg9HgBYwH | We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal dataset with 10,000+ hours of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines. | This paper presents a technique for encoding the high-level “style” of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global “style embedding”. Additionally, the Music Transformer model is also conditioned on a combination of both “style” and “melody” embeddings to try and generate music “similar” to the conditioning melody but in the style of the performance embedding.
Overall, I think the paper presents an interesting application and parts of it are well written; however, I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited: the performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which could be a major contribution to the music information retrieval (MIR) community. However, it is not clear if this dataset will be publicly released or is only for internal experiments.
In terms of technical presentation, I think the authors should clarify how the model is trained. It took me a couple of passes and reading the Music Transformer paper to realise that in the melody and performance conditioning case, the aim is to generate the full score (melody and accompaniment) while conditioning on the performance style and melody (which is represented using a different vocabulary). This point can be easily clarified in Figure 1, by adding the input to the encoder as input to the decoder for computing the loss. Although I understand the need for anonymity and constraints while referring to unreleased datasets, it would still be useful for the reader/reviewer to have some details of how the melody was extracted and represented. “An internal procedure” is quite mysterious.
Measuring music similarity is a difficult problem, and the topic has been the subject of at least two decades of research. I find the description of the “performance feature” to be lacking in necessary background and detail. Firstly, I am not sure what the final dimensionality of the feature vector is. Is it real-valued? The authors mention (Yang and Lerch, 2018) but use a totally different set of attributes compared to that paper. I also don’t see the connection between this proposed feature vector and using the IMQ kernel for measuring similarity. This connection is not motivated adequately, and after reading (Jitkrittum et al. 2019) it’s not obvious to me why this is the most appropriate metric. Finally, it would be useful if the authors commented on existing methods for measuring music similarity in symbolic music and how their proposed feature fits into existing work. A lot of work has been published on this topic, most recently in the context of Query-by-Humming [1].
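For context on the kernel question: the standard ingredients behind an IMQ-kernel similarity score are sketched below. This shows only the textbook (biased) MMD estimator; how the paper maps its conditioning sample, generated sequence and unconditional sample onto these terms is exactly what point 5 of the minor comments below asks about.

```python
import numpy as np

def imq_kernel(x, y, c=1.0):
    """Inverse multi-quadratic kernel k(x, y) = (c + ||x - y||^2)^(-1/2)."""
    return (c + np.sum((np.asarray(x) - np.asarray(y)) ** 2)) ** -0.5

def mmd2(xs, ys, kernel=imq_kernel):
    """Biased estimate of squared MMD between two sets of feature vectors."""
    k_xx = np.mean([kernel(a, b) for a in xs for b in xs])
    k_yy = np.mean([kernel(a, b) for a in ys for b in ys])
    k_xy = np.mean([kernel(a, b) for a in xs for b in ys])
    return k_xx + k_yy - 2.0 * k_xy
```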
Minor Comments
1. “...which typically incorporate global conditioning as part pf the training procedure” Could you elaborate on this point? Is the global conditioning the samples from the noise distribution?
2. Figure 1 should be clarified or another figure should be added to show how the melody conditioning works. Maybe a comment on the melody vocabulary or a reference would also be useful.
3. The MAESTRO dataset is described in terms of the number of performances, while the internal dataset is described in terms of the number of hours of audio. It’s not possible for the reader to get a sense of the relative sizes of the two datasets and how the results should be interpreted.
4. There should be more background and description in Section 4. Where does the performance feature come from? Why use this feature compared to existing techniques for measuring similarity between symbolic music pieces? Is it computational efficiency? Why not compare the conditioning melody with the generated performance similar to query-by-humming? Where does the IMQ kernel come from? What is the size of the feature vector?
5. In section 5.2, a conditioning sample, a generated sequence and an unconditional sample are used to compute the similarity measure. Which terms do these correspond to in the MMD-like term (x,y,y’)?
6. I like the experiments performed in Section 5.3 with the linear combination of 2 performance embeddings.
[1] A Survey of Query-By-Humming Similarity Methods: http://vlm1.uta.edu/~athitsos/publications/kotsifakos_petra2012.pdf |
iclr_2020_H1efEp4Yvr | Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict behavior of individuals and firms from schooling and job choices to strategic decisions in market competition. These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend on both variables that are observed by the agent and recorded in the data and variables that are only observed by the agent but not recorded in the data. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. We show that in an important class of discrete choice models the value function is globally concave in the policy. That means that simple algorithms that do not require fixed point computation, such as the policy gradient algorithm, globally converge to the optimal policy. This finding can both be used to relax behavioral assumption regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior. In particular, we demonstrate significant computational advantages in using a simple implementation policy gradient algorithm over existing "nested fixed point" algorithms used in Econometrics. | This paper deals with a certain class of models, known as discrete choice models. These models are popular in econometrics, and aim at modelling the complex behavioural patterns of individuals or firms. Entities in these models are typically modelled as rational agents, that behave optimally for reaching their goal of maximizing a certain objective function such as maximizing expected cumulative discounted payoff over a fixed period.
This class of models, modelled as an MDP with choice-specific heterogeneity, is challenging as not all of the payoffs received by the agents are externally observable. One solution in this case is finding a balance condition and a functional fixed point to find the optimal policy (closely related to value function iteration), and this is apparently the key idea behind ‘nested fixed point’ methods used in Econometrics.
The paper proposes an alternative. First, it identifies a subclass of discrete choice models (essentially MDPs with stochastic rewards) where the value function is globally concave in the policy. The consequence of this observation is that a direct method, such as policy gradient, which circumvents explicitly estimating the value function, can (at least in principle) converge to the optimal policy without calculating a fixed point. The authors illustrate computational advantages of this direct approach. Moreover, the generality of the policy gradient method enables the relaxation of extra assumptions regarding the behaviour of agents while facilitating wider applicability/econometric analysis.
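To make the contrast with nested fixed-point methods concrete, a bare-bones REINFORCE-style update with a logit (softmax) policy over discrete choices is sketched below; it is a generic illustration under assumed linear choice-specific features, not the estimator or parameterization used in the paper.

```python
import numpy as np

def softmax_policy(theta, features):
    """pi_theta(a | x): softmax over linear, choice-specific utilities."""
    logits = features @ theta            # features: (num_choices, dim)
    logits = logits - logits.max()
    p = np.exp(logits)
    return p / p.sum()

def reinforce_update(theta, trajectories, lr=0.01, gamma=0.95):
    """One policy-gradient step estimated from sampled trajectories;
    no value-function fixed point is ever computed."""
    grad = np.zeros_like(theta)
    for traj in trajectories:            # traj: list of (features, action, reward)
        rewards = [r for (_, _, r) in traj]
        for t, (x, a, _) in enumerate(traj):
            g_t = sum(gamma ** k * r for k, r in enumerate(rewards[t:]))
            p = softmax_policy(theta, x)
            grad_log_pi = x[a] - p @ x   # gradient of log pi_theta(a | x)
            grad += g_t * grad_log_pi
    return theta + lr * grad / len(trajectories)
```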
The key contribution claimed by the paper is the observation that in the class of dynamic discrete choice models with unobserved heterogeneity, the value function is globally concave in the policy. This enables using computationally efficient policy gradient algorithms with convergence guarantees for this class of problems. The authors also claim that the simplicity of policy gradient makes it a viable model for understanding economic behaviour in econometric analysis, and more broadly for the social sciences.
The paper deals with discrete choice models with unobserved heterogeneity as a special class of MDPs and is relevant to ICLR. However, the writing style is quite technical and terse -- while I appreciate the rigour, the authors spend until page 5 developing the basic material. The notation is also somewhat non-standard in places (alternating between delta and sigma for a policy, and using pi for thresholds of an exponential softmax distribution), which makes it harder to see the additional structure relative to the generic MDPs more familiar in RL. I suspect that a reader more familiar with the relevant econometric, marketing and finance literature could follow the model description more easily.
There are a number of assumptions in the paper, especially Assumptions 2.4 and 2.5, which relate to the monotonicity and ordering of the states. These assumptions seem to be important in the subsequent developments for showing concavity, but they seem to come out of the blue. Unfortunately the authors do not provide any intuition/discussion -- an example problem with these properties would make these assumptions more concrete. I was hoping to find such an example in the empirical application; however, this section does not make the necessary connections with the theoretical development. There are not even references to basic claims
made in the abstract; for example, I am not able to find an illustration of 'significant computational advantages in using a simple implementation policy gradient algorithm over existing "nested fixed point" algorithms used in Econometrics'. The lack of any conclusions also makes it hard for me to appreciate the contributions.
Minor:
Abstract:
.. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. …
Ambiguous sentence: Existing work in Econometrics [...] requires finding a functional fixed point to find the optimal policy.
Theorem 3.1 and elsewhere Freéchet => Fréchet |
iclr_2020_rklnA34twH | Adversarial attacks were shown to be very effective in degrading the performance of neural networks. By slightly modifying the input, an almost identical input is misclassified by the network. To address this problem, we adopt the universal learning framework. In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to use a learner from a given hypothesis class. In our case, the reference learner is using his knowledge on the true test label to perform minor refinements to the adversarial input. This reference learner achieves perfect results on any adversarial input. The proposed strategy is designed to be as close as possible to the reference learner in the worst-case scenario. Specifically, the defense essentially refines the test data according to the different hypotheses, where each hypothesis assumes a different label for the sample. Then by comparing the resulting hypotheses probabilities, we predict the label and detect whether the sample is adversarial or natural. Combining our method with adversarial training we create a robust scheme which can handle adversarial input along with detection of the attack. The resulting scheme is demonstrated empirically. | Summary:
This paper focuses on the area of adversarial defense -- both to improve robustness and to detect adversarially perturbed images. They approach the problem from the universal prediction / universal learning framework.
Motivation:
I had a hard time understanding the motivation of this work -- specifically the connection to the universal learning framework, which, to be fair, I am unfamiliar with. In the absence of the universal learning framework formalism, the method proposed here is quite simple and, in my opinion, clever -- create a new prediction based on performing an adversarial attack towards each target class. What value does universal learning bring to this?
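For clarity, the mechanism as I understand it can be sketched roughly as follows: refine the input towards each candidate label in turn, then renormalise the resulting per-hypothesis confidences. The refinement step below is a targeted FGSM-style nudge and the normalisation is schematic; the paper's exact refinement procedure and scoring may differ.

```python
import torch

def per_label_refinement_prediction(model, x, epsilon, num_classes, steps=1):
    """Sketch of the 'refine towards every candidate label' scheme.

    x is expected to carry a batch dimension of 1 and model(x) to return
    logits of shape (1, num_classes); both are assumptions of this sketch.
    """
    scores = []
    for y in range(num_classes):
        x_ref = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            log_p_y = torch.log_softmax(model(x_ref), dim=-1)[0, y]
            grad, = torch.autograd.grad(log_p_y, x_ref)
            # nudge the input so that label y becomes more probable
            x_ref = (x_ref + epsilon * grad.sign()).clamp(0.0, 1.0)
            x_ref = x_ref.detach().requires_grad_(True)
        with torch.no_grad():
            scores.append(torch.softmax(model(x_ref), dim=-1)[0, y])
    scores = torch.stack(scores)
    return scores / scores.sum()     # normalised per-hypothesis probabilities
```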
Second, I do not follow the intuition for the chosen hypothesis class -- why work off of refined images in the first place? Is there some reason to believe this will improve robustness?
Finally, the view that adversarial defense and attack are important topics to explore is under some debate. I am not considering this as part of my review but I would encourage the authors to look at [1].
Writing:
The writing was clear and typo free.
Experiments:
Overall the experiments seemed inconclusive.
Section 5 shows robustness against the unmodified / unrefined model (the attacks are done on the base model, not the refined model). Given that these attacks are performed against the unmodified model and then evaluated on the modified model, the results seem a bit unfair / harder to interpret. The authors note this, and in Section 6 explore the "Adaptive Adversary" setting.
The experiments are performed on MNIST and CIFAR-10. Overall the results were not convincing to me. Table 1 shows mixed performance -- a drop in natural accuracy in all cases and decreases under FGSM. The main increase in performance is under PGD. This was noted, but understanding in more depth why the method helps here will hopefully lead to improved performance under FGSM as well.
Figure 2a shows very weak correlations. Figure 2b seems promising but also not necessarily a surprise given that the adversarial examples are generated against the base model and not the refined model.
For Section 6, one risk is that the BPDA attack does not actually succeed. Having more evidence that the attacks presented here are strong would greatly improve the work.
Larger-scale experiments would of course be nice and would strengthen the paper, but more importantly it would be great to see some form of toy example or demonstration of the principle improving robustness, rather than just final results -- something that probes the mechanism of action, for example.
Finally, having some comparisons to other defense strategies would improve this paper.
Rating:
Given the gap between the universal learning framework and the proposed method, as well as the inconclusive experiments, at this point I would not recommend the paper for acceptance.
[1] https://arxiv.org/abs/1807.06732 |