Dataset columns: id (string, length 11-20), paper_text (string, length 29-163k), review (string, length 666-24.3k).
iclr_2018_ByKWUeWA-
GANITE: ESTIMATION OF INDIVIDUALIZED TREATMENT EFFECTS USING GENERATIVE ADVERSARIAL NETS Estimating individualized treatment effects (ITE) is a challenging task due to the need for an individual's potential outcomes to be learned from biased data and without having access to the counterfactuals. We propose a novel method for inferring ITE based on the Generative Adversarial Nets (GANs) framework. Our method, termed Generative Adversarial Nets for inference of Individualized Treatment Effects (GANITE), is motivated by the possibility that we can capture the uncertainty in the counterfactual distributions by attempting to learn them using a GAN. We generate proxies of the counterfactual outcomes using a counterfactual generator, G, and then pass these proxies to an ITE generator, I, in order to train it. By modeling both of these using the GAN framework, we are able to infer based on the factual data, while still accounting for the unseen counterfactuals. We test our method on three real-world datasets (with both binary and multiple treatments) and show that GANITE outperforms state-of-the-art methods.
This paper presents GANITE, a Generative Adversarial Network (GAN) approach for estimating Individualized Treatment Effects (ITE). This is achieved by utilising a GAN to impute the `missing` counterfactuals, i.e. the outcomes of the treatments that were not observed in the training (i.e. factual) sample, and then using another GAN to estimate the ITE based on this `complete` dataset. The authors then proceed to combine the two GAN objectives with extra supervised losses to better account for the observed data; the GAN loss for the `G` network has an extra term for the `G` network to better predict the factual outcome `y_f` (which should be easy to do given the fact that y_f is an input to the network), and the GAN loss for the `I` network has an extra term w.r.t. the corresponding performance metric used for evaluation, i.e. PEHE for binary treatment and MSE for multiple treatments. This model is then evaluated in extensive experiments. The paper is reasonably well-written with clear background and diagrams for the overall architecture. The idea is novel and seems to be relatively effective in practice, although I do believe that it has a lot of moving parts and introduces a considerable number of hyperparameters (which are generally problematic to tune in causal inference tasks). Other than that, I have the following questions and remarks: - I might have misunderstood the motivation but the GAN objective for the `G` network is a bit weird; why is it a good idea to push the counterfactual outcomes close to the factual outcomes (which is what the GAN objective is aiming for)? Intuitively, I would expect that different treatments should have different outcomes and the distributions of the factual and counterfactual `y` should differ. - According to which metric did you perform hyper-parameter optimization in all of the experiments? - From the first toy experiment that highlights the importance of each of the losses, it seems that the addition of the supervised loss greatly boosts the performance, compared to just using the GAN objectives. What was the relative weighting of those losses in general? - From what I understand the `I` network is necessary for out-of-sample predictions where you don't have the treatment assignment, but for within-sample prediction you can also use the `G` network. What is the performance gap between the `I` and `G` networks on the within-sample set? Furthermore, have you experimented with constructing `G` in a way that can represent `I` by just zeroing the contribution of `y_f` and `t`? In this way you can tie the parameters and avoid the two-step process (since `G` and `I` represent similar things). - For Figure 2, what were the hyperparameters for CFR? CFR includes a specific knob to account for larger mismatches between the treated and control distributions. Did you do hyper-parameter tuning for all of the methods in this task? - I would also suggest not using "between" when referring to the KL-divergence as it is not a symmetric quantity. Also, it should be pointed out that for IHDP the standard evaluation protocol is 1000 replications (rather than 100), so there might be some discrepancy in the scores due to that.
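Editor's note: to make the two-generator structure discussed in the abstract and review above concrete, here is a minimal sketch of the counterfactual generator G and the ITE generator I. The layer sizes, noise dimension, and use of PyTorch are assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sizes and modules; illustrative only, not the authors' code.
class CounterfactualGenerator(nn.Module):          # "G": imputes the missing outcomes
    def __init__(self, x_dim, k, z_dim=8, h=64):
        super().__init__()
        # input: covariates x, one-hot treatment t, factual outcome y_f (shape (B, 1)), noise z
        self.net = nn.Sequential(nn.Linear(x_dim + k + 1 + z_dim, h), nn.ReLU(),
                                 nn.Linear(h, k))   # proxy outcomes for all k treatments

    def forward(self, x, t_onehot, y_f, z):
        return self.net(torch.cat([x, t_onehot, y_f, z], dim=1))

class ITEGenerator(nn.Module):                      # "I": predicts outcomes from x alone
    def __init__(self, x_dim, k, z_dim=8, h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, h), nn.ReLU(),
                                 nn.Linear(h, k))   # usable at test time (no y_f or t needed)

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))
```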
iclr_2018_SJzmJEq6W
This paper proposes a novel approach for learning discriminative and sparse representations. It consists of utilizing two different models. A predefined number of non-linear transform models are used in the learning stage, and one sparsifying transform model is used at test time. The non-linear transform models have discriminative and minimum information loss priors. A novel measure related to the discriminative prior is proposed and defined on the support intersection of the transform representations. The minimum information loss prior is expressed as a constraint on the conditioning and the expected coherence of the transform matrix. An equivalence between the non-linear models and the sparsifying model is shown only when the measure that is used to define the discriminative prior goes to zero. An approximation of the measure used in the discriminative prior is addressed, connecting it to a similarity concentration. To quantify the discriminative properties of the transform representation, another measure is introduced and its bounds are presented; reflecting the discriminative quality of the transform representation, it is named the discrimination power. To support and validate the theoretical analysis, a practical learning algorithm is presented, and its advantages and potential are evaluated by computer simulation. A favorable performance is shown in terms of execution time, the quality of the representation (measured by the discrimination power) and the recognition accuracy, in comparison with state-of-the-art methods of the same category.
Summary: The paper proposes a model to estimate a non-linear transform of data with labels, trying to increase the discriminative power of the transformation while preserving information. Quality: The quality is potentially good but I misunderstood too many things (see below) to emit a confident judgement. Clarity: Clarity is poor. The paper is overall difficult to follow (at least for me) for different reasons. First, with 17 pages + references + appendix, the length of the paper is way above the « strongly suggested limit of 8 pages ». Second, there are a number of small typos and the citations are not well formatted (use \citep instead of \citet). Third, and more importantly, several concepts are only vaguely formulated (or wrong), and I must say I certainly misunderstood some parts of the manuscript. A few examples of things that should be clarified: - p4: « per class c there exists an unknown nonlinear function […] that separates the data samples from different classes in the transformed domain ». What does « separate » mean here? There exist as many functions as there are classes; do you mean that each function separates all classes? Or that when you apply each class function to the elements of its own class, you get a separation between the classes? (which would not be very useful) - p5: what do you mean exactly by « non-linear thresholding function »? - p5: « The goal of learning a nonlinear transform (2) is to estimate… »: not sure what you mean by « accurate approximation » in this sentence. - p5 equation 3: if I understand correctly, you not only want to estimate a single vector of thresholds for all classes, but also want it to be constant. Is there a reason for the second constraint? - p5: After equation 4, you define different « priors ». The one on z_k seems to be Gaussian; however, in equation 4, z_k seems to be the difference between the input and the output of the nonlinear threshold operator. If this is correct, then the nonlinear threshold operator is just the addition of Gaussian random noise, which is really not a thresholding operator. I suppose I misunderstood something here; some clarification is probably needed. - p5: before equation 5, you define a conditional distribution of \tau_c given y_{c,k}; however, if you define a prior on \tau_c given each point (i.e., each $k$), how do you define the law of \tau_c given all points? - p5 equation 6: I don't understand what the word « approximation » refers to, and how equation 6 is derived. - p6 equation 7: missing exponent in the Gaussian distribution. - p6-7: to combine equations 7-8-9 and obtain 10, I suppose in equation 7 the distributions should be conditioned on A (at least the first one), and in equation 9 I suppose the second line should be removed and the third line should be conditioned on A; otherwise more explanations are needed. - Lemma 1 is just unreadable. Please at least split the equations across several lines. Originality: As far as I can tell, the approach is quite original and the results proved are new. Significance: The method provides a new way to learn discriminative features; as such it is a variant of several existing methods, which could have some impact if clarity is improved and public code is provided.
iclr_2018_HkanP0lRW
The high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen-rich or nitrogen-deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design of neural networks requiring 90% fewer input features with little accuracy degradation.
This paper explores the use of neural networks for classification and segmentation of hyperspectral imaging (HSI) of cells. The basic set-up of the method and results seem correct and will be useful to the specific application described. While the narrow scope might limit this work's significance, my main issue with the paper is that while the authors describe prior art in terms of HSI for biological preps, there is a very rich literature addressing HSI images in other domains: in particular for remote sensing. I think that this work can (1) be a lot clearer as to the novelty and (2) have a much bigger impact if this literature is addressed. In particular, there are a number of methods of supervised and unsupervised feature extraction used for classification purposes (e.g. endmember extraction or dictionary learning). It would be great to know how the features extracted from the neural network compare to these methods, as well as how the classification performance compares to typical methods, such as performing SVM classification in the feature space. The comparison with the random forests is nice, but that is not a standard method. Putting the presented work in context with these other methods would help make the results more general, and hopefully increase the applicability to more general HSI data (thus increasing significance). An additional place where this comparison to the larger HSI literature would be useful is in the section where the authors describe the use of the network weights to isolate sub-bands that are more informative than others. Given the high correlation in many spectra, typically something like random sampling might be sufficient (as in compressive sensing). This type of compression can be applied at the sensor -- a benefit the authors mention for their band-wise sub-sampling. It would be good to acknowledge this prior work and to understand if the features from the network are superior to the random sampling scheme. For these comments, I suggest the authors look at the following papers (and especially the references therein): [1] Li, Chengbo, et al. "A compressive sensing and unmixing scheme for hyperspectral data processing." IEEE Transactions on Image Processing 21.3 (2012): 1200-1210. [2] Bioucas-Dias, José M., et al. "Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 5.2 (2012): 354-379. [3] Charles, Adam S., Bruno A. Olshausen, and Christopher J. Rozell. "Learning sparse codes for hyperspectral imagery." IEEE Journal of Selected Topics in Signal Processing 5.5 (2011): 963-978.
iclr_2018_H1vEXaxA-
Published as a conference paper at ICLR 2018 EMERGENT TRANSLATION IN MULTI-AGENT COMMUNICATION While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting.
Summary: The authors show that, using the visual modality as a pivot, they can train a model to translate from L1 to L2. Please find my detailed comments/questions/suggestions below: 1) IMO, the paper could have been written much better. At the core, this is simply a model which uses images as a pivot for learning to translate between L1 and L2 by learning a common representation space for {L1, image} or {L2, image}. There are several works on such multimodal representation learning, but the authors present their work in a way which makes it look very different from these works. IMO, this leads to unnecessary confusion and does more harm than good. For example, the abstract gives the impression that the authors have designed a game to collect data (and it took me a while to set this confusion aside). 2) Continuing on the above point, this is essentially about learning a common multimodal representation and then decoding from this common representation. However, the authors do not cite enough work on such multimodal representation learning (for example, look at Spandana et al.: Image Pivoting for Learning Multilingual Multimodal Representations, EMNLP 2017, for a good set of references). 3) This omission of related work also weakens the experimental section. At least for the word translation task, many of these common representation learning frameworks could have been easily evaluated. For example, find the nearest German neighbour of the word "dog" in the common representation space. The authors instead compare with very simple baselines. 4) Even when comparing with simple baselines, the proposed model does not convincingly outperform them. In particular, the P@5 and P@20 numbers are only slightly better. 5) Some of the choices made in the experimental setup seem questionable to me: - Why use an NMT model without attention? That is not standard and does not make sense when a better baseline model (with attention) is available. - It is mentioned that "While their model unit-normalizes the output of every encoder, we found this to consistently hurt performance, so do not use normalization for fair comparison with our models." I don't think this is a fair comparison. The authors can mention their results without normalization if that works well for them, but it is not fair to drop normalization from the model of N&N if that gives better performance. Please mention the numbers with unit normalization to give a better picture. It does not make sense to weaken an existing baseline and then compare with it. 6) It would be good to mention the results of the NMT model in Table 1 itself instead of mentioning them separately in a paragraph. This again leads to poor readability and it is hard to read and compare the corresponding numbers from Table 1. I am not sure why this cannot be accommodated in the table itself. 7) In Figure 2, what exactly do you mean by "Results are averaged over 30 translation scenarios"? Can you please elaborate?
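Editor's note: to illustrate the kind of baseline evaluation the review asks for in point 3, here is a sketch of word-level translation by nearest-neighbour search in a shared (visually grounded) embedding space. The embedding matrices and vocabulary names are hypothetical placeholders, not from the paper.

```python
import numpy as np

def translate_by_nearest_neighbour(src_vec, tgt_emb, tgt_vocab, k=5):
    """Return the k target-language words closest (by cosine similarity) to a
    source-word vector in a shared multimodal embedding space.
    tgt_emb: (n_words, d) matrix; tgt_vocab: list of n_words strings. Illustrative only."""
    src = src_vec / (np.linalg.norm(src_vec) + 1e-8)
    tgt = tgt_emb / (np.linalg.norm(tgt_emb, axis=1, keepdims=True) + 1e-8)
    sims = tgt @ src                      # cosine similarity to every target word
    top = np.argsort(-sims)[:k]
    return [(tgt_vocab[i], float(sims[i])) for i in top]
```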
iclr_2018_HJYoqzbC-
Although many second-order methods have been proposed to train neural networks, most of the results were obtained on smaller single-layer fully connected networks, so we still cannot conclude whether they are useful in training deep convolutional networks. In this study, we conduct extensive experiments to answer the question "whether second-order methods are useful for deep learning?". In our analysis, we find that although current second-order methods are too slow to be applied in practice, they can reduce the training loss in fewer iterations compared with SGD. In addition, we have the following interesting findings: (1) When using a large batch size, inexact-Newton methods will converge much faster than SGD. Therefore an inexact-Newton method could be a better choice in distributed training of deep networks. (2) Quasi-Newton methods are competitive with SGD even when using the ReLU activation function (which has no curvature) on residual networks. However, current methods are too sensitive to parameters and not easy to tune for different settings. Therefore, quasi-Newton methods with more self-adjusting mechanisms might be more useful than SGD in training deeper networks.
The paper conducts an empirical study on 2nd-order algorithms for deep learning, in particular on CNNs, to answer the question whether 2nd-order methods are useful for deep learning. More modestly and realistically, the authors compared a stochastic Newton method (SHG) and stochastic quasi-Newton methods (SR1, SQN) with stochastic gradient descent (SGD). The activation function ReLU is known to be singular at 0, which may lead to poor curvature information, but the authors gave a good numerical comparison between the performance of 2nd-order methods with ReLU and the smooth function, Tanh. The paper presents a reasonably good overview of existing 2nd-order methods, with clear numerical examples, and is reasonably well written. The paper presents several interesting empirical findings, which will no doubt lead to follow-up work. However, there are also a few critical issues that may undermine the claims, and that need to be addressed before we can really answer the original question of whether 2nd-order methods are useful for deep learning. 1. There is no complexity comparison, e.g. what is the complexity of a single step of each method? 2. Relatedly, the paper reports the performance over epochs, but it is not clear what "per epoch" means for 2nd-order methods. In particular, it seems to me that they did not count the inner CG iterations, and it is known that this is crucial for running time and important for quality. If so, then the comparison between 1st-order and 2nd-order methods is not fair, or is incomplete. 3. The results show that 2nd-order methods behave similarly to 1st-order methods, which makes me wonder how many CG iterations they used for the 2nd-order methods in their experiments, and also about the details of the data. In particular, are they looking at parameter/hyperparameter settings for which 2nd-order methods aren't really necessary? 4. In the deep learning setting, the training objective is non-convex, which means the Hessian can be non-PSD. It is not clear how the stochastic inexact-Newton method mentioned in Section 2.1 could work. Details on the implementations of 2nd-order methods are important here. 5. For 2nd-order methods, the authors used line search to tune the step size. It is not clear whether, in the line search, the authors used the whole training objective or the batch loss. Assuming the batch loss is used, I suspect the training curve will be very noisy (depending on how large the batch size is). But the paper only shows the average training curves, which might be misleading. Here are other points. 1. There is no figure showing training/test accuracy. Aside from being interested in test error, it is also of interest to see how 2nd-order methods are similar to or different from 1st-order methods on training versus test. 2. Since it is a comparison paper, it only compares three 2nd-order methods with SGD. The choices made were reasonable, but 2nd-order methods are not as trivial to implement as SGD, and it isn't clear whether they have really "spanned the space" of second-order methods. 3. In the paper, the settings of LeNet and AlexNet are different from those in the original papers. The authors did not give a reason. 4. The quality of the figures is not good. 5. The setting of the optimization is not clear, e.g. the learning rate of SGD, the parameters of the backtracking line search. It is hard to reproduce results when these are not described.
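Editor's note: to make point 2 about per-iteration cost concrete, an inexact-Newton step typically solves H p = -g with conjugate gradient, where each CG iteration requires one Hessian-vector product, itself roughly the cost of an extra backward pass. Below is a sketch of a Hessian-vector product via double backprop; the use of PyTorch is an assumption of this illustration, not the paper's implementation.

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute Hv via double backprop. `params` is a list of tensors with
    requires_grad=True and `vec` is a flat tensor with as many entries as there
    are parameters. Each call costs about one extra backward pass, so an
    inexact-Newton step with c CG iterations costs roughly (1 + c) gradient
    evaluations -- the accounting the review asks to be reported."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = torch.dot(flat_grad, vec)
    hv = torch.autograd.grad(grad_dot_v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])
```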
iclr_2018_B1kIr-WRb
Many state-of-the-art word embedding techniques involve factorization of a co-occurrence based matrix. We aim to extend this approach by studying word embedding techniques that involve factorization of co-occurrence based tensors (N-way arrays). We present two new word embedding techniques based on tensor factorization, and show that they outperform multiple commonly used embedding methods when used for various NLP tasks on the same training data. Also, to train one of the embeddings, we present a new joint tensor factorization problem and an approach for solving it. Furthermore, we modify the performance metrics for the Outlier Detection task (Camacho-Collados & Navigli (2016)) to measure the quality of higher-order relationships that a word embedding captures. Our tensor-based methods significantly outperform existing methods at this task when using our new metric. Finally, we demonstrate that vectors in our embeddings can be composed multiplicatively to create different vector representations for each meaning of a polysemous word in a way that cannot be done with other common embeddings. We show that this property stems from the higher order information that the vectors contain, and thus is unique to our tensor based embeddings.
The paper presents a word embedding technique which consists of: (a) construction of a positive (i.e. with truncated negative values) pointwise mutual information order-3 tensor for triples of words in a sentence and (b) symmetric tensor CP factorization of this tensor. The authors propose the CP-S (which stands for symmetric CP decomposition) approach, which tackles such factorization in a "batch" manner by considering small random subsets of the original tensor. They also consider the JCP-S approach, where the ALS (alternating least squares) objective is represented as the joint objective of the matrix and order-3 tensor ALS objectives. The approach is evaluated experimentally on several tasks such as outlier detection, supervised analogy recovery, and sentiment analysis. CLARITY: The paper is very well written and is easy to follow. However, some implementation details are missing, which makes it difficult to assess the quality of the experimental results. QUALITY: I understand that the main emphasis of this work is on developing faster computational algorithms, which would handle large scale problems, for factorizing this tensor. However, I have several concerns about the algorithms proposed in this paper: - First of all, I do not see why using small random subsets of the original tensor would give a desirable factorization. Indeed, a CP decomposition of a tensor cannot be reconstructed from CP decompositions of its subtensors. Note that there is a difference between batch methods in stochastic optimization, where batches are composed of a subset of observations (which then leads to an approximation of desirable quantities, e.g. the gradient, in expectation), and the current approach, where subtensors are considered as batches. I would expect some further elaboration of this question in the paper. Although similar methods have appeared in the tensor literature before, I don't see any theoretical ground for their correctness. - Second, there is a significant difference between the symmetric CP tensor decomposition and the non-negative symmetric CP tensor decomposition. In particular, the latter problem is well posed and has good properties (see, e.g., Lim, Comon. Nonnegative approximations of nonnegative tensors (2009)). However, this is not the case for the former (see, e.g., Comon et al., 2008 as cited in this paper). Therefore, (a) computing the symmetric and not the non-negative symmetric decomposition does not give any good theoretical guarantees (while achieving such guarantees seems to be one of the motivations of this paper) and (b) although the tensor is non-negative, its symmetric factorization is not guaranteed to be non-negative, and further elaboration of this issue seems important to me. - Third, the authors claim that one of their goals is an experimental exploration of tensor factorization approaches with provable guarantees applied to the word embedding problem. This is an important question that has not been addressed in the literature and is clearly a pro of the paper. However, it seems to me that this goal is not fully implemented. Indeed, (a) I mentioned in the previous paragraph the issues with the symmetric CP decomposition and (b) although the paper is motivated by the recent algorithm proposed by Sharan & Valiant (2017), the algorithms proposed in this paper are not based on this or other known algorithms with theoretical guarantees. This is therefore confusing and I would be interested in the authors' point of view on this issue.
- Further, the proposed joint approach, where the second and third order information are combined, requires further analysis. Indeed, in the current formulation the objective is completely dominated by the order-3 tensor factor, because it contributes O(d^3) terms to the objective vs O(d^2) terms contributed by the matrix part. It would be interesting to see further elaboration of the pros and cons of such a problem formulation. - Minor comment. In the shifted PMI section, the authors mention the parameter alpha and set specific values of this parameter based on experiments. However, I don't think that enough information is provided, because, given the authors' approach, the value of this parameter most probably depends on other parameters, such as the batch size. - Finally, although the empirical evaluation is quite extensive and outperforms the state of the art, I think it would be important to compare the proposed algorithm to the other tensor factorization approaches mentioned above. ORIGINALITY: The idea of using a pointwise mutual information tensor for word embeddings is not new, but the authors fairly cite all the relevant literature. My understanding is that the main novelty is the proposed tensor factorization algorithm and the extensive experimental evaluation. However, such batch approaches for tensor factorization are not new and I am quite skeptical about their correctness (see above). The experimental evaluation presents indeed interesting results. However, I think it would also be important to compare to other tensor factorization approaches. I would also be quite interested to see the performance of the proposed algorithm for different values of parameters (such as the batch size). SIGNIFICANCE: I think the paper addresses a very interesting problem and a significant amount of work is done towards the evaluation, but there are some further important questions that should be answered before the paper can be published. To summarize, the following are the pros of the paper: - clarity and good presentation; - good overview of the related literature; - extensive experimental comparison and good experimental results. While the following are the cons: - the mentioned issues with the proposed algorithm, which in particular does not have any theoretical guarantees; - lack of details on how the experimental results were obtained, in particular, lack of details on the values of the free parameters in the proposed algorithm; - lack of comparison to other tensor approaches to the word embedding problem (i.e. other algorithms for the tensor decomposition subproblem); - the novelty of the approach is somewhat limited, although the idea of the extensive experimental comparison is good.
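Editor's note: for reference, a minimal sketch of the kind of objective under discussion: a squared-loss fit of a symmetric rank-R CP model to sampled entries (i, j, k) of the PPMI tensor, updated by gradient descent. This is an illustration of an entry-sampled symmetric CP update, not the authors' CP-S algorithm, and the learning rate and sampling scheme are assumptions.

```python
import numpy as np

def symmetric_cp_entry_update(W, entries, values, lr=0.01):
    """One gradient step of a symmetric CP fit on a sampled set of tensor entries.
    W: (n_words, R) shared factor matrix; entries: list of (i, j, k) index triples;
    values: the corresponding PPMI tensor values. Illustrative only."""
    grad = np.zeros_like(W)
    for (i, j, k), v in zip(entries, values):
        pred = np.sum(W[i] * W[j] * W[k])      # <w_i, w_j, w_k>
        err = pred - v
        grad[i] += err * W[j] * W[k]
        grad[j] += err * W[i] * W[k]
        grad[k] += err * W[i] * W[j]
    W -= lr * grad
    return W
```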
iclr_2018_BkXmYfbAZ
Published as a conference paper at ICLR 2018 BEYOND SHARED HIERARCHIES: DEEP MULTITASK LEARNING THROUGH SOFT LAYER ORDERING Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.
Summary: This paper proposes a different approach to deep multi-task learning using "soft ordering." Multi-task learning encourages the sharing of learned representations across tasks, thus using fewer parameters and transferring useful knowledge across tasks. This enables the reuse of universally learned representations by assembling them in novel ways for new, unseen tasks. The idea of "soft ordering" enforces the notion that there should not be a rigid structure for all the tasks; a soft structure would make the models more generalizable and modular. The paper reviews prior work, which the authors refer to as "parallel ordering", which assumed that subsequences of the feature hierarchy align across tasks and that sharing between tasks occurs only at aligned depths, whereas in this work the authors argue that this should not be the case. The authors then extend the approach to "permuted ordering" and finally present their proposed "soft ordering" approach. The authors argue that their proposed soft ordering approach increases the expressivity of the model while preserving the performance. The "soft ordering" approach enables task-specific selection of layers, each scaled with a learned scaling factor, to be combined in whichever order yields the best performance for each task. The authors evaluate their approach on the MNIST, UCI, Omniglot and CelebA datasets, compare it to "parallel ordering" and "permuted ordering", and show the performance gain. Positives: - The paper is clearly written and easy to follow. - The idea is novel and impactful if it is evaluated properly and consistently. - The authors did a great job summarizing prior work and motivating their approach. Negatives: - Multi-class classification is only one incarnation of multi-task learning; there are other problems where the tasks are different (classification and localization) or auxiliary (depth detection for navigation). The CelebA dataset could have been a good platform for testing different tasks, attribute classification and landmark detection.
(TODO) I would recommend that the authors test their approach in such a setting. - Figure 6 is a bit confusing; the authors do not explain why the "Permuted Order" performs worse than "Parallel Order". Their assumptions and results in this section should be consistent, i.e. soft order > permuted order > parallel order > single task.
(TODO) I would suggest that the authors follow up on this result, which would be beneficial for the reader. - Figures 4(a) and 5(b) show results on validation loss; how about test error, similar to Figure 6(a)? How about results for the CelebA dataset? It could be useful to visualize them as was done for MNIST, Omniglot and UCI.
(TODO) I would suggest that the authors make the results consistent across all datasets and use the same metric so that it is easy to compare. Notation and Typos: - Figure 2 is a bit confusing: how come the accuracy decreases with an increasing number of training samples? Please clarify. 1- If I assume that the Y-axis is incorrectly labeled and it is training error instead, then the permuted order is doing worse than the parallel order.
2- If I assume that the X-axis is incorrectly labeled and the numbering is reversed (starting from the maximum and ending at 0), then I think it would make sense. - Figure 4 is very small and the text is not easy to read. Does "single task" mean average performance over the tasks? - In Eq. (3), choosing \sigma_i for a task-specific permutation of the network is a bit confusing, since it could be mistaken for a sigmoid function; I suggest using a different symbol.
Conclusion: I would suggest that the authors address the concerns mentioned above. Their approach and idea are very interesting and relevant, and addressing these suggestions will make the paper a strong candidate for publication.
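Editor's note: to make the soft-ordering idea summarized in this review concrete, here is a minimal sketch of a forward pass in which, at each depth, a task applies a learned mixture over the same set of shared layers. The layer type, softmax normalization of the mixing weights, and use of PyTorch are assumptions of this sketch, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class SoftOrderedNet(nn.Module):
    """Illustrative soft layer ordering: D shared layers, reused at every depth,
    mixed per task and per depth by learned scaling weights."""
    def __init__(self, dim, n_layers, n_tasks):
        super().__init__()
        self.shared = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
        # s[t, d, l] = weight of shared layer l at depth d for task t
        self.s = nn.Parameter(torch.zeros(n_tasks, n_layers, n_layers))

    def forward(self, x, task):
        for d in range(len(self.shared)):
            w = torch.softmax(self.s[task, d], dim=0)
            x = sum(w[l] * torch.relu(layer(x)) for l, layer in enumerate(self.shared))
        return x
```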
iclr_2018_Hk99zCeAb
PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset.
This paper proposes a number of ideas for improving GANs for image generation, highlighting in particular a curriculum learning strategy to progressively increase the resolution of the generated images, resulting in GAN generators capable of producing samples with unprecedented resolution and visual fidelity. Pros: The paper is well-written and the results speak for themselves! Qualitatively they’re an impressive and significant improvement over previous results from GANs and other generative models. The latent space interpolations shown in the video (especially on CelebA-HQ) demonstrate that the generator can smoothly transition between modes and convince me that it isn’t simply memorizing the training data. (Though I think this issue could be addressed a bit better -- see below.) Though quantification of GAN performance is difficult and rapidly evolving, there is a lot of quantitative analysis all pointing to significant improvements over previous methods. A number of new tricks are proposed, with the ablation study (tab 1 + fig 3) and learning curves (fig 4) giving insight into their effects on performance. Though the field is moving quickly, I expect that several of these tricks will be broadly adopted in future work at least in the short to medium term. The training code and data are released. Cons/Suggestions: It would be nice to see overfitting addressed and quantified in some way. For example, the proposed SWD metric could be recomputed both for the training and for a held-out validation/test set, with the difference between the two scores measuring the degree of overfitting. Similarly, Danihelka et al. [1] show that an independently trained Wasserstein critic (with one critic trained on G samples vs. train samples, and another trained on G samples vs. val samples) can be used to measure overfitting. Another way to go could be to generate a large number of samples and show the nearest neighbor for a few training set samples and for a few val set samples. Doing this in pixel space may not work well especially at the higher resolutions, but maybe a distance function in the space of some high-level hidden layer of a trained discriminator could show good semantic nearest neighbors. The proposed SWD metric is interesting and computationally convenient, but it’s not clear to me that it’s an improvement over previous metrics like the independent Wasserstein critic proposed in [1]. In particular the use of 7x7 patches would seem to limit the metric’s ability to capture the extent to which global structure has been learned, even though the patches are extracted at multiple levels of the Laplacian pyramid. The ablation study (tab 1 + fig 3) leaves me somewhat unsure which tricks contribute the most to the final performance improvement over previous work. Visually, the biggest individual improvement is easily when going from (c) to (d), which adds the “Revised training parameters”, with the improvement from (a) to (b) which adds the highlighted progressive training schedule appearing relatively minor in comparison. However, I realize the former large improvement is due to the arbitrary ordering of the additions in the ablation study, with the small minibatch addition in (c) crippling results on its own. Ablation studies with large numbers of tweaks are always difficult and this one is welcome and useful despite the ambiguity. 
On a related note, it would be nice if there were more details on the “revised training hyperparameters” improvement ((d) in the ablation study) -- which training hyperparameters are adjusted, and how? “LAPGAN” (Denton et al., 2015) should be cited as related work: LAPGAN’s idea of using a separate generator/discriminator at each level of a Laplacian pyramid conditioned on the previous level is quite relevant to the progressive training idea proposed here. Currently the paper is only incorrectly cited as “DCGAN” in a results table -- this should be fixed as well. Overall, this is a well-written paper with striking results and a solid effort to analyze, ablate, and quantify the effect of each of the many new techniques proposed. It’s likely that the paper will have a lot of impact on future GAN work. [1] Danihelka et al., “Comparison of Maximum Likelihood and GAN-based training of Real NVPs” https://arxiv.org/abs/1705.05263
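Editor's note: for readers unfamiliar with the SWD metric discussed in this review, here is a rough sketch of a sliced Wasserstein distance between two sets of descriptors (e.g., flattened 7x7 patches from one level of a Laplacian pyramid). The number of projections and the use of NumPy are assumptions, and the paper's patch extraction and normalization details are omitted.

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=128, seed=0):
    """Approximate sliced Wasserstein distance between two descriptor sets
    a, b of identical shape (n, d), e.g. flattened 7x7 pyramid patches."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=a.shape[1])
        v /= np.linalg.norm(v)
        pa, pb = np.sort(a @ v), np.sort(b @ v)   # 1-D optimal transport = sorting
        total += np.mean(np.abs(pa - pb))
    return total / n_proj
```

As the review notes, computing this on samples from the generator against the training set and against a held-out set would give one possible handle on overfitting.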
iclr_2018_S18Su--CW
Published as a conference paper at ICLR 2018 THERMOMETER ENCODING: ONE HOT WAY TO RESIST ADVERSARIAL EXAMPLES It is well known that it is possible to construct "adversarial examples" for neural networks: inputs which are misclassified by the network yet indistinguishable from true data. We propose a simple modification to standard neural network architectures, thermometer encoding, which significantly increases the robustness of the network to adversarial examples. We demonstrate this robustness with experiments on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that models with thermometer-encoded inputs consistently have higher accuracy on adversarial examples, without decreasing generalization. State-of-the-art accuracy under the strongest known white-box attack was increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10. We explore the properties of these networks, providing evidence that thermometer encodings help neural networks to find more-non-linear decision boundaries.
The authors present an in-depth study of discretizing / quantizing the input as a defense against adversarial examples. The idea is that the threshold effects of discretization make it harder to find adversarial examples that only make small alterations of the image, but also that it introduces more non-linearities, which might increase robustness. In addition, discretization has little negative impact on the performance on clean data. The authors also propose a version of single-step and multi-step attacks against models that use discretized inputs, and present extensive experiments on MNIST, CIFAR-10, CIFAR-100 and SVHN, against standard baselines and, on MNIST and CIFAR-10, against a version of quantization in which the values are represented by a small number of bits. The merit of the paper is that the study is rather comprehensive: a large number of datasets were used, two types of discretization were tried, and the authors propose an attack mechanism that seems reasonable considering the defense they consider. The two main claims of the paper, namely that discretization doesn't hurt performance on natural test examples and that better robustness (in the authors' experimental setup) is achieved through the discretized encoding, are properly backed up by the experiments. Yet, the applicability of the method in practice is still to be demonstrated. The threshold effects might imply that small perturbations of the input (in the l_infty sense) will not have a large effect on the discretized version, but it may also go the other way: an opponent might be able to greatly change the discretized input without drastically changing the input. Figure 8 in the appendix is a bit worrisome on that point, as the performance of the discretized version drops rapidly to 0 when the opponent gets a bit stronger. Did the authors observe the same kind of behavior on other datasets? What would the authors propose to mitigate this issue? To what extent are the good results exhibited in the paper valid over a wide range of opponent strengths? Minor comment: - the experiments on CIFAR-100 in Appendix E are carried out by mixing adversarial / clean examples while training, whereas those on SVHN in Appendix F use adversarial examples only.
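Editor's note: for concreteness, a sketch of thermometer encoding of pixel intensities into k levels, the discretization this review analyses; the level count and threshold placement are assumptions of this illustration.

```python
import numpy as np

def thermometer_encode(x, k=16):
    """Encode values x in [0, 1] into k-level thermometer codes: bit i is 1 iff
    x exceeds the i-th threshold, so nearby intensities share most of their bits."""
    x = np.asarray(x, dtype=float)[..., None]          # append a level axis
    thresholds = (np.arange(k) + 1) / (k + 1)          # k evenly spaced cut points
    return (x > thresholds).astype(np.float32)

# Example: thermometer_encode(0.5, k=4) -> array([1., 1., 0., 0.], dtype=float32)
```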
iclr_2018_rkRwGg-0Z
Published as a conference paper at ICLR 2018 BEYOND WORD IMPORTANCE: CONTEXTUAL DECOMPOSITION TO EXTRACT INTERACTIONS FROM LSTMS The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of a LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done.
In this paper, the authors propose a new method that produces interpretations of an LSTM's predictions by capturing the contributions of the words to the final prediction and the way their learned representations are combined in order to yield that prediction. They propose a new approach that they call Contextual Decomposition (CD). Their approach consists of disambiguating the interactions between the LSTM's gates, where the gates are linearized so that the products between them are over linear sums of contributions from different factors. The hidden and cell states are also decomposed in terms of contributions from the "phrase" in question and contributions from elements outside of the phrase. The motivation for the proposed decomposition using LSTMs is that the latter are powerful at capturing complex non-linear interactions, so it would be useful to observe how these interactions are handled and to interpret the LSTM's predictions. As the authors' intention is to build a way of interpreting LSTM outputs and not to increase the model's accuracy, the empirical results illustrate the ability of their decomposition to give a plausible interpretation to the elements of a sentence. They compare their method with several existing methods by illustrating samples from the Yelp Polarity and SST datasets. They also show the ability to separate the distributions of CD scores related to positive and negative phrases on, respectively, Yelp Polarity and SST. The proposed approach is potentially of great benefit as it is simple and elegant and could lead to new methods in the same direction of research. The sample illustrations, the scatter plots and the CD score distributions are helpful to assess the benefit of the proposed approach. The writing could be improved as it contains parts that lead to confusion. The details related to the linearization (Section 3.2.2) and the training (Section 4.1) could be improved. In equation 25, it is not clear what π_{i}^(-1) and x_{π_{i}} represent, but the example in equation 26 makes it clearer. The section would be clearer if each index and notation used were explained explicitly. (CD is commonly known as Contrastive Divergence in the deep learning community. It would be better if Contextual Decomposition were not referred to as CD.) Training details are given in Section 4.1, where the authors mention the integrated gradients baseline without giving the reference paper for it (however, they do mention the reference paper in each of Tables 1 and 2). It would be clearer for the reader if the references were also mentioned in Section 4.1 where integrated gradients is introduced. Along with the reference, a brief description of that baseline could be given. The "leave one out" baseline is never mentioned in the text before Section 4.4 (and Tables 1 and 2). Neither the reference nor a description of the baseline is given. It would have been clearer to the reader if this had been the case. Overall, the paper's contribution is of great benefit. The quality of the paper could be improved if the above explanations and details were given explicitly in the text.
iclr_2018_ryj38zWRb
Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most applications, GAN models share two aspects in common. On the one hand, GAN training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator function. On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.
Summary: The authors observe that the success of GANs can be attributed to two factors: leveraging the inductive bias of deep CNNs and the adversarial training protocol. In order to disentangle these factors of success, they propose to eliminate the adversarial training protocol while maintaining the first factor. The proposed Generative Latent Optimization (GLO) model maps a learnable noise vector to the real images of the dataset by minimizing a reconstruction loss. The experiments are conducted on the CelebA and LSUN-Bedroom datasets. Strengths: The paper is well written and the topic is relevant for the community. The notation is clear and, as far as I can tell, there are no technical errors. The design choices are well motivated in Chapter 2, which makes the main idea easy to grasp. The image reconstruction results are good. The experiments are conducted on two challenging datasets, i.e. CelebA and LSUN-Bedroom. Weaknesses: A relevant model is the Generative Moment Matching Network (GMMN), which can also be thought of as a "discriminator-less GAN". However, the paper does not contrast GLO with GMMN either at the conceptual level or experimentally. Another relevant model is the Variational Autoencoder (VAE), which also learns the data distribution through a learnable latent representation by minimizing a reconstruction loss. The paper would be more convincing if it provided a comparison with VAE. In general, having no comparisons with other models proposed in the literature as improvements over GAN, such as ACGAN, InfoGAN and WGAN, weakens the experimental section. The evaluation protocol is quite weak: CelebA images are 128x128 while LSUN images are 64x64. Especially since it is common practice nowadays to generate much higher dimensional images, i.e. 256x256, the results presented in this paper appear weak. Although the reconstruction examples (Figures 2 and 3) are good, the image generation results (Figures 4 and 5) are worse than GAN; for instance, the 3rd image in the 2nd row of Figure 4 has unrealistic artifacts, and the Figure 5 results are quite boxy and unrealistic. The authors mention in Section 3.3.2 that they leave the careful modeling of Z to future work; however, the paper is quite incomplete without this. In Section 3.3.4, the authors claim that the latent space that GLO learns is interpretable. For example, smiling seems correlated with the hair color in Figure 6. This is a strong claim based on one example; moreover, the evidence for this claim is not obvious to the reader (based on the figure). Moreover, in Figure 8, the authors claim that the principal components of the GLO latent space are interpretable. However, it is not clear from this figure what each eigenvector generates. The authors' observations on Figures 8 and 9 are not clearly visible through manual inspection. Finally, as a minor note, the paper has some vague statements such as "A linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images" and "Several works attempt at recovering the latent representation of an image with respect to a generator." Therefore, careful proofreading would improve the exposition.
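Editor's note: a minimal sketch of the GLO training loop being evaluated here: per-image latent codes are free parameters optimized jointly with the generator under a reconstruction loss, with no discriminator. The generator architecture, MSE loss (the paper uses a Laplacian-pyramid loss), optimizer, and projection onto the unit ball are simplifying assumptions of this sketch.

```python
import torch
import torch.nn as nn

def train_glo(images, z_dim=64, epochs=10, lr=1e-3):
    """Illustrative GLO loop: learn one latent code per training image and a
    generator, by minimizing a reconstruction loss (no discriminator).
    `images` is a (n, d) tensor of flattened images."""
    n, d = images.shape
    z = nn.Parameter(torch.randn(n, z_dim) * 0.01)        # learnable code per image
    gen = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, d))
    opt = torch.optim.Adam([z] + list(gen.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(gen(z), images)     # placeholder reconstruction loss
        loss.backward()
        opt.step()
        with torch.no_grad():                             # keep codes inside the unit ball
            z.data /= torch.clamp(z.data.norm(dim=1, keepdim=True), min=1.0)
    return gen, z
```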
iclr_2018_SybqeKgA-
Mini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use a noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of the noisy gradient. However, its batch size is a fixed hyperparameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend BA-SGD to the momentum algorithm, and evaluate both BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks, from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods and methods that manually increase the batch size, after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, the batch size boom. The code implementing the batch adaptive framework is now open source and applicable to any gradient-based optimization problem.
The paper proposes a generalization of an algorithm by Yin et al. (2017), which performs SGD with adaptive batch sizes. The present paper generalizes the algorithm to SGD with momentum. Since the original algorithm was already formulated with a general utility function, the proposed algorithm is similar in structure but replaces the utility function so that it takes momentum into account. Experiments on an image classification task show improvements in the training loss. However, no test accuracies are reported and the learning curves have suspicious artifacts, see below. Experiments on a relation extraction task show little improvement over SGD with momentum and a constant batch size. COMMENTS: The paper discusses a relevant issue. While adaptive learning algorithms are popular in deep learning, most algorithms adapt the learning rate or the momentum coefficient, but not the batch size. It appears to me that the main idea and the overall structure of the proposed algorithm are the same as in the one published by Yin et al. (2017), and that only few changes were necessary to include momentum. Given the incremental process, I find the presentation unnecessarily involved, and the experiments not convincing enough. Concerning the presentation, the paper dedicates two full pages to a review of the algorithm by Yin et al. (2017). The first page of this review states that, for large enough batch sizes, the change of the objective function in SGD is normally distributed with a variance that is inversely proportional to the batch size. It seems to me that this is a direct consequence of the central limit theorem. The derivation, however, is quite technical and introduces some quantities that are never used (e.g., $\vec{\xi}_j$ is never used individually, only the combined term $\epsilon_t$ defined below Eq. 12 is). The second page of the review seems to discuss the main part of the algorithm, but I could not follow it. First, a "state" $s_t$ (also written as $S$) is introduced, which, according to the text, is "the objective value", which was earlier denoted by $F$. Nevertheless, the change of $s_t$, Eq. 5, appears to obey a different probability distribution than the change of $F$. The paper provides a verbal explanation for this discrepancy, saying that it is possible that $S$ is first reduced to the minimum $S^*$ of the objective and then increased again. However, in my understanding, the minimum of the objective is only realized at a singular point in parameter space. Crossing this point in an update step should have zero probability as long as the model has more than one parameter. The explanation also does not make it clear why the argument should apply to $S$ (or $s$) but not to $F$. Page 5 provides pseudocode for the proposed algorithm. However, I couldn't find an explanation of the code. The code suggests that, for each update step, one gradually increases the batch size until it becomes larger than or equal to a running estimate of the optimal batch size. While this may be a plausible strategy in practice, it seems to have a bias that is not addressed in the paper: the algorithm recalculates a noisy estimate of the optimal batch size after each increase of the batch size, and it terminates as soon as the noisy estimate happens to be small enough, resulting in a bias towards a smaller than optimal batch size. A probably more important issue is that the algorithm is sequential and hard to parallelize, whereas parallelization is usually the main motivation to use larger batch sizes.
As the gradient noise scales inversely with the batch size, I don't see why increasing the batch size should be preferred over decreasing the learning rate unless optimizations with a larger batch size can be parallelized. The experiments don't compare the two alternatives. Concerning the experiments, it seems peculiar that the learning curves in Figure 1 remain at a constant value for a long time at the beginning of the optimization before they begin to drop. Do the authors understand this behavior? It could indicate that the magnitude of the random initialization was chosen too small, i.e., the parameters might have been initialized too close to zero, where the loss is stationary due to symmetries. Also, absolute values of the training loss can be deceptive since there is often no natural scale. A better indicator of convergence would be the test accuracy. The identification of the "batch size boom" is interesting.
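Editor's note: to illustrate the quantity that the review's "gradient noise scales inversely with the batch size" argument rests on, here is a sketch of estimating the noise variance of a batch-averaged gradient from per-sample gradients. This is a generic illustration, not the paper's utility-based criterion for choosing the batch size.

```python
import numpy as np

def gradient_noise_estimate(per_sample_grads):
    """per_sample_grads: (m, p) array of per-sample gradients for one batch.
    Returns the mean gradient and an estimate of the noise variance of the
    batch-averaged gradient, which scales roughly as sigma^2 / m."""
    m = per_sample_grads.shape[0]
    g_mean = per_sample_grads.mean(axis=0)
    sample_var = per_sample_grads.var(axis=0, ddof=1).sum()   # tr(Sigma) estimate
    return g_mean, sample_var / m
```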
iclr_2018_Sk6fD5yCb
ESPRESSO: EFFICIENT FORWARD PROPAGATION FOR BINARY DEEP NEURAL NETWORKS There are many application scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espresso is a compact, yet powerful library written in C/CUDA that features all the functionalities required for the forward propagation of CNNs, in a binary file less than 400KB, without any external dependencies. Although it is mainly designed to take advantage of massive GPU parallelism, Espresso also provides an equivalent CPU implementation for CNNs. Espresso provides special convolutional and dense layers for BCNNs, leveraging bit-packing and bitwise computations for efficient execution. These techniques provide a speed-up of matrix-multiplication routines, and at the same time, reduce memory usage when storing parameters and activations. We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (≈ 2 orders of magnitude). Espresso is released under the Apache 2.0 license and is available at http://github.com/fpeder/espresso.
The paper presents a library written in C/CUDA that features all the functionalities required for the forward propagation of BCNNs. The library is significantly faster than existing implementations of optimized binary neural networks (≈ 2 orders of magnitude), and will be released on github. BCNNs have been able to perform well on large-scale datasets with increased speed and decreased energy consumption, and implementing efficient kernels for them can be very useful for mobile applications. The paper describes three implementations, CPU, GPU and GPU_opt, but it is not entirely clear what the differences are and why GPU_opt is faster than the GPU implementation. Are BDNN and BCNN used to mean the same concept? If yes, could you please use only one of them? The subsection title “Training Espresso” should be changed to “Converting a network to Espresso”, or “Training a network for Espresso”. What is the main difference between the GPU and GPU_opt implementations? The unrolling and lifting operations are shown in Figure 2. Isn't accelerating convolution by this method a very well-known one, implemented in many deep learning frameworks for both CPU and GPU? What is the main contribution that makes the framework here faster than the other compared work? In Figure 1, Espresso implementations are compared with other implementations in (a) dense binary matrix multiplication and (b) BMLP, but not (c) BCNN. Can the others (BinaryNet (Hubara et al., 2016) or Intel Nervana/neon (NervanaSystems)) run CNNs? In "6.2 MULTI-LAYER PERCEPTRON ON MNIST – FIGURE 1B AND FIGURE 1E", should it be Figure 1d instead of 1e? All in all, the novelty in this paper is not very clear to me. Is it bit-packing? UPDATE: Thank you for the revision and clarifications. I increase my rating to 6.
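On the bit-packing question: here is a minimal NumPy sketch (my own illustration, not Espresso's CUDA kernels) of the XNOR/popcount-style trick that bit-packed binary layers rely on, where a dot product of two {-1,+1} vectors reduces to counting bit disagreements between their packed representations.

    import numpy as np

    def pack_signs(v):
        # pack a {-1,+1} vector into a uint8 bit array, 1 bit per element
        return np.packbits((np.asarray(v) > 0).astype(np.uint8))

    def binary_dot(a_packed, b_packed, n):
        # dot = (#agreements) - (#disagreements) = n - 2 * (#disagreements);
        # a real kernel would apply a hardware popcount to the XOR-ed words
        disagreements = int(np.unpackbits(a_packed ^ b_packed)[:n].sum())
        return n - 2 * disagreements

    a = np.random.choice([-1, 1], size=100)
    b = np.random.choice([-1, 1], size=100)
    assert binary_dot(pack_signs(a), pack_signs(b), len(a)) == int(a @ b)

Packing many weights or activations per machine word is what yields both the memory savings and the matrix-multiplication speed-up reported in the paper, so being explicit about where the claimed 2-orders-of-magnitude gain comes from would help.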
iclr_2018_H1pri9vTZ
In this paper we propose a generalization of deep neural networks called deep function machines (DFMs). DFMs act on vector spaces of arbitrary (possibly infinite) dimension and we show that a family of DFMs are invariant to the dimension of input data; that is, the parameterization of the model does not directly hinge on the quality of the input (eg. high resolution images). Using this generalization we provide a new theory of universal approximation of bounded non-linear operators between function spaces. We then suggest that DFMs provide an expressive framework for designing new neural network layer types with topological considerations in mind. Finally, we introduce a novel architecture, RippLeNet, for resolution invariant computer vision, which empirically achieves state of the art invariance.
This paper deals with the problem of learning nonlinear operators using deep learning. Specifically, the authors propose to extend deep neural networks to the case where hidden layers can be infinite-dimensional. They give results on the quality of the approximation using these operator networks, and show how to build neural network layers that are able to take into account topological information from data. Experiments on MNIST using the proposed deep function machines (DFM) are provided. The paper attempts to make progress in the region between deep learning and functional data analysis (FDA). This is interesting. Unfortunately, the paper requires significant improvements, both in terms of substance and in terms of presentation. My main concerns are the following: 1) One motivation of DFM is that in many applications data is a discretization of a continuous process and can therefore be represented by a function. FDA is the research field that formulated the ideas about the statistical data analysis of data samples consisting of continuous functions, where each function is viewed as one sample element. This paper fails to consider properly the work in its FDA context. Operator learning has already been studied in FDA. See, e.g., the problem of functional regression with functional responses. Indeed the functional model considered in the linear case is very similar to Eq. 2.5 or Eq. 3.2. Moreover, extensions to nonparametric/nonlinear situations were also studied. The authors should add more information about previous work on this topic so that their results can be understood with respect to previous studies. 2) The computational aspects of DFM are not clear in the paper. From a practical computational perspective, the algorithm will be implemented on a machine which processes finite representations of data. The paper does not clearly provide information about how the functional nature and the infinite dimensionality can be handled in practice. In FDA, generally this is achieved via basis function approximations. 3) Some parts of the paper are hard to read. Sections 3 and 4 are not easy to understand. Maybe adding a section about the notation and developing the intuition more would improve the reading of the manuscript. 4) The experimental section can be significantly improved. It would be interesting to compare DFM more thoroughly with its discrete counterpart. Also, other FDA approaches for operator learning should be discussed and compared to the proposed approach.
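As a purely generic illustration of comment 2 (not tied to the paper's DFM implementation): the standard FDA recipe is to replace each observed curve by its coefficients in a finite basis, after which an operator between function spaces becomes an ordinary map between coefficient vectors. A minimal NumPy sketch:

    import numpy as np

    def fourier_design(t, k):
        # first k Fourier basis functions evaluated at sample locations t in [0, 1]
        cols = [np.ones_like(t)]
        for j in range(1, k // 2 + 1):
            cols += [np.sin(2 * np.pi * j * t), np.cos(2 * np.pi * j * t)]
        return np.stack(cols[:k], axis=1)

    def to_coefficients(y, t, k=15):
        # least-squares projection of a discretized curve onto the basis
        coef, *_ = np.linalg.lstsq(fourier_design(t, k), y, rcond=None)
        return coef

    t = np.linspace(0.0, 1.0, 200)
    y = np.exp(-t) * np.sin(6 * t)        # one discretized functional observation
    c = to_coefficients(y, t)             # its finite-dimensional representation
    y_hat = fourier_design(t, 15) @ c     # reconstruction from 15 coefficients

Explaining whether DFM relies on such a basis truncation, on a fixed quadrature grid, or on something else would address the computational concern raised above.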
iclr_2018_rkvDssyRb
We consider tackling a single-agent RL problem by distributing it to n learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the egocentric planning overestimates values of states where the other advisors disagree, and the agnostic planning is inefficient around danger zones. We introduce a novel approach called empathic and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task.
Summary The paper is well-written but does not make deep technical contributions and does not present a comprehensive evaluation or highly insightful empirical results. Abstract / Intro I get the entire focus of the paper is some variant of Pac-Man which has received attention in the RL literature for Atari games, but for the most part the impressive advances of previous Atari/RL papers are in the setting that the raw video is provided as input, which is much different than solving the underlying clean mathematically abstracted problem (as a grid world with obstacles) as done here and evident in the videos. Further it is honestly hard for me to be strongly motivated about a paper that focuses on the need to decompose Pac-man into sub-agents/advisor value functions. Section 2 Another historically well-cited paper for MDP decomposition: Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems, Ronald Parr. UAI 98. https://dslpitt.org/uai/papers/98/p422-parr.pdf Section 3 Is the additive reward decomposition a required part of the problem specification? It seems so, i.e., there is no obvious method for automatically decomposing a monolithic reward function over advisors. Section 4 * Egocentric: Definition 1: Sure, the problem will have local optima (attractors) when decomposed suboptimally -- I'm not sure what new insight we've gained from this analysis... it is a general problem with any function approximation scheme that does not guarantee that the rank ordering of actions for a state is preserved. * Agnostic Other than approximating some type of myopic rollout, I really don't see why this approach would be reasonable? I am surprised it works at all though my guess is that this could simply be an artifact of evaluating on a single domain with a specific structure. * Empathic This appears to be the key contribution though related work certainly infringes on its novelty. Is this paper then an empirical evaluation of previous methods in a single Pac-man grid world variant? I wonder if the theory of DEC-MDPs would have any relevance for novel analysis here? Section 5 I'm disappointed that the authors only evaluate on a single domain; presumably the empathic approach has applications beyond Pac-Man? The fact that empathic generally performs better is not at all surprising. The fact that a modified discount factor for egocentric can also perform well is not surprising given that lower discount factors have often been shown to improve approximated MDP solutions, e.g., Biasing Approximate Dynamic Programming with a Lower Discount Factor Marek Petrik, Bruno Scherrer (NIPS-08). http://marek.petrik.us/pub/Petrik2009a.pdf *** Side note: The following part is somewhat orthogonal to the review above in that I would not expect the authors to address this on revision, *but* at the same time I think it provides a connection to the special case of concurrent action decomposition into advisors, which could potentially provide a high impact direction of application for this work (i.e., concurrent problems are hard and show up in numerous operations research problems covering inventory control, logistics, epidemic response). 
For the special case that each advisor is assigned to one action in a factored space of concurrent actions, the egocentric algorithm would be very close to the Hindsight approximation in Section 6 of this paper (including an additive decomposition of rewards): Planning in Factored Action Spaces with Symbolic Dynamic Programming Aswin Nadamuni Raghavan, Alan Fern, Prasad Tadepalli, Roni Khardon, and Saket Joshi (AAAI-12). https://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5012/5336 This simple algorithm is hard to beat for the following reason that connects some details of your egocentric and empathic settings: rather than decomposing a concurrent MDP into independent problems per concurrent action, the optimization of each action (by each advisor) is done in sequence (advisors are ordered) and gets to condition on the previously selected advisor actions. So it provides an alternate paradigm where advisors actually get to see and condition their policy on what other advisors are doing. In my own work comparing optimal concurrent solutions to this approach, I have found this approach to be near-optimal and much more efficient to solve since it exploits decomposition. Why is this relevant to this work? Because (a) it suggests another variant of the advisor decomposition that at least makes sense in the case of concurrent actions (and perhaps shared actions though this would require some extension) and (b) it suggests there are more options than just the full egocentric and empathic settings in this important class of concurrent action problems that are necessarily solved in practice for large action spaces by some form of decomposition. This could be an interesting direction for future exploration of the ideas in this work, where there might be additional technical novelty and more space for empirical contributions and observations.
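To make the side note concrete, here is a toy sketch (all interfaces hypothetical, not the paper's planners) of the sequential scheme: each advisor greedily picks its action while conditioning on the choices already made by earlier advisors, instead of planning in isolation.

    def sequential_advisor_actions(advisor_qs, actions, state):
        # advisor_qs: one callable per advisor, q(state, already_chosen, action) -> float
        chosen = []
        for q in advisor_qs:                 # advisors are visited in a fixed order
            best = max(actions, key=lambda a: q(state, tuple(chosen), a))
            chosen.append(best)
        return chosen

    # toy usage: the second advisor's preference depends on the first advisor's choice
    actions = ["left", "right"]
    q1 = lambda s, prev, a: 1.0 if a == "left" else 0.0
    q2 = lambda s, prev, a: 1.0 if prev and a != prev[0] else 0.5
    print(sequential_advisor_actions([q1, q2], actions, state=None))  # ['left', 'right']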
iclr_2018_r1kj4ACp-
Deep learning achieves remarkable generalization capability with an overwhelming number of model parameters. Theoretical understanding of deep learning generalization has received recent attention yet remains not fully explored. This paper attempts to provide an alternative understanding from the perspective of maximum entropy. We first derive two feature conditions under which softmax regression strictly applies the maximum entropy principle. A DNN is then regarded as approximating the feature conditions with multilayer feature learning, and is proved to be a recursive solution towards the maximum entropy principle. The connection between DNNs and maximum entropy explains why typical designs such as shortcuts and regularization improve model generalization, and provides guidance for future model development.
Summary: This paper presents a derivation which links a DNN to recursive application of maximum entropy model fitting. The mathematical notation is unclear, and in one case the lemmas are circular (i.e. two lemmas each assume the other is correct for their proof). Additionally the main theorem requires complete independence, but the second theorem provides pairwise independence, and the two are not the same. Major comments: - The second condition of the maximum entropy equivalence theorem requires that all T are conditionally independent given Y. This statement is unclear, as it could mean pairwise independence, or it could mean joint independence (i.e. for all pairs of non-overlapping subsets A & B of T, I(T_A;T_B|Y) = 0). This is the same as saying the mapping X->T is making each dimension of T orthogonal, as otherwise it would introduce correlations. The proof of the theorem assumes that pairwise independence induces joint independence and this is not correct. - Section 4.1 makes an analogy to EM, but gradient descent is not like this process as all the parameters are updated at once, and only optimised by a single (noisy) step. The optimisation with respect to a single layer is conditional on all the other layers remaining fixed, but the gradient information is stale (as it knows about the previous step of the parameters in the layer above). This means that gradient descent does all 1..L steps in parallel, and this is different to the definition given. - The proofs in Appendix C which are used for the statement I(T_i;T_j) >= I(T_i;T_j|Y) are incomplete, and in general this statement is not true, so it requires proof. - Lemma 1 appears to assume Lemma 2, and Lemma 2 appears to assume Lemma 1. Either these lemmas are circular or the derivations of both of them are unclear. - In Lemma 3 what is the minimum taken over for the left hand side? Elsewhere the minimum is taken over T, but T does not appear on the left hand side. Explicit minimums help the reader to follow the logic, and implicit ones should only be used when it is obvious what the minimum is over. - In Lemma 5, what does "T is only related to X" mean? The proof states that Y -> T -> X forms a Markov chain, but this implies that T is a function of Y, not X. Minor comments: - I assume that the E_{P(X,Y)} notation is the expectation with respect to that probability distribution, but this notation is uncommon, and should be replaced with a more explicit one. - Markov is usually romanized with a "k" not a "c". - The paper is missing numerous prepositions and articles, and contains multiple spelling mistakes & typos.
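For concreteness, the standard counterexample behind the pairwise-vs-joint point (not taken from the paper): let $T_1, T_2$ be independent fair coin flips and $T_3 = T_1 \oplus T_2$. Every pair $(T_i, T_j)$ is independent, since e.g. $P(T_1 = a, T_3 = b) = 1/4 = P(T_1 = a)P(T_3 = b)$ for all $a, b \in \{0, 1\}$, yet the three variables are not jointly independent, because $P(T_1 = T_2 = T_3 = 1) = 0 \neq 1/8 = P(T_1 = 1)P(T_2 = 1)P(T_3 = 1)$. Conditioning everything on a $Y$ that is independent of $(T_1, T_2)$ gives the same gap in the conditional setting, so conditions of the form $I(T_i; T_j | Y) = 0$ for all pairs cannot substitute for the joint independence the main theorem requires.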
iclr_2018_B1zlp1bRW
LARGE-SCALE OPTIMAL TRANSPORT AND MAPPING ESTIMATION This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.
This paper proposes a new method for estimating optimal transport plans and maps among continuous distributions, or discrete distributions with large support size. First, the paper proposes a dual algorithm to estimate Kantorovich plans, i.e. a coupling between two input distributions minimizing a given cost function, using dual functions parameterized as neural networks. Then an algorithm is given to convert a generic plan into a Monge map, a deterministic function from one domain to the other, following the barycenter of the plan. The algorithms are shown to be consistent, and demonstrated to be more efficient than an existing semi-dual algorithm. Initial applications to domain adaptation and generative modeling are also shown. These algorithms seem to be an improvement over the current state of the art for this problem setting, although more of a discussion of the relationship to the technique of Genevay et al. would be useful: how does your approach compare to the full-dual, continuous case of that paper if you simply replace their ball of RKHS functions with your class of deep networks? The consistency properties are nice, though they don't provide much insight into the rate at which epsilon should be decreased with n or similar properties. The proofs are clear, and seem correct on a superficial readthrough; I have not carefully verified them. The proofs are mainly limited in that they don't refer in any way to the class of approximating networks or the optimization algorithm, but rather only to the optimal solution. Although of course proving things about the actual outcomes of optimizing a deep network is extremely difficult, it would be helpful to have some kind of understanding of how the class of networks in use affects the solutions. In this way, your guarantees don't say much more than those of Arjovsky et al., who must assume that their "critic function" reaches the global optimum: essentially you add a regularization term, and show that as the regularization decreases it still works, but under seemingly the same kind of assumptions as Arjovsky et al.'s approach which does not add an explicit regularization term at all. Though it makes sense that your regularization might lead to a better estimator, you don't seem to have shown so either in theory or empirically. The performance comparison to the algorithm of Genevay et al. is somewhat limited: it is only on one particular problem, with three different hyperparameter settings. Also, since Genevay et al. propose using SAG for their algorithm, it seems strange to use plain SGD; how would the results compare if you used SAG (or SAGA/etc) for both algorithms? In discussing the domain adaptation results, you mention that the L2 regularization "works very well in practice," but don't highlight that although it slightly outperforms entropy regularization in two of the problems, it does substantially worse in the other. Do you have any guesses as to why this might be? For generative modeling: you do have guarantees that, *if* your optimization and function parameterization can reach the global optimum, you will obtain the best map relative to the cost function. But it seems that the extent of these guarantees are comparable to those of several other generative models, including WGANs, the Sinkhorn-based models of Genevay et al. (2017, https://arxiv.org/abs/1706.00292/), or e.g. with a different loss function the MMD-based models of Li, Swersky, and Zemel (ICML 2015) / Dziugaite, Roy, and Ghahramani (UAI 2015). 
This setting, which differs from the fundamental GAN-like setup of those models, is intriguing, but specifying a cost function between the source and target domains feels exceedingly unnatural compared to specifying a cost function just within one domain as in these other models. Minor: In (5), what is the purpose of the -1 term in R_e? It seems to just subtract a constant 1 from the regularization term.
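For readers who want the two-step pipeline in concrete form, here is a small NumPy sketch of the idea (a plain Sinkhorn solver on empirical samples stands in for the paper's stochastic dual algorithm; all names, sizes and the regularization value are my own choices):

    import numpy as np

    def sinkhorn_plan(x, y, eps=0.5, iters=500):
        # entropy-regularized OT plan between two uniform empirical measures under a
        # squared Euclidean cost; eps is kept moderate so this naive version stays stable
        n, m = len(x), len(y)
        C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        K = np.exp(-C / eps)
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        u, v = np.ones(n), np.ones(m)
        for _ in range(iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        return u[:, None] * K * v[None, :]

    def barycentric_map(plan, y):
        # barycentric projection: each source point maps to the plan-weighted mean of targets
        return (plan @ y) / plan.sum(axis=1, keepdims=True)

    x = np.random.randn(64, 2)
    y = np.random.randn(64, 2) + 2.0
    t_hat = barycentric_map(sinkhorn_plan(x, y), y)   # deterministic map estimate on the samples

In the paper, the second step additionally fits a neural network to this projection so that the estimated map generalizes outside the support of the input samples.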
iclr_2018_H1A5ztj3b
In this paper, we show a phenomenon, which we named "super-convergence", where residual networks can be trained using an order of magnitude fewer iterations than is used with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with cyclical learning rates and a large maximum learning rate. Furthermore, we present evidence that training with large learning rates improves performance by regularizing the network. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. The architectures to replicate this work will be made available upon publication.
In this paper, the authors analyze training of residual networks using large cyclic learning rates (CLR). The authors demonstrate (a) fast convergence with cyclic learning rates and (b) evidence of large learning rates acting as regularization which improves performance on test sets – this is called “super-convergence”. However, both these effects are only shown on a specific dataset, architecture, learning algorithm and hyperparameter setting. Some specific comments by sections: 2. Related Work: This section loosely mentions other related works on SGD, topology of loss function and adaptive learning rates. The authors mention Loshchilov & Hutter in the next section but do not compare that work to their own. The authors do not discuss a somewhat contradictory claim from NIPS 2017 (as pointed out in the public comment): http://papers.nips.cc/paper/6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf 3. Super-convergence: This is a well-explained section where the authors describe the LR range test and how it can be used to understand the potential for super-convergence for any architecture. The authors also provide sufficient intuition for super-convergence. Since CLRs were already proposed by Smith (2015), the originality of this work would be specifically tied to their application to residual units. It would be interesting to see a qualitative analysis on how the residual error is impacting super-convergence. 4. Regularization: While Fig 4 demonstrates the regularization property, the reference to Fig 1a with better test error compared to typical training methods could simply be a result of slower convergence of typical training methods. 5. Optimal LRs: Fig. 5b shows results for 1000 iterations whereas the text says 10000 (seems like a typo in scaling the plot). Figs 1 and 5 illustrate only one cycle (one increase and one decrease) of CLR. It would be interesting to see cases where more than one cycle is required and to see what happens when the LR increases the second time. 6. Experiments: This is a strong section where the authors show extensive reproducible experimentation to identify settings under which super-convergence works or does not work. However, the fact that the results only apply to the CIFAR-10 dataset and could not be observed for ImageNet or other architectures is disappointing and heavily takes away from the significance of this work. Overall, the work is presented as a positive result in very specific conditions but it seems more like a negative result. It would be more appealing if the paper were presented as a negative result and strengthened by additional experimentation and theoretical backing.
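For reference, the two ingredients discussed above are simple to state in code; this is a generic sketch with placeholder values, not the paper's exact settings.

    def lr_range_test_schedule(step, total_steps, lr_min=1e-4, lr_max=10.0):
        # linearly sweep the learning rate over one short run while logging accuracy,
        # as in the LR range test used to probe for super-convergence
        return lr_min + (lr_max - lr_min) * step / total_steps

    def one_cycle_lr(step, total_steps, lr_min=0.1, lr_max=3.0):
        # a single triangular CLR cycle: ramp up to a large maximum, then back down
        half = total_steps // 2
        if step < half:
            return lr_min + (lr_max - lr_min) * step / half
        return lr_max - (lr_max - lr_min) * (step - half) / (total_steps - half)

    # usage inside a training loop: for step in range(T): lr = one_cycle_lr(step, T); ...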
iclr_2018_ByJWeR1AW
Modern deep artificial neural networks have achieved impressive results through models with very large capacity (compared to the number of training examples) that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case of stochastic gradient descent or parameter sharing in convolutional layers, or explicit. Most common explicit regularization techniques, such as dropout and weight decay, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have been proven successful in terms of results, they seem to waste capacity. In contrast, data augmentation techniques reduce the generalization error by increasing the number of training examples and without reducing the effective capacity. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone (without any other explicit regularization techniques) can achieve the same performance as regularized models or higher, especially when training with fewer examples.
The paper proposes data augmentation as an alternative to commonly used regularisation techniques like weight decay and dropout, and shows for a few reference models / tasks that the same generalization performance can be achieved using only data augmentation. I think it's a great idea to investigate the effects of data augmentation more thoroughly. While it is a technique that is often used in literature, there hasn't really been any work that provides rigorous comparisons with alternative approaches and insights into its inner workings. Unfortunately I feel that this paper falls short of achieving this. Experiments are conducted on two fairly similar tasks (image classification on CIFAR-10 and CIFAR-100), with two different network architectures. This is a bit meager to be able to draw general conclusions about the properties of data augmentation. Given that this work tries to provide insight into an existing common practice, I think it is fair to expect a much stronger experimental section. In section 2.1.1 it is stated that this was a conscious choice because simplicity would lead to clearer conclusions, but I think the conclusions would be much more valuable if variety was the objective instead of simplicity, and if larger-scale tasks were also considered. Another concern is that the narrative of the paper pits augmentation against all other regularisation techniques, whereas more typically these will be used in conjunction. It is however very interesting that some of the results show that augmentation alone can sometimes be enough. I think extending the analysis to larger datasets such as ImageNet, as is suggested at the end of section 3, and probably also to different problems than image classification, is going to be essential to ensure that the conclusions drawn hold weight. Comments: - The distinction between "explicit" and "implicit" regularisation is never clearly enunciated. A bunch of examples are given for both, but I found it tricky to understand the difference from those. Initially I thought it reflected the intention behind the use of a given technique; i.e. weight decay is explicit because clearly regularisation is its primary purpose -- whereas batch normalisation is implicit because its regularisation properties are actually a side effect. However, the paper then goes on to treat data augmentation as distinct from other explicit regularisation techniques, so I guess this is not the intended meaning. Please clarify this, as the terms crop up quite often throughout the paper. I suspect that the distinction is somewhat arbitrary and not that meaningful. - In the abstract, it is already implied that data augmentation is superior to certain other regularisation techniques because it doesn't actually reduce the capacity of the model. But this ignores the fact that some of the model's excess capacity will be used to model out-of-distribution data (w.r.t. the original training distribution) instead. Data augmentation always modifies the distribution of the training data. I don't think it makes sense to imply that this is always preferable over reducing model capacity explicitly. This claim is referred to a few times throughout the work. - It could be more clearly stated that the reason for the regularising effect of batch normalisation is the noise in the batch estimates for mean and variance. 
- Some parts of the introduction could be removed because they are obvious, at least to an ICLR audience (like "the model would not be regularised if alpha (the regularisation parameter) equals 0"). - The experiments with smaller dataset sizes would be more interesting if smaller percentages were used. 50% / 80% / 100% are all on the same order of magnitude and this setting is not very realistic. In practice, when a dataset is "too small" to be able to train a network that solves a problem reliably, it will generally be one or more orders of magnitude too small, not 2x too small. - The choices of hyperparameters for "light" and "heavy" augmentation seem somewhat arbitrary and are not well motivated. Some parameters which are sampled uniformly at random should probably be sampled log-uniformly instead, because they represent scale factors. It should also be noted that much more extreme augmentation strategies have been used for this particular task in the literature, in combination with padding (for example by Graham). It would be interesting to include this setting in the experiments as well. - On page 7 it is stated that "when combined with explicit regularization, the results are much worse than without it", but these results are omitted from the table. This is unfortunate because it is a very interesting observation that runs counter to the common practice of combining all these regularisation techniques together (e.g. L2 + dropout + data augmentation is a common combination). Delving deeper into this could make the paper a lot stronger. - It is not entirely true that augmentation parameters depend only on the training data and not the architecture (last paragraph of section 2.4). Clearly more elaborate architectures benefit more from data augmentation, and might need heavier augmentation to perform optimally because they are more prone to overfitting (this is in fact stated earlier on in the paper as well). It is of course true that these hyperparameters tend to be much more robust to architecture changes than those of other regularisation techniques such as dropout and weight decay. This increased robustness is definitely useful and I think this is also adequately demonstrated in the experiments. - Phrases like "implicit regularization operates more effectively at capturing reality" are too vague to be meaningful. - Note that weight decay has also been found to have side effects related to optimization (e.g. in "Imagenet classification with deep convolutional neural networks", Krizhevsky et al.)
I think it would be fine if additional experiments on other datasets were not varied along all these other axes, to cut down on the amount of work this would involve. But not including them at all unfortunately makes the results much less impactful.
iclr_2018_B1NOXfWR-
In order to develop a scalable multi-task reinforcement learning (RL) agent that is able to execute many complex tasks, this paper introduces a new RL problem where the agent is required to execute a given task graph which describes a set of subtasks and dependencies among them. Unlike existing approaches which explicitly describe what the agent should do, our problem only describes properties of subtasks and relationships between them, which requires the agent to perform complex reasoning to find the optimal subtask to execute. To solve this problem, we propose a neural task graph solver (NTS) which encodes the task graph using a recursive neural network. To overcome the difficulty of training, we propose a novel non-parametric gradient-based policy that performs back-propagation over a differentiable form of the task graph to compute the influence of each subtask on the other subtasks. Our NTS is pre-trained to approximate the proposed gradient-based policy and fine-tuned through an actor-critic method. The experimental results on a 2D visual domain show that our method to pre-train from the gradient-based policy significantly improves the performance of NTS. We also demonstrate that our agent can perform complex reasoning to find the optimal way of executing the task graph and generalize well to unseen task graphs. In addition, we compare our agent with a Monte-Carlo Tree Search (MCTS) method showing that our method is much more efficient than MCTS, and the performance of our agent can be further improved by combining with MCTS. The demo video is available at https://youtu.be/e_ZXVS5VutM.
This paper proposes to train recursive neural networks on subtask graphs in order to execute a series of tasks in the right order, as is described by the subtask graph's dependencies. Each subtask execution is represented by a (non-learned) option. Reward shaping allows the proposed model to outperform simpler baselines, and experiments show the model generalizes to unseen graphs. While this paper is as far as I can tell novel in how it does what it does, the authors have failed to convey to me why this direction of research is relevant. - We know finding options is the hard part about options - We already have good algorithms that take subtask graphs and execute them in the right order from the planning literature An interesting avenue would be if the subtask graphs instead contained some level of uncertainty, or represented stochasticity, or anything that more traditional methods are unable to deal with efficiently; then I would see a justification for the use of neural networks. Alternatively, if the subtask graphs were learned instead of given, that would open the door to scaling and general learning. Yet, this is not discussed in the paper. Another interesting avenue would be to learn the options associated with each task, possibly using the information from the recursive neural networks to help learn these options. The proposed algorithm relies on fairly involved reward shaping, in that it is a very strong signal of supervision on what the next action should be. Additionally, it's not clear why learning seems to completely "fail" without the pre-trained policy. The justification given is that it is "to address the difficulty of training due to the complex nature of the problem" but this is not really satisfying as the problems are not that hard. This also makes me question the generality of the approach since the pre-trained policy is rather simple while still providing an apparently strong score. In your experiments, you do not compare with any state-of-the-art RL or hierarchical RL algorithm on your domain, and use a new domain which has no previous point of reference. It is thus hard to properly evaluate your method against other proposed methods. What the authors propose is a simple idea, everything is very clearly explained, the experiments are somewhat lacking but at least show an improvement over a more naive approach; however, due to its simplicity, I do not think that this paper is relevant for the ICLR conference. Comments: - It is weird to use both a discount factor \gamma *and* a per-step penalty. While not disallowed by theory, doing both is redundant because they enforce the same mechanism. - It seems weird that the smoothed logical AND/OR functions do not depend on the number of inputs; that is unless there are always 3 inputs (but it is not explained why; logical functions are usually formalised as functions of 2 inputs) as suggested by Fig 3. - It does not seem clear how the whole training is actually performed (beyond the pre-training policy). The part about the actor-critic learning seems to lack many elements (whole architecture training? why is the policy a sum of "p^{cost}" and "p^{reward}"? is there a replay memory? How are the samples gathered?). (On the positive side, the appendix provides some interesting details on the task generation to understand the experiments.) - The experiments cover different settings with different task difficulties. However, only one type of task is used.
It would be good to motivate (in addition to the paragraph in the intro) the cases where the algorithm described in the paper may be (or may not be) the only viable option, and/or compare it to other algorithms. Even though not mandatory, it would clearly be a good addition to demonstrate more convincing experiments in a different setting. - "The episode length (time budget) was randomly set for each episode in a range such that 60% − 80% of subtasks are executed on average for both training and testing." --> this does not seem very precise: under what policy is the 60-80% defined? Is the time budget different for each new generated environment? - Why wait until exactly 120 epochs for NTS-RProp before fine-tuning with actor-critic? It seems from Figure 4 that much less would be sufficient. - In the Table 1 caption, it is written "same graph structure with training set" --> do you mean "same graph structure as the training set"?
iclr_2018_BJ8vJebC-
Published as a conference paper at ICLR 2018 SYNTHETIC AND NATURAL NOISE BOTH BREAK NEURAL MACHINE TRANSLATION Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.
This paper investigates the impact of character-level noise on various flavours of neural machine translation. It tests 4 different NMT systems with varying degrees and types of character awareness, including a novel meanChar system that uses averaged unigram character embeddings as word representations on the source side. The authors test these systems under a variety of noise conditions, including synthetic scrambling and keyboard replacements, as well as natural (human-made) errors found in other corpora and transplanted to the training and/or testing bitext via replacement tables. They show that all NMT systems, whether BPE or character-based, degrade drastically in quality in the presence of both synthetic and natural noise, and that it is possible to train a system to be resistant to these types of noise by including them in the training data. Unfortunately, they are not able to show any types of synthetic noise helping address natural noise. However, they are able to show that a system trained on a mixture of error types is able to perform adequately on all types of noise. This is a thorough exploration of a mostly under-studied problem. The paper is well-written and easy to follow. The authors do a good job of positioning their study with respect to related work on black-box adversarial techniques, but overall, by working on the topic of noisy input data at all, they are guaranteed novelty. The inclusion of so many character-based systems is very nice, but it is the inclusion of natural sources of noise that really makes the paper work. Their transplanting of errors from other corpora is a good solution to the problem, and one likely to be built upon by others. In terms of negatives, it feels like this work is just starting to scratch the surface of noise in NMT. The proposed meanChar architecture doesn’t look like a particularly good approach to producing noise-resistant translation systems, and the alternative solution of training on data where noise has been introduced through replacement tables isn’t extremely satisfying. Furthermore, the use of these replacement tables means that even when the noise is natural, it’s still kind of artificial. Finally, this paper doesn’t seem to be a perfect fit for ICLR, as it is mostly experimental with few technical contributions that are likely to be impactful; it feels like it might be more at home and have greater impact in a *ACL conference. Regarding the artificialness of their natural noise - obviously the only solution here is to find genuinely noisy parallel data, but even granting that such a resource does not yet exist, what is described here feels unnaturally artificial. First of all, errors learned from the noisy data sources are constrained to exist within a word. This tilts the comparison in favour of architectures that retain word boundaries (such as the charCNN system here), while those systems may struggle with other sources of errors such as missing spaces between words. Second, if I understand correctly, once an error is learned from the noisy data, it is applied uniformly and consistently throughout the training and/or test data. This seems worse than estimating the frequency of the error and applying them stochastically (or trying to learn when an error is likely to occur). I feel like these issues should at least be mentioned in the paper, so it is clear to the reader that there is work left to be done in evaluating the system on truly natural noise. 
Also, it is somewhat jarring that only the charCNN approach is included in the experiments with noisy training data (Table 6). I realize that this is likely due to computational or time constraints, but it is worth providing some explanation in the text for why the experiments were conducted in this manner. On a related note, the line in the abstract stating that “... a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise” implies that the other (non-charCNN) architectures could not learn these representations, when in reality, they simply weren’t given the chance. Section 7.2 on the richness of natural noise is extremely interesting, but maybe less so to an ICLR audience. From my perspective, it would be interesting to see that section expanded, or used as the basis for future work on improved architectures or training strategies. I have only one small, specific suggestion: at the end of Section 3, consider deleting the last paragraph break, so there is one paragraph for each system (charCNN currently has two paragraphs). [edited for typos]
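To give a flavour of the synthetic noise types under discussion, here is a toy sketch in the same spirit (my own simplification, not the paper's exact noise operators nor its harvested replacement tables):

    import random

    def scramble_middle(word):
        # keep the first and last character and permute the rest ("Cmabrigde"-style noise)
        if len(word) <= 3:
            return word
        middle = list(word[1:-1])
        random.shuffle(middle)
        return word[0] + "".join(middle) + word[-1]

    def keyboard_typo(word, neighbors):
        # replace one character by a neighbouring key; `neighbors` is a tiny placeholder
        # table here, not a full keyboard layout
        i = random.randrange(len(word))
        return word[:i] + random.choice(neighbors.get(word[i], word[i])) + word[i + 1:]

    neighbors = {"a": "qsz", "e": "wrd", "o": "ipl"}
    noisy = " ".join(keyboard_typo(scramble_middle(w), neighbors)
                     for w in "language is robust to scrambled words".split())

Noise of this word-internal kind is exactly what favours architectures that retain word boundaries, which is why cross-word errors such as missing spaces deserve a mention.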
iclr_2018_HJNMYceCW
Published as a conference paper at ICLR 2018 RESIDUAL LOSS PREDICTION: REINFORCEMENT LEARNING WITH NO INCREMENTAL FEEDBACK We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode. We introduce a novel algorithm, RESIDUAL LOSS PREDICTION (RESLOPE), that solves such problems by automatically learning an internal representation of a denser reward function. RESLOPE operates as a reduction to contextual bandits, using its learned loss representation to solve the credit assignment problem, and a contextual bandit oracle to trade-off exploration and exploitation. RESLOPE enjoys a no-regret reduction-style theoretical guarantee and outperforms state of the art reinforcement learning algorithms in both MDP environments and bandit structured prediction settings.
After reading the other reviews and the authors' responses, I am satisfied that this paper is above the accept threshold. I think there are many areas of further discussion that the authors can flesh out (as mentioned below and in other reviews), but overall the contribution seems solid. I also appreciate the authors' efforts to run more experiments and flesh out the discussion in the revised version of the submission. Final concluding thoughts: -- Perhaps pi^ref was somehow better for the structured prediction problems than for the RL problems? -- Can one show a regret bound for multi-deviation if one doesn't have to learn x (i.e., we were given a good x a priori)? --------------------------------------------- ORIGINAL REVIEW First off, I think this paper is potentially above the accept threshold. The ideas presented are interesting and the results are potentially interesting as well. However, I have some reservations, a significant portion of which stem from not understanding aspects of the proposed approach and theoretical results, as outlined below. The algorithm design and theoretical results in the appendix could be made substantially more rigorous. Specifically: -- basic notations such as regret (in Theorem 1), the total reward (J), Q-value (Q), and value function (V) are not defined. While these concepts are fairly standard, it would be highly beneficial to define them formally. -- I'm not convinced that the "terms in the parentheses" (Eq. 7) are "exactly the contextual bandit cost". I would like to see a more rigorous derivation of the connection. For instance, one could imagine that the policy disadvantage should be the difference between the residual costs of the bandit algorithm and the reference policy, rather than just the residual cost of the bandit algorithm. -- I'm a little unclear in the proof of Theorem 1 where Q(s,pi_n) from Eq 7 fits into Eq 8. -- The residual cost used for CB.update depends on estimated costs at other time steps h!=h_dev. Presumably, these estimated costs will change as learning progresses. How does one reconcile that? I imagine that it could somehow work out using a bandit algorithm with adversarial guarantees, but I can also imagine it not working out. I would like to see a rigorous treatment of this issue. -- It would be nice to see an end-to-end result that instantiates Theorem 1 (and/or Theorem 2) with a contextual bandit algorithm to see a fully instantiated guarantee. With regards to the algorithm design itself, I have some confusions: -- How does one create x in practice? I believe this is described in Appendix H, but it's not obvious. -- What happens if we don't have a good way to generate x and it must be learned as well? I'd imagine one would need larger RNNs in that case. -- If x is actually learned on-the-fly, how does that impact the theoretical results? -- I find it curious that there's no notion of future reward learning in the learning algorithm. For instance, in Q-learning, one directly models the long-term (discounted) rewards during learning. In fact, the theoretical analysis talks about advantage functions as well. It would be nice to comment on this aspect at an intuitive level. With regards to the experiments: -- I find it very curious that the results are so negative for using only 1-dev compared to multi-dev (Figure 9 in Appendix). Given that much of the paper is devoted to 1-dev, it's a bit disappointing that this issue is not analyzed in more detail, and furthermore the results are mostly hidden in the appendix.
-- It's not clear if a reference policy was used in the experiments and what value of beta was used. -- Can the authors speculate about the difference in performance between the RL and bandit structured prediction settings? My personal conjecture is that the bandit structured prediction settings are more easily decomposable additively, which leads to a greater advantage of the proposed approach, but I would like to hear the authors' thoughts. Finally, the overall presentation of this paper could be substantially improved. In addition to the above uncertainties, some more points are described below. I don't view these points as "deal breakers" for determining accept/reject. -- This paper uses too many examples, from part-of-speech tagging to credit assignment in determining paths. I recommend sticking to one running example, which substantially reduces context switching for the reader. In every such example, there are extraneous details that are not relevant to making the point, and the reader needs to spend considerable effort figuring that out for each example used. -- Inconsistent language. For instance, x is sometimes referred to as the "input example", "context" and "features". -- At the end of page 4, "Internally, ReslopePolicy takes a standard learning to search step." Two issues: 1) ReslopePolicy is not defined or referred to anywhere else. 2) Is the remainder of that paragraph a description of a "standard learning to search step"? -- As mentioned before, Regret is not defined in Theorems 1 & 2. -- The discussion of the high-level algorithmic concepts is a bit diffuse or lacking. For instance, one key idea in the algorithmic development is that it's sufficient to make a uniformly random deviation. Is this idea from the learning to search literature? If so, it would be nice to highlight this in Section 2.2.
iclr_2018_HkmaTz-0W
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture effects the loss landscape, and how training parameters affect the shape of minimizers.
* In the "flat vs sharp" dilemma, the experiments display that the dilemma, if any, is subtle. Table 1 does not necessarily contradict this view. It would be a good idea to put the test results directly on Fig. 4 as it does not ease reading currently (and postpone ResNet-56 in the appendix). How was Figure 5 computed ? It is said that *a* random direction was used from each minimiser to plot the loss, so how the 2D directions obtained ? * On the convexity vs non-convexity (Sec. 6), it is interesting to see how pushing the Id through the net changes the look of the loss for deep nets. The difference VGG - ResNets is also interesting, but it would have been interesting to see how this affects the current state of the art in understanding deep learning, something that was done for the "flat vs sharp" dilemma, but is lacking here. For example, does this observation that the local curvature of the loss around minima is different for ResNets and VGG allows to interpret the difference in their performances ? * On optimisation paths, the choice of PCA directions is wise compared to random projections, and results are nice as plotted. There is however a phenomenon I would have liked to be discussed, the fact that the leading eigenvector captures so much variability, which perhaps signals that optimisation happens in a very low dimensional subspace for the experiments carried, and could be useful for optimisation algorithms (you trade dimension d for a much smaller "effective" d', you only have to figure out a generating system for this subspace and carry out optimisation inside). Can this be related to the "flat vs sharp" dilemma ? I would suppose that flatness tends to increase the variability captured by leading eigenvectors ? Typoes: Legend of Figure 2: red lines are error -> red lines are accuracy Table 1: test accuracy -> test error Before 6.2: architecture effects -> architecture affects
iclr_2018_Sy_MK3lAZ
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous. Motivated by the project of designing a game AI for King of Glory (KOG), one of the world's most popular mobile games, we consider the scenario with a discrete-continuous hybrid action space. To directly apply existing DRL frameworks, existing approaches either approximate the hybrid space by a discrete set or relax it into a continuous set, which is usually less efficient and less robust. In this paper, we propose a parametrized deep Q-network (P-DQN) framework for the hybrid action space without approximation or relaxation. Our algorithm combines DQN and DDPG and can be viewed as an extension of DQN to hybrid actions. The empirical study on the game KOG validates the efficiency and effectiveness of our method.
This paper presents a new reinforcement learning approach to handle environments with a mix of discrete and continuous action spaces. The authors propose a parameterized deep Q-network (P-DQN) and leverage learning schemes from existing algorithms such as DQN and DDPG to train the network. The proposed loss function and alternating optimization of the parameters are pretty intuitive and easy to follow. My main concern is with the lack of sufficient depth in the empirical evaluation and analysis of the method. Pros: 1. The setup is an interesting and practically useful one to investigate. Many real-world environments require individual actions that are further parameterized over a continuous space. 2. The proposed method is simple and intuitive. Cons: 1. The evaluation is performed only on a single environment in a restricted fashion. I understand the authors are restricted in the choice of environments which require a hybrid action space. However, even domains like Atari could be used in a setting where the continuous parameter x_k refers to the number of repetitions for action k. This is similar to the work of Lakshminarayanan et al. (2017). Could you test your algorithm in such a setting? 2. Comparison of the algorithm is performed only against DDPG. Have you tried other options like PPO (Schulman et al., 2017)? Also, considering that the action space is simplified in the experimental setup ("we use the default parameters of skills provided by the game environment, usually pointing to the opponent hero's location"), with only the move(\alpha) action being a hybrid, one could imagine discretizing the move direction \alpha and training a DQN (or any other algorithm over discrete action spaces) as another baseline. 3. The reward structure seems to be highly engineered. With so many components in the reward, it is not clear what the individual contributions are and what policies are actually learned. 4. The authors don't provide any analysis of the empirical results. Do P-DQN and DDPG converge to the same policy? What factor(s) contribute most to the faster learning of P-DQN? Do the values of \alpha and \beta for the two-timescale updates affect the results considerably? 5. (minor) The writing contains a lot of grammatical errors, which make this draft below par for an ICLR paper. Other Questions: 1. In eq. 5.3, the loss over \theta is defined as the sum of Q values over different k. Did you try other formulations of the loss? (say, the product of the Q values for instance) One potential issue with the sum could be that if some values of k dominate this sum, Q(s, k, x_k; w) might not be maximized for all k. 2. Some terms of the reward function seem to be overly dependent on historic actions (e.g. difference in gold and hit points). This could swamp the influence of the other terms which are more dependent on the current action a_t, which might be an issue, especially under the Markovian assumption. References: Lakshminarayanan et al, 2017; Dynamic Action Repetition for Deep Reinforcement Learning; AAAI Schulman et al., 2017; Proximal Policy Optimization Algorithms; Arxiv
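To anchor question 1 above, here is a minimal PyTorch sketch of how I read the hybrid-action head and the sum-over-k actor objective (layer sizes, names, and the single-linear heads are placeholders; the actual P-DQN uses deeper networks and alternating two-timescale updates, with the Q-weights trained by a TD target and held fixed when the actor loss is back-propagated into the parameter heads):

    import torch
    import torch.nn as nn

    class PDQNHeads(nn.Module):
        def __init__(self, state_dim, param_dims):
            super().__init__()
            # one continuous-parameter head x_k = mu_k(s; theta) per discrete action k
            self.param_nets = nn.ModuleList([nn.Linear(state_dim, d) for d in param_dims])
            # one scalar Q(s, k, x_k; w) per discrete action k
            self.q_nets = nn.ModuleList([nn.Linear(state_dim + d, 1) for d in param_dims])

        def forward(self, s):
            xs = [net(s) for net in self.param_nets]
            qs = [q(torch.cat([s, x], dim=-1)) for q, x in zip(self.q_nets, xs)]
            return torch.cat(qs, dim=-1), xs      # Q values of shape (batch, K), list of x_k

    def actor_loss(q_values):
        # the sum-over-k objective questioned above: every x_k is pushed to raise its own Q
        return -q_values.sum(dim=-1).mean()

    model = PDQNHeads(state_dim=8, param_dims=[2, 1, 3])
    q_values, xs = model(torch.randn(4, 8))
    greedy_k = q_values.argmax(dim=-1)            # discrete action choice per sample
    loss = actor_loss(q_values)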
iclr_2018_rkO3uTkAZ
Published as a conference paper at ICLR 2018 MEMORIZATION PRECEDES GENERATION: LEARNING UNSUPERVISED GANS WITH MEMORY NETWORKS We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past generated samples by generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GAN training can be largely alleviated by a learnable memory network to which both generators and discriminators can access. Generators can effectively learn representation of training samples to understand underlying cluster distributions of data, which ease the structure discontinuity problem. At the same time, discriminators can better memorize clusters of previously generated samples, which mitigate the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which involves a memory network that is unsupervisedly trainable and integrable to many existing GAN models. With evaluations on multiple datasets such as Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable, and generates realistic image samples of high visual fidelity. The memoryGAN also achieves the state-of-the-art inception scores over unsupervised GAN models on the CIFAR10 dataset, without any optimization tricks and weaker divergences.
[Overview] In this paper, the authors propose a novel model called MemoryGAN, which integrates a memory network with a GAN. As claimed by the authors, MemoryGAN is aimed at addressing two problems of GAN training: 1) the difficulty of modeling the structural discontinuity between disparate classes in the latent space; 2) the catastrophic forgetting problem, whereby the discriminator forgets the past samples synthesized by the generator during training. It exploits the life-long memory network and adapts it to GANs. It consists of two parts, a Discriminative Memory Network (DMN) and a Memory Conditional Generative Network (MCGN). The DMN is used for discriminating input samples by integrating the memory learnt in the memory network, and the MCGN is used for generating images based on a random vector and memory sampled from the memory network. In the experiments, the authors evaluate memoryGAN on three datasets, CIFAR-10, affine-MNIST and Fashion-MNIST, and demonstrate its superiority over previous models. Through an ablation study, the authors further show the effects of the separate components of memoryGAN. [Strengths] 1. This paper is well-written. All modules in the proposed model and the experiments are explained clearly. I much enjoyed reading the paper. 2. The paper presents a novel method called MemoryGAN for GAN training. To address the two infamous problems mentioned in the paper, the authors propose to integrate a memory network into the GAN. Through the memory network, MemoryGAN can explicitly learn the data distribution of real images and fake images. I think this is a very promising and meaningful extension to the original GAN. 3. With MemoryGAN, the authors achieve the best Inception Score on CIFAR-10. Through an ablation study, the authors demonstrate that each part of the model helps to improve the final performance. [Comments] My comments are mainly about the experimental part: 1. In Table 2, the authors show the Inception Score of images generated by DCGAN in the last row. On CIFAR-10, it is ~5.35. As the authors mention, removing EM, MCGN and Memory results in a conventional DCGAN. However, as far as I know, DCGAN can achieve an Inception Score > 6.5 in general. I am wondering what accounts for such a big difference between the numbers reported in this paper and in other papers. 2. In the experiments, the authors set N = 16,384 and M = 512, and z has dimension 16. I do not understand why the memory size is so large. Take CIFAR-10 as an example: its training set contains 50k images, so with such a large memory size each memory slot accounts for only a few samples. Is a large memory size necessary to make MemoryGAN work? If not, the authors should also show an ablation study on the effect of different memory sizes; if it is, please explain why. Also, the authors should mention the training time compared with DCGAN. Updating a memory of such a large size seems very time-consuming. 3. Still on the memory size in this model: I am curious about the results if the size is decreased to the same or a comparable number as the image categories in the training set. As the authors claim, if the memory network could learn to cluster the training data into different categories, we should be able to see some interesting results by sampling the keys and generating category-specific images. 4. The paper should be compared with InfoGAN (Chen et al. 2016), and the authors should explain the differences between the two models in the related work. Similar to MemoryGAN, InfoGAN also does not need any data annotations, but can learn the latent code flexibly.
[Summary] This paper proposes a new model called MemoryGAN for image generation. It combines a memory network with a GAN and achieves state-of-the-art performance on CIFAR-10. The arguments that MemoryGAN can solve the two infamous problems make sense. As I mentioned above, I do not understand why the authors use such a large memory size; more explanation and experiments are needed to justify this setting. Overall, I think MemoryGAN opens a new direction for GANs and is worth exploring further.
iclr_2018_ByQpn1ZA-
MANY PATHS TO EQUILIBRIUM: GANS DO NOT NEED TO DECREASE A DIVERGENCE AT EVERY STEP Generative adversarial networks (GANs) are a family of generative models that do not minimize a single training criterion. Unlike other generative models, the data distribution is learned via a game between a generator (the generative model) and a discriminator (a teacher providing training signal) that each minimize their own cost. GANs are designed to reach a Nash equilibrium at which each player cannot reduce their cost without changing the other players' parameters. One useful approach for the theory of GANs is to show that a divergence between the training distribution and the model distribution obtains its minimum value at equilibrium. Several recent research directions have been motivated by the idea that this divergence is the primary guide for the learning process and that every step of learning should decrease the divergence. We show that this view is overly restrictive. During GAN training, the discriminator provides learning signal in situations where the gradients of the divergences between distributions would not be useful. We provide empirical counterexamples to the view of GAN training as divergence minimization. Specifically, we demonstrate that GANs are able to learn distributions in situations where the divergence minimization point of view predicts they would fail. We also show that gradient penalties motivated from the divergence minimization perspective are equally helpful when applied in other contexts in which the divergence minimization perspective does not predict they would be helpful. This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
This paper answers critiques of the ``standard GAN'' that were recently formulated to motivate variants based on other losses, in particular using ideas from optimal transport. It makes three main points: 1) ``standard GAN'' is an ill-defined term that may refer to two different learning criteria, with different properties; 2) though the non-saturating variant (see Eq. 3) of the ``standard GAN'' may converge towards a minimum of the Jensen-Shannon divergence, it does not mean that the minimization process follows gradients of the Jensen-Shannon divergence (and conversely, following gradient paths of the Jensen-Shannon divergence may not converge towards a minimum, but this was rather the point of the previous critiques about the ``standard GAN''); 3) the penalization strategies introduced for the ``non-standard GAN'' with specific motivations may also apply successfully to the ``standard GAN'', improving robustness and thereby helping to set hyperparameters. Note that item 2) is relevant in many other setups in the deep learning framework and is often overlooked. Overall, I believe that the paper provides enough material to substantiate these claims, even if the message could be better delivered. In particular, the writing is sometimes ambiguous (e.g. in Section 2.3, the reader who did not follow the recent developments on the subject on arXiv will have difficulty reconstructing the cross-references between authors, acronyms and formulae). The answers to the critiques referenced in the paper are convincing, though I must admit that I don't know how crucial it is to answer these critiques, since it is difficult to assess whether they reached or will reach a large audience. Details: - p. 4: please do not qualify KL as a distance metric - Section 4.3: "Every GAN variant was trained for 200000 iterations, and 5 discriminator updates were done for each generator update" is ambiguous: what exactly is meant by "iteration" (and sometimes "step" elsewhere)? - Section 4.3: the performance measure is not relevant regarding distributions. The l2 distance is somewhat OK for means, but it makes little sense for covariance matrices.
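To spell out point 1), the two generator criteria usually lumped together as the ``standard GAN'' are the following (written in my notation, not quoted from the paper):

% Minimax ("saturating") generator loss, the one tied to the Jensen-Shannon analysis:
J_{mm}(G) \;=\; \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]
% Non-saturating generator loss (the paper's Eq. 3), the one typically used in practice;
% it shares the same fixed points but follows different gradients, which is the crux of point 2):
J_{ns}(G) \;=\; -\,\mathbb{E}_{z \sim p(z)}\big[\log D(G(z))\big]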
iclr_2018_rkc_hGb0Z
We present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world, where it is hazardous to employ RL policies. This framework for training deep RL policies involves a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require fewer samples for learning the dynamics and perform better than the GPS algorithm. Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to the "worst" possible adversarial disturbance.
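For readers unfamiliar with this style of robust control, the zero-sum game described above is, schematically, of the following form (my notation, with an assumed disturbance-attenuation weight gamma; the paper's exact cost, adversary constraint, and saddle-point condition may differ):

% Protagonist policy \pi picks controls u_t, adversary \nu injects additive action
% disturbances v_t; the pair is trained towards a saddle point of the game cost:
\min_{\pi}\;\max_{\nu}\;\; \mathbb{E}\Big[\sum_{t=0}^{T} \ell(x_t, u_t) \;-\; \gamma^2 \lVert v_t \rVert^2 \Big]
\quad \text{s.t.}\quad x_{t+1} = f(x_t,\, u_t + v_t),\;\; u_t \sim \pi(\cdot \mid x_t),\;\; v_t \sim \nu(\cdot \mid x_t)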
There are two anonymity violations in the paper. The first is in the sentence "The 7-DoF robot result presented shortly previously appeared in our abstract that introduced robust GPS [21]". The second is in the first linked video, which links to a non-anonymized YouTube video. The second linked video, a Dropbox link, does not have the correct permissions set, and thus cannot be viewed. Also, the citation style does not seem to follow the ICLR style guidelines. Disregarding the anonymity and style violations, I will review the paper. I do not have a background in H_inf control theory, but I will review the paper to the best of my ability. This paper proposes a guided policy search method for training deep neural network policies that are robust to worst-case additive disturbances. To my knowledge, the approach presented in the paper is novel, though some relevant references are missing. The experiments demonstrate the method on two simulated domains, showing robustness to adversarial perturbations. The paper is generally well-written, but has some bugs and typos. The paper is substantially longer than the strongly suggested page limit. There are parts of the paper that should be moved to an appendix to accommodate the page limit. Regarding the experiments: My main concerns are about the completeness of the experiments. First, the experimental results report performance in terms of cost/reward, which is extremely difficult to interpret. It would be helpful to also provide success rates for all experiments, where the authors can define success as, e.g. getting the peg in the hole or being within a certain threshold of the goal. Second, the paper should provide a comparison of policy robustness between the proposed approach and (1) a policy trained with standard GPS, (2) a policy trained with GPS and random perturbations, and ideally, (3) prior approaches to robustness, e.g. Pinto et al., Mandlekar et al. [1], or Rajeswaran et al. [2]. Regarding related work and clarity: There are a few papers that consider the problem of building deep neural network policies that are robust [1,2] that should be discussed and ideally compared to. Recent deep reinforcement learning work has studied the problem of robustness to adversarial perturbations in the observation space, e.g. [3,4,5,6]. As such, it would be helpful to clarify in the introduction that this paper is considering additive perturbations in the action space. The paper switches between using rewards and costs. It would be helpful to pick one term and stick with it for the entire paper, rather than switching. Further, it seems like there are errors due to the switching, e.g. on page 3, \ell is defined as the expected reward, yet in equation 3 it seems like the protagonist policy is trying to minimize \ell, contradicting the earlier definition. Lastly, section 5.1 is currently rather difficult to follow. It would help to see more top-down direction in the derivation and for more details in sections 5.1, 5.2, and 5.3 to be moved to an appendix. Regarding correctness: There seem to be some issues in the math and/or notation: The most serious issue is in Algorithm 2, which is probably the most important part of the paper to be correct, given that it provides a complete picture of the algorithm. I believe that steps 3 and 4 are incorrect and/or incomplete. Step 4 should be referring to the local policy p rather than the global policy pi (assuming the notation in sections 5.1 and 5.3).
Also, the local policy p(v|x) appears nowhere in the algorithm, and probably should appear in step 4. In step 5, how is this regression different from step 7? In equation 1 and elsewhere in section 4, there is a mix of notation, using u and z interchangeably (e.g. in equation 1 and the following equation, I believe the u should be switched to z or the z should be switched to u). Minor feedback: > "requires fewer samples to generalize to the real world" None of these experiments are in the real world, so the term "real world" should not be used. > "algoruithm" -> "algorithm" > "when unexpected such as when there exists" - bad grammar / typo > "ertwhile sensy-sensitivity parameter" - typo - reference 1 is missing authors In summary, I think the paper would be significantly improved with further experimental comparisons, further discussion of related work, and clarifications/corrections on the notation, equations, and algorithms. In its current form, I don't think the paper is fit for publication. My rating is between a 4 and a 5. [1] http://vision.stanford.edu/pdf/mandlekar2017iros.pdf [2] https://arxiv.org/abs/1610.01283 [3] https://arxiv.org/abs/1701.04143 [4] https://arxiv.org/abs/1702.02284 [5] https://www.ijcai.org/proceedings/2017/0525.pdf [6] https://arxiv.org/abs/1705.06452
iclr_2018_BkrsAzWAb
ONLINE LEARNING RATE ADAPTATION WITH HYPERGRADIENT DESCENT We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
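For concreteness, the hypergradient update described above amounts to something like the following minimal sketch for plain SGD (a toy illustration in my own notation, not the authors' code; the Nesterov and Adam variants add the corresponding bookkeeping):

import numpy as np

# Hypergradient SGD: the learning rate alpha is itself adapted by a gradient step,
# using the dot product of the current gradient and the previous gradient.
def hypergradient_sgd(grad_fn, theta, alpha=0.001, beta=1e-4, steps=1000):
    prev_grad = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        # d(objective)/d(alpha) of the previous update is -g . prev_grad, so a
        # descent step on alpha adds +beta * (g . prev_grad).
        alpha = alpha + beta * np.dot(g, prev_grad)
        theta = theta - alpha * g
        prev_grad = g
    return theta, alpha

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
theta, alpha = hypergradient_sgd(lambda x: x, theta=np.ones(5))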
SUMMARY: The authors reinvent a 20-year-old technique for adapting a global or component-wise learning rate for gradient descent. The technique can be derived as a gradient step for the learning rate hyperparameter, or it can be understood as a simple and efficient adaptation technique.
GENERAL IMPRESSION: One central problem of the paper is its lack of novelty. The authors are well aware of this. They still manage to provide added value. Despite its limited novelty, this is a very interesting and potentially impactful paper. I like in particular the detailed discussion of related work, which includes some frequently overlooked precursors of modern methods.
CRITICISM: The experimental evaluation is rather solid, but not perfect. It considers three different problems: logistic regression (a convex problem), and dense as well as convolutional networks. That's a solid spectrum. However, it is not clear why the method is tested only on a single data set: MNIST. Since it is entirely general, I would rather expect a test on a dozen different data sets. That would also tell us more about a possible sensitivity w.r.t. the hyperparameters \alpha_0 and \beta. The extensions in section 5 don't seem to be very useful. In particular, I cannot get rid of the impression that section 5.1 exists for the sole purpose of introducing a convergence theorem. Analyzing the actual adaptive algorithm would be very interesting. In contrast, the present result is trivial and of no interest at all, since it requires knowing a good parameter setting, which defeats a large part of the value of the method.
MINOR POINTS: page 4, bottom: use \citep for Duchi et al. (2011). None of the figures is legible on a grayscale printout of the paper. Please do not use color as the only cue to identify a curve. In figure 2, top row, please display the learning rate on a log scale. page 8, line 7 in section 4.3: "the the" (unintended repetition). End of section 4: an increase from 0.001 to 0.001002 is hardly worth reporting - or am I missing something?
iclr_2018_rkHVZWZAZ
THE REACTOR: A FAST AND SAMPLE-EFFICIENT ACTOR-CRITIC AGENT FOR REINFORCEMENT LEARNING In this work, we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2017) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms, designed for expected value evaluation, into distributional algorithms. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
This paper proposes a novel reinforcement learning algorithm (« The Reactor ») based on the combination of several improvements to DQN: a distributional version of Retrace, a policy gradient update rule called beta-LOO aiming at variance reduction, a variant of prioritized experience replay for sequences, and a parallel training architecture. Experiments on Atari games show a significant improvement over prioritized dueling networks in particular, and competitive performance compared to Rainbow, at a fraction of the training time. There are definitely several interesting and meaningful contributions in this submission, and I like the motivations behind them. They are not groundbreaking (essentially extending existing techniques) but are still very relevant to current RL research. Unfortunately I also see it as a step back in terms of comparison to other algorithms. The recent Rainbow paper finally established a long overdue clear benchmark on Atari. We have seen with the « Deep Reinforcement Learning that Matters » paper how important (and difficult) it is to properly compare algorithms on deep RL problems. I assume that this submission was mostly written before Rainbow came out, and that comparisons to Rainbow were hastily added just before the ICLR deadline: this would explain why they are quite limited, but in my opinion it remains a major issue, which is the main reason why I am advocating for rejection. More precisely, focusing on the comparison to Rainbow which is the main competitor here, my concerns are the following: - There is almost no discussion on the differences between Reactor and Rainbow (actually the paper lacks a « related work » section). In particular Rainbow also uses a version of distributional multi-step, which as far as I can tell may not be as well motivated (from a mathematical point of view) as the one in this submission (since it does not correct for the « off-policyness » of the replay data), but still seems to work well on Atari. - Rainbow is not distributed. This was a deliberate choice by its authors to focus on algorithmic comparisons. However, it seems to me that it could benefit from a parallel training scheme like Reactor’s. I believe a comparison between Reactor and Rainbow needs to either have them both parallelized or none of them (especially for a comparison on time efficiency like in Fig. 2) - Rainbow uses the traditional feedforward DQN architecture while Reactor uses a recurrent network. It is not clear to which extent this has an impact on the results. - Rainbow was stopped at 200M steps, at which point it seems to be overall superior to Reactor at 200M steps. The results as presented here emphasize the superiority of Reactor at 500M steps, but a proper comparison would require Rainbow results at 500M steps as well. In addition, although I found most of the paper to be clear enough, some parts were confusing to me, in particular: - « multi-step distributional Bellman operator » in 3.2: not clear exactly what the target distribution is. If I understand correctly this is the same as the Rainbow extension, but this link is not mentioned. - 3.4.1 (network architecture): a simple diagram in the appendix would make it much easier to understand (Table 3 is still hard to read because it is not clear which layers are connected together) - 3.3 (prioritized sequence replay): again a visual illustration of the partitioning scheme would in my opinion help clarify the approach A few minor points to conclude: - In eq. 
6, 7 and the rest of this section, A does not depend (directly) on theta so it should probably be removed to avoid confusion. Note also that using the letter A may not be best since A is used to denote an action in 3.1. - In 3.1: « Let us assume that for the chosen action A we have access to an estimate R(A) of Qπ(A) » => « unbiased estimate » - In the last equation of p.5 it is not clear what q_i^n is - There is a lambda missing on p.6 in the equation showing that alphas are non-negative on average, just before the min - In the equation above eq. 12 there is a sum over « i=1 » - That same equation ends with some h_z_i that are not defined - In Fig. 2 (left) for Reactor we see one worker using large batches and another one using many threads. This is confusing. - 3.3 mentions sequences of length 32 but 3.4 says length 33. - 3.3 says tree operations are in O(n ln(n)) but it should be O(ln(n)) - At the very end of 3.3 it is not clear what « total variation » is. - In 3.4 please specify the frequency at which the learner thread downloads shared parameters and uploads updates - Caption of Fig. 3 talks about « changing the number of workers » for the left plot while it is in the right plot - The explanation on what the variants of Reactor (ND and 500M) mean comes after results are shown in Fig. 2. - Section 4 starts with Fig. 3 without explaining what the task is, how performance is measured, etc. It also claims that Distributional Retrace helps while this is not the case in Fig. 3 (I realize it is explained afterwards, but it is confusing when reading the sentence « We can also see... »). Finally, it says prioritization is the most important component while the beta-LOO ablation seems to perform just the same. - Footnote 3 should say it is 200M observations except for Reactor 500M - End of 4.1: « The algorithms that we compare Reactor against are » => missing ACER, A3C and Rainbow - There are two references for « Sample efficient actor-critic with experience replay » - I do not see the added benefit of the Elo computation. It seems to convey essentially the same information as average rank. And a few typos: - Just above 2.1.3: « increasing » => increasingly - In 3.1: « where V is a baseline that depend » => depends - p.7: « hight » => high, and « to all other sequences » => of all other sequences - Double parentheses in Bellemare citation at beginning of section 4 - Several typos in appendix (too many to list) Note: I did not have time to carefully read Appendix 6.3 (contextual priority tree) Edit after revision: bumped score from 5 to 7 because (1) the authors made many improvements to the paper, and (2) their explanations shed light on some of my concerns
iclr_2018_rkN2Il-RZ
SCAN: LEARNING HIERARCHICAL COMPOSITIONAL VISUAL CONCEPTS The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.
Summary --- This paper proposes a new model called SCAN (Symbol-Concept Association Network) for hierarchical concept learning. It trains one VAE on images then another one on symbols and aligns their latent spaces. This allows for symbol2image and image2symbol inference. But it also allows for generalization to new concepts composed from existing concepts using logical operators. Experiments show that SCAN generates images which correspond to provided concept labels and span the space of concepts which match these labels. The model starts with a beta-VAE trained on images (x) from the relevant domain (in this case, simple scenes generated from DeepMind Lab which vary across a few known dimensions). This is complemented by the SCAN model, which is a beta-VAE trained to reconstruct symbols (y; k-hot encoded concepts like {red, suitcase}) with a slightly modified objective. SCAN optimizes the ELBO plus a KL term which pushes the latent distribution of the y VAE toward the latent distribution of the x (image) VAE. This aligns the latent representations so now a symbol can be encoded into a latent distribution z and decoded as an image. One nice property of the learned latent representation is that more specific concepts have more specific latent representations. Consider latent distributions z1 and z2 for a more general symbol {red} and a more specific symbol {red, suitcase}. Fewer dimensions of z2 have high variance than dimensions of z1. For example, the latent space could encode red and suitcase in two dimensions (as binary attributes). z1 would have high variance on all dimensions but the one which encodes red and z2 would have high variance on all dimensions but red and suitcase. In the reported experiments some of the dimensions do seem to be interpretable attributes (figure 5 right). SCAN also pays particular attention to hierarchical concepts. Another very simple model (1d convolution layer) is learned to mimic logical operators. Normally a SCAN encoder takes {red} as input and the decoder reconstructs {red}. Now another model is trained that takes "{red} AND {suitcase}" as input and reconstructs {red, suitcase}. The two input concepts {red} and {suitcase} are each encoded by a pre-trained SCAN encoder and then those two distributions are combined into one by a simple 1d convolution module trained to implement the AND operator (or IGNORE/IN COMMON). This allows images of concepts like {small, red, suitcase} to be generated even if small red suitcases are not in the training data. Experiments provide some basic verification and analysis of the method: 1) Qualitatively, concept samples are correct and diverse, generating images with all configurations of attributes not specified by the input concept. 2) As SCAN sees more diverse examples of a concept (e.g. suitcases of all colors instead of just red ones) it starts to generate more diverse image samples of that concept. 3) SCAN samples/representations are more accurate (generate images of the right concept) and more diverse (far from a uniform prior in a KL sense) than JMVAE and TELBO baselines. 4) SCAN is also compared to SCAN_U, which uses an image beta-VAE that learned an entangled (Unstructured) representation. SCAN_U performed worse than SCAN and baselines. 5) Concepts expressed as logical combinations of other concepts generalize well for both the SCAN representation and the baseline representations. Strengths --- The idea of concept learning considered here is novel and satisfying. 
It imposes logical, hierarchical structure on latent representations in a general way. This suggests opportunities for inserting prior information and adds interpretability to the latent space. I like the ablation study. Weaknesses --- I think this paper is missing some important evaluation. Role/Nature of Disentangled Features not Clear (major): * Disentangled features seem to be very important for SCAN to work well (SCAN vs SCAN_U). It seems that the only difference between the unstructured (entangled) and the structured (disentangled) visual VAE is the color space of the input (RGB vs HSV). If so, this should be stated more clearly in the main paper. What role did beta-VAE (tuning beta) as opposed to plain VAE play in learning disentangled features? * What color space was used for the JMVAE and TELBO baselines? Training these with HSV seems especially important for establishing a good comparison, but it would be good to report results for HSV and RGB for all models. * How specific is the HSV trick to this domain? Would it matter for natural images? * How would a latent representation learned via supervision perform? (Maybe explicitly align dimensions of z to red/suitcase/small with supervision through some mechanism, cf. "Discovering Hidden Factors of Variation in Deep Networks" by Cheung et al.) Evaluation of sample complexity (major): * One of the main benefits of SCAN is that it works with less training data. There should be a more systematic evaluation of this claim. In particular, I would like to see a Number of Examples vs Performance (Accuracy/Diversity) plot for both SCAN and the baselines. Minor questions/comments/concerns: * What do the logical operators learn that the hand-specified versions do not? * Does training SCAN with the structure provided by the logical operators lead to improved performance? * There seems to be a mistake in figure 5 unless I interpreted it incorrectly. The right side doesn't match the left side. During the middle stage of training, object hues vary on the left, but floor color becomes less specific on the right. Shouldn't object color become less specific? Preliminary Evaluation --- This clear and well-written paper describes an interesting and novel way of learning a model of hierarchical concepts. It's missing some evaluation that would help establish the sample complexity benefit more precisely (a claimed contribution) and add important details about unsupervised disentangled representations. I would be happy to increase my rating if these are addressed.
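For clarity, the modified symbol-VAE objective summarized in the review above can be written roughly as follows (my notation; the weight lambda and the direction of the alignment KL are how I read the description and may not match the paper exactly):

% beta-VAE ELBO on symbols y, plus a term pulling the symbol posterior q(z_y|y)
% towards the (pretrained, frozen) image posterior q(z_x|x) for the paired image x:
\mathcal{L}_{SCAN} \;=\; \mathbb{E}_{q_\phi(z_y \mid y)}\big[\log p_\theta(y \mid z_y)\big]
\;-\; \beta\, D_{KL}\big(q_\phi(z_y \mid y)\,\Vert\, p(z)\big)
\;-\; \lambda\, D_{KL}\big(q(z_x \mid x)\,\Vert\, q_\phi(z_y \mid y)\big)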
iclr_2018_By3v9k-RZ
Deep neural networks (DNNs) have had great success on NLP tasks such as language modeling, machine translation and certain question answering (QA) tasks. However, the success is limited on more knowledge-intensive tasks such as QA from a big corpus. Existing end-to-end deep QA models (Miller et al., 2016; Weston et al., 2014) need to read the entire text after observing the question, and therefore their complexity in responding to a question is linear in the text size. This is prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web. We propose to solve this scalability issue by using symbolic meaning representations, which can be indexed and retrieved efficiently with complexity that is independent of the text size. More specifically, we use sequence-to-sequence models to encode knowledge symbolically and generate programs to answer questions from the encoded knowledge. We apply our approach, called the N-Gram Machine (NGM), to the bAbI tasks and a special version of them ("life-long bAbI") which has stories of up to 10 million sentences. Our experiments show that NGM can solve both of these tasks accurately and efficiently. Unlike fully differentiable memory models, NGM's time complexity and answering quality are not affected by the story length. The whole system of NGM is trained end-to-end with REINFORCE (Williams, 1992). To avoid the high variance in gradient estimation that is typical of discrete latent variable models, we use beam search instead of sampling. To tackle the exponentially large search space, we use a stabilized auto-encoding objective and a structure tweak procedure to iteratively reduce and refine the search space.
This paper presents the n-gram machine, a model that encodes sentences into simple symbolic representations ("n-grams") which can be queried efficiently. The authors propose a variety of tricks (stabilized autoencoding, structured tweaking) to deal with the huge search space, and they evaluate NGMs on five of the 20 bAbI tasks. I am overall a fan of the general idea of this paper; scaling up to huge inputs is definitely a necessary research direction for QA. However, I have some concerns about the specific implementation and model discussed here. How much of the proposed approach is specific to getting good results on bAbI (e.g., conditioning the knowledge encoder on only the previous sentence, time stamps in the knowledge tuple, super small RNNs, four simple functions in the n-gram machine, structure tweaking) versus having a general-purpose QA model for natural language? Addressing some of these issues would likely prevent scaling to millions of (real) sentences, as the scalability is reliant on programs being efficiently executed (by simple string matching) against a knowledge storage. The paper is missing a clear analysis of NGM's limitations... the examples of knowledge storage from bAbI in the supplementary material are also underwhelming as the model essentially just has to learn to ignore stopwords since the sentences are so simple. In its current form, I am borderline but leaning towards rejecting this paper. Other questions: - is "n-gram" really the most appropriate term to use for the symbolic representation? N-grams are by definition contiguous sequences... The authors may want to consider alternatives. - why focus only on extractive QA? The evaluations are only conducted on 5 of the 20 bAbI tasks, so it is hard to draw any conclusions from the results as to the validity of this approach. Can the authors comment on how difficult it will be to add functions to the list in Table 2 to handle the other 15 tasks? Or is NGM strictly for extractive QA? - beam search is performed on each sentence in the input story to obtain knowledge tuples... while the answering time may not change (as shown in Figure 4) as the input story grows, the time to encode the story into knowledge tuples certainly grows, which likely necessitates the tiny RNN sizes used in the paper. How long does the encoding time take with 10 million sentences? - Need more detail on the programmer architecture, is it identical to the one used in Liang et al., 2017?
iclr_2018_SyL9u-WA-
Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs). In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that arise in its training. Specifically, we parameterize the transition matrix by its singular value decomposition (SVD), which allows us to explicitly track and control its singular values. We attain efficiency by using tools that are common in numerical linear algebra, namely Householder reflectors for representing the orthogonal matrices that arise in the SVD. By explicitly controlling the singular values, our proposed svdRNN method allows us to easily solve the exploding gradient problem and we observe that it empirically solves the vanishing gradient issue to a large extent. We note that the SVD parameterization can be used for any rectangular weight matrix, hence it can be easily extended to any deep neural network, such as a multi-layer perceptron. Theoretically, we demonstrate that our parameterization does not lose any expressive power, and show how it potentially makes the optimization process easier. Our extensive experimental results also demonstrate that the proposed framework converges faster, and has good generalization, especially in capturing long range dependencies, as shown on the synthetic addition and copying tasks.
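As a concrete illustration of the parameterization described above, an orthogonal factor can be represented as a product of Householder reflectors, so the transition matrix W = U Sigma V^T is stored as a handful of reflector vectors plus explicitly controlled singular values. The following is a minimal numpy sketch in my own notation, not the authors' implementation:

import numpy as np

def householder(v):
    """Householder reflector H = I - 2 v v^T / (v^T v); H is orthogonal and symmetric."""
    v = v.reshape(-1, 1)
    return np.eye(len(v)) - 2.0 * (v @ v.T) / (v.T @ v)

def orthogonal_from_reflectors(vs):
    """Product of Householder reflectors; any orthogonal matrix is such a product of at most n reflectors."""
    Q = np.eye(len(vs[0]))
    for v in vs:
        Q = Q @ householder(v)
    return Q

# Hypothetical SVD parameterization of a transition matrix: learnable reflector
# vectors (us, vs) and singular values sigma, which can be clipped or bounded to
# keep gradients from exploding.
n, k = 8, 4
rng = np.random.default_rng(0)
us = [rng.standard_normal(n) for _ in range(k)]
vs = [rng.standard_normal(n) for _ in range(k)]
sigma = np.clip(rng.standard_normal(n) + 1.0, 0.9, 1.1)   # singular values kept near 1
W = orthogonal_from_reflectors(us) @ np.diag(sigma) @ orthogonal_from_reflectors(vs).T
# Sanity check: the singular values of W equal the entries of sigma (up to ordering).
print(np.sort(np.linalg.svd(W, compute_uv=False)), np.sort(sigma))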
This paper suggests a reparametrization of the transition matrix. The proposed reparametrization, which is based on the Singular Value Decomposition, can be used for both recurrent and feedforward networks. The paper is well-written and the authors explain related work adequately. The paper is a follow-up to Unitary RNNs, which suggest a reparametrization that forces the transition matrix to be unitary. The problem of vanishing and exploding gradients in deep networks is very challenging, and any work that sheds light on this problem can have a significant impact. I have a few comments on the experiment section: - Choice of experiments. The authors have chosen the UCR datasets and MNIST for the experiments, while other benchmarks are more common. For example, the adding problem, the copying problem, the permuted MNIST problem and language modeling are the common experiments in the context of RNNs. For feedforward settings, classification on CIFAR10 and CIFAR100 is often reported. - Stopping condition. The plots suggest that the optimization has stopped early for some models. Is this because of some stopping condition or because of gradient explosion? Is there a way to avoid this? - Quality of figures. The figures are very hard to read because of the small font. Also, the captions need to describe the figures in more detail.
iclr_2018_H1BHbmWCZ
In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques. The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks (GANs). The second and the third are supervised networks that learn basic concepts of spatial locality and quantity respectively using Convolutional Neural Networks (CNNs). The three thrusts together are based on the approach of Experiential Robot Learning, introduced in a previous publication. While the results are not yet ripe for implementation, we believe they constitute a stepping stone towards the autonomous development of robotic visual modules.
This work explores some approaches in the object detection field of computer vision: (a) a soft attention map based on the activations of convolutional layers, (b) a classification of the location of an object in a 3x3 grid over the image, (c) an autoencoder that the authors claim to be aware of the multiple object instances in the image. These three proposals are presented in the framework of a robot vision module, although neither the experiments nor the dataset correspond to this domain. From my perspective, the work is very immature and seems far from the current state of the art on object detection, in terms of vocabulary, performance and challenges. The proposed techniques are assessed on a dataset which is not described, and the results are not compared with any other technique. This important flaw in the evaluation prevents any fair comparison with the state of the art. The text is also difficult to follow. The three contributions seem disconnected and could have been presented in separate works with a deeper discussion. In particular, I have serious problems understanding: 1. What exactly is the contribution of the CNN pre-trained on ImageNet when learning the soft-attention maps? The reference to a GAN architecture seems very forced and out of scope. 2. What is the interest of the localization network? The task it addresses seems very simple, and in any case it requires manual annotation of a dataset of objects in each of the predefined locations in the 3x3 grid. 3. The authors talk about an autoencoder architecture, but also about a classification network where the labels correspond to the object count. I could not understand what exactly is assessed in this section. Finally, the authors violate the double-blind review policy by clearly referring to their previous work on Experiential Robot Learning. I would encourage the authors to focus on one of the research lines they point to in the paper and go deeper into it, with a clear understanding of the state of the art and the specific challenges these state-of-the-art techniques may encounter in the case of robotic vision.
iclr_2018_rkEfPeZRb
Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradients for efficient communication, but they either suffer from a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve a very high compression ratio while maintaining the resulting model accuracy. We also analyze the efficiency using computation and communication cost models and provide evidence that this method enables distributed deep learning for many scenarios with commodity environments.
The paper proposes a novel way of compressing gradient updates for distributed SGD, in order to speed up overall execution. While the technique is novel as far as I know (eq. (1) in particular), many details in the paper are poorly explained (some I am unable to understand), and the experimental results do not demonstrate that the targeted problem is actually alleviated. More detailed remarks: 1: Motivating with ImageNet taking over a week to train seems misplaced when we have papers claiming to train ImageNet in 1 hour, 24 mins, 15 mins... 4.1: Lemma 4.1 seems to require B > 1; otherwise, clarify the definition of V_B. 4.2: This section is not fully comprehensible to me. - It seems you are confusingly overloading the term gradient and derived words (also in other parts of the paper). What is the "maximum value of gradients in a matrix"? Make sure to use something else when talking about individual elements of a vector (which is constructed as an average of gradients), etc. - Rounding: do you use deterministic or random rounding? Do you then store the inaccuracy again? - I don't understand the definition of d. It seems you subtract the logarithm of a gradient from a scalar. - In total, I really don't know what the object that actually gets communicated is, and consequently, when you remark that this can be combined with QSGD and the material below it, I don't understand it. This section has to be thoroughly explained, perhaps with some illustrative examples. 4.3: allgatherv remark: does that mean that this approach would not scale well to a higher number of workers? 4.4: The remarks about quantization and mantissa manipulation are again not clear to me, nor is the point of doing so; possibly because of the problems above. 5: I think this section is not too useful unless you can accompany it with an actual efficient implementation and contrast the practical performance. 6: Given that I don't understand how you compress the information being communicated, it is hard to judge the utility of the method. The objective was to speed up training time because communication is the bottleneck. If you provide 12,000x compression, is it any more practically useful than providing 120x compression? What would be the difference in runtime? Such questions are never discussed. Further, given that the implementation you discuss involves masking the mantissa, I have serious concerns about whether the compression protocol is feasible to implement efficiently without writing some extremely low-level code. I think the soundness of work addressing this particular problem is damaged if it is not implemented properly (compared to other kinds of work in current ML-related research). Therefore I highly recommend including a proper time comparison with a baseline in the future. Further, I don't understand two things about the tables: a) how do you combine the proposed method with momentum in SGD? This is not discussed as far as I can see. b) What is "QSGD, 2bit"? If I remember the QSGD protocol correctly, there's no natural mapping of 2 bits to its parameters.
iclr_2018_HkpRBFxRb
Reinforcement Learning (RL) can model complex behavior policies for goaldirected sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. λ-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in λ-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of λ-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to λ-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and λ-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.
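For reference, the contrast drawn above is between a fixed exponentially decaying mixture of n-step returns and a learned weighting (my notation; exactly how the confidence-based weights are produced and normalized is my reading and may differ from the paper):

% Standard lambda-return: a fixed, exponentially decaying mixture of n-step returns G_t^{(n)}
G_t^{\lambda} \;=\; (1-\lambda) \sum_{n=1}^{\infty} \lambda^{\,n-1}\, G_t^{(n)}
% Confidence-based Autodidactic Return: the fixed weights are replaced by learned,
% confidence-derived weights w_n \ge 0 (normalized so that \sum_n w_n = 1)
G_t^{CAR} \;=\; \sum_{n} w_n\, G_t^{(n)}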
This paper revisits the idea of exponentially weighted lambda-returns at the heart of TD algorithms. The basic idea is that instead of geometrically weighting the n-step returns we should instead weight them according to the agent's own estimate of its confidence in its learned value function. The paper empirically evaluates this idea on Atari games with deep non-linear state representations, compared to state-of-the-art baselines. This paper is below the threshold because there are issues with (1) the motivation, (2) the technical details, and (3) the empirical results. The paper begins by stating that the exponential weighting of lambda-returns is ad-hoc and unjustified. I would say the idea is well justified in several ways. First, the lambda-return definition lends itself to online approximations that achieve a fully incremental online form with linear computation and nearly as good performance as the off-line version. Second, decades of empirical results illustrate the good performance of TD compared with MC methods, alongside an extensive literature of theoretical results. The paper claims that the exponential has been noted to be ad-hoc; please provide a reference for this. There have been several works that have noted that lambda can and perhaps should be changed as a function of state (Sutton and Barto, White and White [1], TD-Gammon). In fact, such works even note that lambda should be related to confidence. The paper should work harder to motivate why adapting lambda as a function of state---which has been studied---is not sufficient. I don't completely understand the objective. Returns with higher confidence should be weighted higher, according to the confidence estimate around the value function estimate as a function of state? With longer returns, n>>1, the role of the value function in the target is down-weighted by gamma^n---meaning its accuracy is of little relevance to the target. How does your formalism take this into account? The basic idea of the lambda-return assumes TD targets are better than MC targets due to variance, which places more weight on shorter returns. In addition, I don't understand how learning the confidence of the value function has a realizable target. We do not get supervised targets of the confidence of our value estimates. What is your network updating toward? The work of Konidaris et al. [2] is a more appropriate reference for this work (rather than the Thomas reference provided). Your paper does not very clearly differentiate itself from Konidaris's work here. Please expand on this. The experiments have some issues. One issue is that basic baselines could more clearly illustrate what is going on. There are two such baselines: random fixed weightings of the n-step returns, and persisting with the usual weighting but changing lambda on each time step (either randomly or according to some decay schedule). The first baseline is a sanity check to ensure that you are not observing some random effect. The second checks to see if your alternative weighting is simply approximating the benefits of changing lambda with time or state. I would say the current results indicate the conventional approach to TD is working as well as, if not better than, the new one. Looking at fig 3, it's clear that Kangaroo is skewing the results, and that overall the new method is performing worse. This is further confounded by fig 7, which attempts to illustrate the quality of the learned value functions. In Kangaroo, the domain where your method does best, the l2 error is worse.
On the other hand, in Seaquest and Space Invaders, where your method does worse, the l2 error is better. These results seem conflicting, or at least raise more questions than they answer. [1] A Greedy Approach to Adapting the Trace Parameter for Temporal Difference Learning. Adam White and Martha White. Autonomous Agents and Multi-agent Systems (AAMAS), 2016 [2] G. D. Konidaris, S. Niekum, and P. S. Thomas. TDγ: Re-evaluating complex backups in temporal difference learning. In Advances in Neural Information Processing Systems 24, pages 2402–2410. 2011.
iclr_2018_rJYFzMZC-
SIMULATING ACTION DYNAMICS WITH NEURAL PROCESS NETWORKS Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.
Summary This paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory and a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. The architecture is tested for tracking entities in recipes, as well as generating the natural language description for the next step in a recipe. It is compared against a suite of baselines, such as GRUs, Recurrent Entity Networks, Seq2Seq and the Neural Checklist Model. While I liked the overall paper, I am worried about the generality of the model, the qualitative analysis, and whether the comparison to Recurrent Entity Networks and non-neural baselines is fair. Strengths I believe the authors made a good effort in comparing against existing neural baselines (Recurrent Entity Networks, Neural Checklist Model) *for their task*. That said, it is unclear to me how generally applicable the method is and whether the comparison against Recurrent Entity Networks is fair (see Weaknesses). I like the ablation study. Weaknesses While I find the Neural Process Networks architecture interesting and I acknowledge that it outperforms Recurrent Entity Networks for the presented tasks, after reading the paper it is not clear to me how generally applicable the architecture is. Some design choices seem rather tailored to the task at hand (manual collection of actions, MTurk annotation in section 3.1) and I am wondering where else the authors see their method being applied given that the architecture relies on all entities and actions being known in advance. My understanding is that the architecture could be applied to bAbI and CBT (the two tasks used in the Recurrent Entity Networks paper). If that is the case, a fair comparison to Recurrent Entity Networks would have been to test against Recurrent Entity Networks on these tasks too. If the architecture cannot be applied to these tasks, the authors should explain why. I am not convinced by the qualitative analysis. Table 2 tells me that even for the best model the entity selection performance is rather unreliable (only 55.39% F1), yet all examples shown in Table 3 look really good, missing only the two entities oil (1) and sprinkles (3). This suggests that these examples were cherry-picked and I would like to see examples that are sampled randomly from the dev set. I have a similar concern regarding the generation task. First, it is not mentioned where the examples in Table 6 are taken from – is it the train, dev or test set? Second, the overall BLEU score seems quite low even for the best model, yet the examples in Table 6 look really good. In my opinion, a good qualitative analysis should also discuss failure cases. Since the BLEU score is so low here, you might also want to compare the perplexity of the models. The qualitative analysis in Table 5 is not convincing either. In Appendix A.1 it is mentioned that word embeddings are initialized from word2vec trained on the training set. My suspicion is that one would get the clustering in Table 4 already from those pretrained vectors, maybe even when pretrained on the Google News corpus. Hence, it is not clear what propagating gradients through the Neural Process Networks into the action embeddings adds, or put differently, why does it have to be a differentiable architecture when an NLP pipeline might be enough?
This could easily be tested by another ablation where action embeddings are pretrained using word2vec and then fixed during training of the Neural Process Network. Moreover, in 3.3 it is mentioned that even the Action Selection is pretrained, which makes me wonder what is actually trained jointly in the architecture and what is not. I think the difficulty of the task at hand needs to be discussed at some point, ideally early in the paper. Until examples on page 7 are shown, I did not have a sense for why a neural architecture is chosen. For example, in 2.3 it is mentioned that for "wash and cut" the two functions fwash and fcut need to be selected. For this example, this seems trivial as the functions have the same name (and you could even have a function per name!). As far as I understand, the point of the action selector is to have only a fixed number of learned actions, so that multiple words (cut, slice, etc.) select the same action fcut. Otherwise (if there is little language ambiguity) I would not see the need for a complex neural architecture. Related to that, a non-neural baseline for the entity selection task that in my opinion definitely needs to be added is extracting entities using a pretrained NER system and returning all of them as the selection. p2 Footnote 1: So if I understand this correctly, this work builds upon a dataset of over 65k recipes from Kiddon et al. (2016), but detailed annotations were created for only 875 of those? Minor Comments p1: The statement "most natural language understanding algorithms do not have the capacity …" should be backed by a reference. p2: "context representation ht" – I would directly mention that this is a sentence encoding. p3: 2.4: I have the impression that what you are describing here is known in the literature as entity linking. p3 Eq.3: Isn't c3*0 always a vector of zeros? p4 Eq.6: W4 is an order-3 tensor, correct? p4 Eq.8: What are YC and WC here and what are their dimensions? I am confused by the softmax, as my understanding (from reading the paragraph on the Action Selection Loss on p.5) was that the expression in the softmax here is a scalar (as it is done for every possible action), so this should be a sigmoid to allow for multiple actions to attain a probability of 1? p5: "See Appendix for details" -> "see Appendix C for details" p5 3.3: Could you elaborate on the heuristic for extracting verb mentions? Is only one verb mention per sentence extracted? p5: "trained to minimize cross-entropy loss" -> "trained to minimize the cross-entropy loss" p5 3.3: What is the global loss? p6: "been read (§2.5." -> "been read (§2.5)." p6: "We encode these vectors using a bidirectional GRU" – I think you are composing a fixed-dimensional vector from the entity vectors? What's eI? p7: For which statement is (Kim et al. 2016) the reference? Surely, they did not invent the Hadamard product. p8: "Our model, in contrast" use" -> "Our model, in contrast, uses". p8 Related Work: I think it is important to mention that existing architectures such as Memory Networks could, in principle, learn to track entities and devote part of their parameters to learning the effect of actions. What Neural Process Networks are providing is a strong inductive bias for tracking entities and learning the effect of actions that is useful for the task considered in this paper. As mentioned in the weaknesses, this might however come at the price of a less general model, which should be discussed. # Update after the rebuttal Thanks for the clarifications and for updating the paper.
I am increasing my score by two points and expect to see the ablations as well as the NER baseline mentioned in the rebuttal in the next revision of the paper. Furthermore, I encourage the authors to include the analysis of pretrained word2vec embeddings vs the embeddings learned by this architecture into the paper.
iclr_2018_H1u8fMW0b
We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression. Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition/compression rates.
This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images. It describes a cognitive architecture for the same, and provides analyses in terms of processing compression and "confirmation biases" in the model.

– Active perception, and more specifically recognition through saccades (or viewpoint selection), is an interesting biologically-inspired approach and seems like an intuitive and promising way to improve efficiency. The problem and its potential applications are well motivated.
– The perception-driven control formulation is well-detailed and simple to follow.
– The achieved compression rates are significant and impressive, though an additional demonstration of performance on more challenging datasets would have been more compelling.

Questions and comments:
– While an 85% compression rate is significant, 88% accuracy on MNIST seems poor. A plot demonstrating the tradeoff of accuracy for compression (by varying Href or other parameters) would provide a more complete picture of performance. Knowing the baseline performance (without active inference) would help put the numbers in perspective by providing a performance bound due to modeling choices.
– What does the distribution of the number of saccades required per recognition (for a given threshold) look like over the entire dataset, i.e. how many are dead-easy vs difficult?
– Steady state assumption: How can this be relaxed to further generalize to non-static scenes?
– Figure 3 is low resolution and difficult to read.

Post-rebuttal comments: I have revised my score after considering comments from other reviewers and the revised paper. While the revised version contains more experimental details, the paper in its present form lacks comparisons to other gaze selection and saliency models, which are required to put the results in context. The paper also contains grammatical errors and is somewhat difficult to understand. Finally, while it proposes an interesting formulation of a well-studied problem, more comparisons and analysis are required to validate the approach.
iclr_2018_SkHDoG-Cb
SIMULATED+UNSUPERVISED LEARNING WITH ADAPTIVE DATA GENERATION AND BIDIRECTIONAL MAPPINGS
Collecting a large dataset with high quality annotations is expensive and time-consuming. Recently, Shrivastava et al. (2017) propose Simulated+Unsupervised (S+U) learning: It first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data to the ones that resemble real data, and then trains a learning model on the translated data. Bousmalis et al. (2017b) propose a similar framework that jointly trains a translation mapping and a learning model. While these algorithms are shown to achieve the state-of-the-art performances on various tasks, it may have a room for improvement, as they do not fully leverage flexibility of data simulation process and consider only the forward (synthetic to real) mapping. Inspired by this limitation, we propose a new S+U learning algorithm, which fully leverage the flexibility of data simulators and bidirectional mappings between synthetic and real data. We show that our approach achieves the improved performance on the gaze estimation task, outperforming (Shrivastava et al., 2017).
Review, ICLR 2018, Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings

Summary: The paper presents several extensions to the method presented in SimGAN (Shrivastava et al. 2017). First, it adds a procedure to make the distribution of parameters of the simulation closer to the one in real world images. A predictor is trained on simulated images created with a manually initialized distribution. This predictor is used to estimate pseudo labels for the unlabeled real-world data. The distribution of the estimated pseudo labels is used to produce a new set of simulated images. This process is iterated. Second, it adds the idea of cycle consistency (e.g., from CycleGAN) in order to counter mode collapse and label shift. Third, since cycle consistency does not fully solve the label shift problem, a feature consistency loss is added. Finally, in contrast to all related methods, the final system used for inference is not a predictor trained on a mix of real and "fake" images from the real-world target domain. Instead the predictor is trained purely on synthetic data and it is fed real world examples by using the back/real-to-sim-generator (trained in conjunction with the forward mapping cycle) to map the real inputs to "fake" synthetic ones.

The paper is well written. The novelty is incremental in most parts, but the overall system can be seen as novel. In particular, I am not aware of any published work that uses the (backwards) real-to-sim generator plus a sim-only trained predictor for inference (although I personally know several people who had the same idea and have been working on it). I like this part because it makes perfect sense not to let the generator hallucinate real-world effects on rather clean simulated data, but the other way around, remove all kinds of variations to produce a clean image from which the prediction should be easier.

The paper should include Bousmalis et al., "Unsupervised Pixel-Level Domain Adaptation With Generative Adversarial Networks", CVPR 2017 in its discussion, since it is very closely related to Shrivastava et al. 2017. With respect to the feature consistency loss, the paper should also discuss related work defining losses over feature activations for very similar reasons, such as in image stylization (e.g. L. A. Gatys et al. "Image Style Transfer Using Convolutional Neural Networks" CVPR 2016, L. A. Gatys et al. "Controlling Perceptual Factors in Neural Style Transfer" CVPR 2017), or the recently presented "Photographic Image Synthesis with Cascaded Refinement Networks", ICCV 2017. Bousmalis et al., "Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping", arXiv:1709.07857, even uses the same technique in the context of training GANs.

Adaptive Data Generation: I do not fully see the point in matching the distribution of parameters of the real world samples with the simulated data. For the few, easily interpretable parameters in the given task it should be relatively easy to specify reasonable ranges. If the simulation in some edge cases produces samples that are beyond the range of what occurs in the real world, that is maybe not very efficient, but I would be surprised if it ultimately hurt the performance of the predictor. I do see the advantage when training the GAN though, since a good discriminator would learn to pick out those samples as generated. Again, though, I am not very sure whether that would hurt the performance of the overall system in practice.
Limiting the parameters to those values of the real world data also seems rather restricting. If the real world data does not cover certain ranges, not because those values are infeasible or infrequent, but just because it so happens that this range was not covered in the data acquisition, the simulation could be used to fill in those ranges. Additionally, the whole procedure of training on sim data and then pseudo-labeling the real data with it is based on the assumption that a predictor trained on simulated data only already works quite well on real data. It might be possible in the case of the task at hand, but for more challenging domain adaptation problems it might not be feasible. There is also no guarantee for the convergence of the cycle, which is also evident from the experiments (Table 1: after three iterations the angle error increases again). (The use of the Hellinger distance is unclear to me since it, as explained in the text, does not correspond with what is being optimized.) In the experiments the cycle was stopped after two iterations. However, how would you know when to stop if you didn't have ground truth labels for the real world data?

Comparisons: The experiments should include a comparison to using the forward generator trained in this framework to train a predictor on "fake" real data and test it on real data (i.e., a line "ours | RS | R | ?" in Table 2, and a more direct comparison to Shrivastava). This would be necessary to prove the benefit of using the back-generator + sim trained predictor.

Detailed comments:
* Figure 1 seems not to be referenced in the text.
* I don't understand the choice for the reduction of the sim parameters. Why were, e.g., the yaw and pitch parameters of the eyeball set equal to those of the camera? Also, I guess there is a typo in the last equality (pitch and yaw of the camera?).
* The Bibliography needs to be checked. Names of journals and conferences are inconsistent, long and short forms mixed, year several times, "Proceedings" multiple times, ...
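For clarity, here is a minimal pseudocode sketch of the adaptive data generation loop as I understand it from the paper's description; the helper names (simulate, train_predictor, fit_param_distribution) are placeholders of mine, not the authors' code, and the stopping issue raised above is left visible on purpose.

# Hedged sketch of the adaptive data generation loop as described in this review.
# All helper functions below are hypothetical placeholders, not the authors' API.

def adaptive_data_generation(real_images, init_param_dist, n_iters=3, n_sim=10000):
    param_dist = init_param_dist
    for it in range(n_iters):
        # 1. Render labeled synthetic images from the current parameter distribution.
        sim_images, sim_labels = simulate(param_dist, n_sim)
        # 2. Train a predictor on the synthetic data only.
        predictor = train_predictor(sim_images, sim_labels)
        # 3. Pseudo-label the unlabeled real images with that predictor.
        pseudo_labels = [predictor(x) for x in real_images]
        # 4. Refit the simulation parameter distribution to the pseudo-label distribution.
        param_dist = fit_param_distribution(pseudo_labels)
        # Note: without real ground truth there is no obvious stopping criterion,
        # which is exactly the concern raised above.
    return param_dist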
iclr_2018_ByW5yxgA-
This paper presents a novel variant of hierarchical hidden Markov models (HMMs), the multiscale hidden Markov model (MSHMM), and an associated spectral estimation and prediction scheme that is consistent, finds global optima, and is computationally efficient. Our MSHMM is a generative model of multiple HMMs evolving at different rates where the observation is a result of the additive emissions of the HMMs. While estimation is relatively straightforward, prediction for the MSHMM poses a unique challenge, which we address in this paper. Further, we show that spectral estimation of the MSHMM outperforms standard methods of predicting the asset covariance of stock prices, a widely addressed problem that is multiscale, non-stationary, and requires processing huge amounts of data.
The paper focuses on a very particular HMM structure which involves multiple, independent HMMs. Each HMM emits an unobserved output with an explicit duration period. This explicit duration modelling captures multiple scales of temporal resolution. The actual observations are a weighted linear combination of the emissions from each latent HMM. The structure allows for fast inference using a spectral approach.

I found the paper unclear and lacking in detail in several key aspects:
1. It is unclear to me from Algorithm 2 how the weight vectors w are estimated. This is not adequately explained in the section on estimation.
2. The authors make the assumption that each HMM injects noise into the unobserved output which then gets propagated into the overall observation. What are the reasons for this choice of model over a simpler model where the output of each HMM is uncorrupted?
3. The simulation example does not really demonstrate the ability of the MSHMM to do anything other than recover structure from data simulated under an MSHMM. It would be more interesting to apply it to data simulated under non-Markovian or other setups that would enable richer frequency structures to be included, and to assess the ability of the MSHMM to capture these.
4. The real data experiments show some improvements in predictive accuracy with fast inference. However, the authors do not give a sufficiently broad exploration of the representations learnt by the model that would allow us to understand the regimes in which the model would be advantageous.

Overall, the paper presents an interesting approach but the work lacks maturity. Furthermore, simulation and real data examples to explore the properties and utility of the method are required.
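To make the generative structure described above concrete, the following is a small NumPy sketch of several independent chains evolving at different time scales whose emissions are additively combined. The Gaussian emissions, the fixed dwell times standing in for explicit duration modelling, and the per-chain weights are simplifying assumptions of mine, not the authors' model or estimation code.

import numpy as np

def sample_mshmm(T, chains, rng=np.random.default_rng(0)):
    """Sample a T-step observation sequence from an additive multiscale HMM sketch.

    `chains` is a list of dicts with keys:
      'A'     : (S, S) transition matrix,
      'means' : (S,) emission means (Gaussian emissions assumed here),
      'dwell' : number of steps each hidden state persists (the time scale),
      'w'     : scalar weight of this chain in the observation.
    """
    obs = np.zeros(T)
    for c in chains:
        S = c['A'].shape[0]
        state = rng.integers(S)
        for t in range(T):
            if t > 0 and t % c['dwell'] == 0:      # slower chains switch less often
                state = rng.choice(S, p=c['A'][state])
            obs[t] += c['w'] * (c['means'][state] + 0.1 * rng.normal())
    return obs

# Example: a fast and a slow chain contributing to the same observation.
fast = {'A': np.array([[0.9, 0.1], [0.1, 0.9]]), 'means': np.array([-1.0, 1.0]),
        'dwell': 1, 'w': 0.7}
slow = {'A': np.array([[0.8, 0.2], [0.2, 0.8]]), 'means': np.array([0.0, 2.0]),
        'dwell': 10, 'w': 0.3}
x = sample_mshmm(200, [fast, slow])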
iclr_2018_SJPpHzW0-
We study the problem of explaining a rich class of behavioral properties of deep neural networks. Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by training convolutional neural networks on Pubfig, ImageNet, and Diabetic Retinopathy datasets. Our evaluation demonstrates that influence-directed explanations (1) localize features used by the network, (2) isolate features distinguishing related instances, (3) help extract the essence of what the network learned about the class, and (4) assist in debugging misclassifications.
SUMMARY
========
This paper proposes to measure the "influence" of single neurons w.r.t. a quantity of interest represented by another neuron, typically w.r.t. an output neuron for a class of interest, by simply taking the gradient of the corresponding output neuron w.r.t. the considered neuron. This gradient is used as is, given a single input instance, or else gradients are averaged over several input instances. In the latter case the averaging is described by an ad-hoc distribution of interest P which is introduced in the definition of the influence measure; however, in the present work only two types of averages are practically used: either the average is performed over all instances belonging to one class, or over all input instances. In other words, standard gradient backpropagation values (or averages of them) are used as a proxy to quantify the importance of neurons (these neurons being within hidden layers or at the input layer), and are intended to better explain the classification, or sometimes even misclassification, performed by the network. The proposed importance measure is theoretically justified by stating a few properties (called axioms) an importance measure should generally verify, and then showing the proposed measure fulfills these requirements. Empirically the proposed measure is used to inspect the classification of a few input instances, to extract "class-expert" neurons, and to find a preprocessing bug in one model. The only comparison to a related work method is done qualitatively on one image visualization, where the proposed method is compared to Integrated Gradients [Sundararajan et al. 2017].

WEAKNESSES
==========
The similarity and differences between the proposed method and related work are not made clear. For example, in the case of a single input instance, and when the quantity of interest is one output neuron corresponding to one class, the proposed measure is identical to the image-specific class saliency of [Simonyan et al. 2014]. The difference to Integrated Gradients [Sundararajan et al. 2017] at the end of Section 1.1 is also not clearly formulated: why is the constraint on distribution marginality weaker here?

An important class of explanation methods, namely decomposition-based methods (e.g. LRP, Excitation Backprop, Deep Taylor Decomposition), are not mentioned. Recent work (Montavon et al., Digital Signal Processing, 2017) discusses the advantages of decomposition-based methods over gradient-based approaches. Thus, the authors should clearly state the advantages/disadvantages of the proposed gradient-based method over decomposition-based techniques.

Concerning the theoretical justification: It is not clear how Axiom 2 ensures that the proposed measure only depends on points within the input data manifold. This is indeed an important issue, since otherwise the gradients in equation (1) might be averaged completely outside the data manifold and thus the influence measure would be unrelated to the data and the problem the neural network was trained on. Also the notation used in Axiom 5 is very confusing. Moreover, it seems this axiom is not even used in the proof of Theorem 2.

Concerning the experiments: The experimental setup, especially in Section 3.3.1, is not well defined: on which layer of the network is the mask applied? What is the "quantity of interest": shouldn't it be an output neuron value rather than h_i (as stated at the beginning of the fourth paragraph of Section 3.3.1)?
The proposed method should be quantitatively compared with other explanation techniques (e.g. by iteratively perturbing the most relevant pixels and tracking the performance drop; see Samek et al., IEEE TNNLS, 2017). The last example of explaining the bug is not very convincing, since the observation that class 2 distinctive features are very small in the image space, and thus might have been erased through the Gaussian blur, is not directly related to the influence measure and could have been made also independently of it.

CONCLUSION
==========
Overall this work does not introduce any new importance measure for neurons; it merely formalizes the use of standard backpropagation gradients as an influence measure. Using gradients as an importance measure was already done in previous work (e.g. [Simonyan et al. 2014]). Though taking the average of gradients over several input instances is new, this information might not be of great help for practical applications. Recent work also showed that raw gradients are less informative than decomposition-based quantities to explain the classification decisions made by a neural network.
iclr_2018_Hk5elxbRW
SMOOTH LOSS FUNCTIONS FOR DEEP TOP-K CLASSIFICATION
The top-k error is a common measure of performance in machine learning and computer vision. In practice, top-k classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-k classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-k optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naïve algorithm would require O(\binom{n}{k}) operations, where n is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of O(kn). Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k = 5. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.
The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domains get more and more precise, the problem of class confusion with very similar classes, both present and absent from the training dataset, is an important problem, and this paper is a promising contribution to handle those issues better. The paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models. As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps.

I have two main concerns with the presentation.

A/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss. This can be done by evaluating elementary symmetric polynomials at well-chosen values. The paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic. The paper also argues that GPU parallelization must be used to be able to efficiently train the network. Those claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful. The paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value. I am not sure why this is of interest - can't the loss evaluation already be parallelized trivially over examples in a training/testing minibatch? I believe the paper could justify this approach better by providing a bit more insight as to why it is required. For instance:
- What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials?
- How do those compare with CE and L_{5, 1} with the proposed method?
- Are numerical instabilities making this completely unfeasible? This would be especially interesting to understand: does this explode in practice, or are evaluations just slightly inaccurate without much accuracy loss?

B/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework. Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting.

Additional comments:
- Figure 2b: this visualization is confusing. It is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1. This is not convincing. An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting.
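For reference, the "usual recurrence relation" for elementary symmetric polynomials mentioned in point A/ is the following O(nk) update; this is a plain-Python illustration of the classical baseline being compared against, not the paper's GPU algorithm.

def elementary_symmetric(xs, k):
    """Return [e_0, e_1, ..., e_k] evaluated at the numbers in xs.

    Standard O(n*k) recurrence: after processing x_1..x_i,
    e_j <- e_j + x_i * e_{j-1}. This is the classical method the paper
    argues is numerically fragile in single precision.
    """
    e = [1.0] + [0.0] * k
    for x in xs:
        # go downwards in j so that e[j-1] still refers to the previous step
        for j in range(k, 0, -1):
            e[j] += x * e[j - 1]
    return e

# e_2(1, 2, 3) = 1*2 + 1*3 + 2*3 = 11
assert abs(elementary_symmetric([1.0, 2.0, 3.0], 2)[2] - 11.0) < 1e-12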
iclr_2018_ryUlhzWCZ
TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING
In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner's planning horizon as function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from the optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by the above mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal.
=== SUMMARY === The paper considers a combination of Reinforcement Learning (RL) and Imitation Learning (IL), in the infinite horizon discounted MDP setting. The IL part is in the form of an oracle that returns a value function V^e, which is an approximation of the optimal value function. The paper defines a new cost (or reward) function based on V^e, through shaping (Eq. 1). It is known that shaping does not change the optimal policy. A key aspect of this paper is to consider a truncated horizon problem (say horizon k) with the reshaped cost function, instead of an infinite horizon MDP. For this truncated problem, one can write the (dis)advantage function as a k-step sum of reward plus the value returned by the oracle at the k-th step (cf. Eq. 5). Theorem 3.3 shows that the value of the optimal policy of the truncated MDP w.r.t. the original MDP is only O(gamma^k eps) worse than the optimal policy of the original problem (gamma is the discount factor and eps is the error between V^e and V*). This suggests two things: 1) Having an oracle that is accurate (small eps) leads to good performance. If oracle is the same as the optimal value function, we do not need to plan more than a single step ahead. 2) By planning for k steps ahead, one can decrease the error in the oracle geometrically fast. In the limit of k —> inf, the error in the oracle does not matter. Based on this insight, the paper suggests an actor-critic-like algorithm called THOR (Truncated HORizon policy search) that minimizes the total cost over a truncated horizon with a modified cost function. Through a series of experiments on several benchmark problems (inverted pendulum, swimmer, etc.), the paper shows the effect of planning horizon k. === EVALUATION & COMMENTS === I like the main idea of this paper. The paper is also well-written. But one of the main ideas of this paper (truncating the planning horizon and replacing it with approximation of the optimal value function) is not new and has been studied before, but has not been properly cited and discussed. There are a few papers that discuss truncated planning. Most closely is the following paper: Farahmand, Nikovski, Igarashi, and Konaka, “Truncated Approximate Dynamic Programming With Task-Dependent Terminal Value,” AAAI, 2016. The motivation of AAAI 2016 paper is different from this work. The goal there is to speedup the computation of finite, but large, horizon problem with a truncated horizon planning. The setting there is not the combination of RL and IL, but multi-task RL. An approximation of optimal value function for each task is learned off-line and then used as the terminal cost. The important point is that the learned function there plays the same role as the value provided by the oracle V^e in this work. They both are used to shorten the planning horizon. That paper theoretically shows the effect of various error terms, including terms related to the approximation in the planning process (this paper does not do that). Nonetheless, the resulting algorithms are quite different. The result of this work is an actor-critic type of algorithm. AAAI 2016 paper is an approximate dynamic programming type of algorithm. There are some other papers that have ideas similar to this work in relation to truncating the horizon. For example, the multi-step lookahead policies and the use of approximate value function as the terminal cost in the following paper: Bertsekas, “Dynamic Programming and Suboptimal Control: A Survey from ADP to MPC,” European Journal of Control, 2005. 
The use of a learned value function to truncate the rollout trajectory in a classification-based approximate policy iteration method has been studied by Gabillon, Lazaric, Ghavamzadeh, and Scherrer, "Classification-based Policy Iteration with a Critic," ICML, 2011. Or in the context of Monte Carlo Tree Search planning, the following paper is relevant: Silver et al., "Mastering the game of Go with deep neural networks and tree search," Nature, 2016. Their "value network" has a similar role to V^e: it provides an estimate of the states at the truncated horizon to shorten the planning depth. Note that even though these aforementioned papers are not about IL, this paper's stringent requirement of having access to V^e essentially makes it similar to those papers.

In short, a significant part of this work's novelty has been explored before. Even though not being completely novel is totally acceptable, it is important that the paper better position itself compared to the prior art.

Aside from this main issue, there are some other comments:
- Theorem 3.1 is not stated clearly and may suggest more than what is actually shown in the proof. The problem is that it does not make clear that the choice of eps is not arbitrary. The proof works only for eps that is larger than 0.5. With the construction of the proof, if eps is smaller than 0.5, there would not be any error, i.e., J(\hat{pi}^*) = J(pi^*). The theorem basically states that if the error is very large (half of the range of the value function), the agent does not perform well. Is this an interesting case?
- In addition to the papers I mentioned earlier, there are some results suggesting that shorter horizons might be beneficial and/or sufficient under certain conditions. A related work is a theorem in the PhD dissertation of Ng: Andrew Ng, Shaping and Policy Search in Reinforcement Learning, PhD Dissertation, 2003 (Theorem 5 in Appendix 3.B: Learning with a smaller horizon). It is shown that if the error between Phi (equivalent to V^e here) and V* is small, one may choose a discount factor gamma' that is smaller than the gamma of the original MDP, and still have some guarantees. As the discount factor has an interpretation of the effective planning horizon, this result is relevant. The result, however, is not directly comparable to this work as the planning horizon appears implicitly in the form of 1/(1-gamma') instead of k, but I believe it is worth mentioning and possibly comparing.
- The IL setting in this work is that an oracle provides V^e, which is the same as (Ross & Bagnell, 2014). I believe this setting is relatively restrictive as in many problems we only have access to (state, action) pairs, or sequences thereof, and not the associated value function. For example, if a human is showing how a robot or a car should move, we do not easily have access to V^e (unless the reward function is known and we estimate the value with rollouts, which requires having a long trajectory). This is not a deal breaker, and I would not consider this a weakness of the work, but the paper should be more clear and upfront about this.
- The use of the differential operator nabla instead of the gradient of a function (a vector field) in Equations (10), (14), (15) is non-standard.
- Figures are difficult to read, as the colors corresponding to the confidence regions of different curves are all mixed up. Maybe it is better to use standard error instead of standard deviation.

=== After Rebuttal ===
Thank you for your answer. The revised paper has been improved.
I increase my score accordingly.
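For readers who want the truncated-horizon construction summarized in this review in symbols, a rough sketch is given below; this is my own paraphrase with illustrative notation and sign/cost conventions, not necessarily the paper's exact Eq. 5 or Theorem 3.3.

% Illustrative sketch only; notation and conventions may differ from the paper.
\[
  \pi_k^{*} \in \arg\max_{\pi}\;
  \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{k-1} \gamma^{t}\, r(s_t, a_t)
    \;+\; \gamma^{k}\, V^{e}(s_k)\right],
\]
\[
  J(\pi^{*}) - J(\pi_k^{*}) \;\le\; O\!\left(\gamma^{k}\,\epsilon\right),
  \qquad \text{where } \epsilon \approx \max_{s}\,\bigl|V^{e}(s) - V^{*}(s)\bigr| .
\]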
iclr_2018_SJgf6Z-0W
We introduce a new approach to estimate continuous actions using actor-critic algorithms for reinforcement learning problems. Policy gradient methods usually predict one continuous action estimate or parameters of a presumed distribution (most commonly Gaussian) for any given state which might not be optimal as it may not capture the complete description of the target distribution. Our approach instead predicts M actions with the policy network (actor) and then uniformly sample one action during training as well as testing at each state. This allows the agent to learn a simple stochastic policy that has an easy to compute expected return. In all experiments, this facilitates better exploration of the state space during training and converges to a better policy.
This paper describes an approach to stochastic control using RL that extends DDPG with a stochastic policy. A standard DDPG setup is extended such that the actor now produces M actions at each timestep. Only one of the M actions will be executed in the environment, using uniform sampling. The sampled action is the only one that will receive a gradient update from the critic network. The authors demonstrate that such a stochastic policy performs better on average in a series of benchmark control tasks.

I find the general idea of the work compelling, but the particular approach is rather poor. The fact that we are choosing the number of modes in the uniform distribution is a bit underwhelming (a more compelling architecture could have proposed a policy conditioned on Gaussian noise, for example, thus having better coverage of the distribution). I found the proposed approach to be under-analyzed and the stochastic aspects of the policy are undervalued.

The main claim being argued in the paper is that the proposed stochastic policy has better final performance on average than a deterministic policy, but the only practical difference seems to be a slightly more structured approach to exploration. However, very little attention is paid to trying different exploration methods with the deterministic policy (incidentally, Ornstein-Uhlenbeck process noise is not something I'm familiar with; a citation for the use of this noise for exploration as well as a more explicit explanation would have been appreciated).

One interpretation is that each of the M sub-policies follows a different mode of the Q-value distribution over the action space. But is this indeed the case? There is a brief analysis of this with cartpole, but a more complete look at how actions are clustered in the action space would make this paper much more compelling. Even in higher-dimensional action spaces, you could look at a t-SNE projection or cluster analysis to try and see how many modes the agent is reasoning over. Additionally, the baseline agent should have used additional exploration methods as these can quickly change the performance of the agent.

I also think that better learning is not the only redeeming aspect of a stochastic policy. In the face of a non-stationary environment, a stochastic policy will likely be much more robust. Additionally, it will have much better performance against adversarial environments. Given the remaining available space in the paper it would have been interesting to provide more insight into the proposed method's gains in these areas.
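To make the mechanism under discussion concrete, here is a minimal PyTorch-style sketch of an actor that proposes M actions and uniformly samples one, so that only the sampled head receives a gradient from the critic; this is my reconstruction from the description above, not the authors' code, and the architecture details (hidden size, tanh squashing) are placeholders.

import torch
import torch.nn as nn

class MultiHeadActor(nn.Module):
    """Actor that proposes M actions per state; one is chosen uniformly at random."""
    def __init__(self, state_dim, action_dim, M=4, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, action_dim) for _ in range(M)])

    def forward(self, state):
        h = self.body(state)
        # (batch, M, action_dim): all M candidate actions for each state
        actions = torch.stack([torch.tanh(head(h)) for head in self.heads], dim=1)
        idx = torch.randint(len(self.heads), (state.shape[0],))   # uniform head choice
        chosen = actions[torch.arange(state.shape[0]), idx]
        return chosen  # only the sampled head's parameters receive a critic gradient

# Usage: the actor loss is the usual DDPG objective, -critic(state, actor(state)).mean()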
iclr_2018_H1K6Tb-AZ
For inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model, which can dynamically scale across a computation range without comprising accuracy. To achieve this goal, Incomplete Dot Product (IDP) has been proposed to use only a subset of terms in dot products during forward propagation. However, there are some limitations, including noticeable performance degradation in operating regions with low computational costs, and essential performance limitations since IDP uses hand-crafted profile coefficients. In this paper, we extend IDP by proposing new training algorithms involving a single profile, which may be trainable or pre-determined, to significantly improve the overall performance, especially in operating regions with low computational costs. Specifically, we propose the Task-wise Early Stopping and Loss Aggregation (TESLA) algorithm, which is showed in our 3-layer multilayer perceptron on MNIST that outperforms the original IDP by 32% when only 10% of dot products terms are used and achieves 94.7% accuracy on average. By introducing trainable profile coefficients, TESLA further improves the accuracy to 95.5% without specifying coefficients in advance. Besides, TESLA is applied to the VGG-16 model, which achieves 80% accuracy using only 20% of dot product terms on CIFAR-10 and also keeps 60% accuracy using only 30% of dot product terms on CIFAR-100, but the original IDP performs like a random guess in these two datasets at such low computation costs. Finally, we visualize the learned representations at different dot product percentages by class activation map and show that, by applying TESLA, the learned representations can adapt over a wide range of operation regions.
An approach to adjusting inference speed, power consumption, or latency by using incomplete dot products (McDanel et al., 2017) is investigated. The approach is based on `profile coefficients' which are learned for every channel in a convolution layer, or for every column in the fully connected layer. Based on the magnitude of this profile coefficient, which determines the importance of this `filter,' individual components in a neural net are switched on or off. McDanel et al. (2017) propose to train such an approach in a stage-by-stage manner. Different from a recently proposed method by McDanel et al. (2017), the authors of this submission argue that the stage-by-stage training doesn't fully utilize the deep net performance. To address this issue a `loss aggregation' is proposed which jointly optimizes a deep net when multiple fractions of incomplete products are used. The method is evaluated on the MNIST and CIFAR-10 datasets and shown to outperform work on incomplete dot products by McDanel et al. (2017) by 32% in the low resource regime.

Summary:
——
In summary, I think the paper proposes an interesting approach but more work is necessary to demonstrate the effectiveness of the discussed method. The results are preliminary and should be extended to CIFAR-100 and ImageNet to be convincing. In addition, the writing should be improved as it is often ambiguous. See below for details.

Review:
—————
1. Experiments are only provided on very small datasets. In my opinion, this isn't sufficient to illustrate the effectiveness of the proposed approach. As a reader I would want to see results on CIFAR-100 and ImageNet using multiple network architectures, e.g., AlexNet and VGG16.
2. Usage of the incomplete dot product for the fully connected layer and the convolutional layer seems inconsistent. More specifically, while the profile coefficient is applied for every input element in Eq. (1), it's applied based on output channels in Eq. (2). This seems inconsistent and a comment like `These two approaches, however, are equivalent with negligible difference induced by the first hidden layer' is more confusing than clarifying.
3. The writing should be improved significantly and statements should be made more precise, e.g., `From now on, x% DP, where 0 \leq x \leq 100, means the x% of terms used in dot products'. While sentences like those can be deciphered, they aren't that appealing.
4. The loss functions in Eq. (3) should be made more precise. It remains unclear whether the profile coefficients and the weights are trained jointly, separately, incrementally, etc.
5. Algorithm 1 and Algorithm 2 call functions that aren't described/defined.
6. Baseline numbers for training on datasets without incomplete dot products should be provided.
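As an illustration of the incomplete dot product idea discussed above, here is a small NumPy sketch of an input-side (Eq. (1)-style) variant for a fully connected layer; the choice to order terms by profile-coefficient magnitude and the layer shapes are assumptions of mine, not the authors' implementation.

import numpy as np

def incomplete_dot_product(x, W, gamma, fraction):
    """Fully connected layer that uses only a fraction of the input terms.

    x        : (d,) input vector
    W        : (d, out) weight matrix
    gamma    : (d,) profile coefficients, assumed here to order terms by importance
    fraction : value in (0, 1], i.e. the x% DP operating point
    """
    d = x.shape[0]
    keep = max(1, int(round(fraction * d)))
    order = np.argsort(-gamma)            # most important coefficients first (assumption)
    idx = order[:keep]                    # terms actually used at this setting
    return (gamma[idx] * x[idx]) @ W[idx, :]

# With fraction=1.0 this reduces to an ordinary (profile-weighted) dense layer;
# with fraction=0.2 only the top 20% of terms contribute.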
iclr_2018_ryBnUWb0b
PREDICTING FLOOR LEVEL FOR 911 CALLS WITH NEURAL NETWORKS AND SMARTPHONE SENSOR DATA
In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a building via GPS signal changes. Second, we use a barometer equipped smartphone to measure the change in barometric pressure from the entrance of the building to the victim's indoor location. Unlike impractical previous approaches, our system is the first that does not require the use of beacons, prior knowledge of the building infrastructure, or knowledge of user behavior. We demonstrate real-world feasibility through 63 experiments across five different tall buildings throughout New York City where our system predicted the correct floor level with 100% accuracy.
Update: Based on the discussions and the revisions, I have improved my rating. However I still feel like the novelty is somewhat limited, hence the recommendation.
======================
The paper introduces a system to estimate a floor level via a mobile device's sensor data, using an LSTM to determine when a smartphone enters or exits a building, then using the change in barometric pressure from the entrance of the building to the indoor location. Overall the methodology is a fairly simple application of existing methods to a problem, and there remain some methodological issues (see below).

General Comments
- The claim that the bmp280 device is in most smartphones today doesn't seem to be backed up by the "comScore" reference (a simple ranking of manufacturers). Please provide the original source for this information.
- Almost all exciting results based on RNNs are achieved with LSTMs, so calling an RNN with LSTM hidden units a new name, IOLSTM, seems rather strange - this is simply an LSTM.
- There exist models for modelling multiple levels of abstraction, such as the contextual LSTM of [1]. This would be much more satisfying than the two-level approach taken here, would likely perform better, would remove the need for the clustering method, and would solve issues such as the user being on the roof. The only caveat is that it may require an encoding of the building (through a one-hot encoding) to ensure that the relationship between the floor height and barometric pressure is learnt. For unseen buildings a background class could be used, the estimators as used before, or aggregation of the other buildings by turning the whole vector on.
- It's not clear if a bias of 1 was added to the forget gate of the LSTM or not. This has been shown to improve results [2].
- Overall the whole pipeline feels very ad-hoc, with many hand-tuned parameters. Notwithstanding the network architecture, here I'm referring to the window for the barometric pressure, the Jaccard distance threshold, the binary mask lengths, and the time window for selecting p0.
- Are there plans to release the data and/or the code for the experiments? Currently the results would be impossible to reproduce.
- The typo in the accuracy given by the authors is somewhat worrying, given that the result is repeated several times in the paper.

Typographical Issues
- Page 1: "floor-level accuracy" back ticks
- Page 4: Figure 4.1→Figure 1; Nawarathne et al Nawarathne et al.→Nawarathne et al.
- Page 6: "carpet to carpet" back ticks
- Table 2: What does -4+ mean?
- References: The references should have capitalisation where appropriate. For example, Iodetector→IODetector, wi-fi→Wi-Fi, apple→Apple, iphone→iPhone, i→I etc.

[1] Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. Contextual LSTM (CLSTM) models for large scale NLP tasks. arXiv preprint arXiv:1602.06291, 2016.
[2] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350, 2015.
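For context, the second stage of the system reduces to converting a pressure change into a height and then a floor index; a self-contained sketch using the standard hypsometric approximation is below. The constants, the assumed per-floor height, and the indexing convention are generic textbook choices of mine, not the calibrated values used in the paper.

import math

def floor_from_pressure(p_entrance_hpa, p_indoor_hpa,
                        floor_height_m=3.5, temperature_k=288.15):
    """Estimate the floor level from the pressure drop relative to the entrance.

    Uses the hypsometric equation dh = (R*T / (g*M)) * ln(p0/p); the per-floor
    height and temperature are assumptions, not the paper's calibrated values.
    """
    R, g, M = 8.314462, 9.80665, 0.0289644   # gas constant, gravity, molar mass of air
    dh = (R * temperature_k) / (g * M) * math.log(p_entrance_hpa / p_indoor_hpa)
    return max(1, int(round(dh / floor_height_m)) + 1)   # 1 = ground floor (assumed)

# Example: a ~1.2 hPa drop corresponds to roughly 10 m, i.e. about the 4th floor:
# print(floor_from_pressure(1013.2, 1012.0))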
iclr_2018_r1QZ3zbAZ
Modern machine learning algorithms are often susceptible to adversarial examples -maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior. In this work, we show that adversarial examples exist in natural language classification: we formalize the notion of an adversarial example in this setting and describe algorithms that construct such examples. Adversarial perturbations can be crafted for a wide range of tasks -including spam filtering, fake news detection, and sentiment analysis -and affect different models -convolutional and recurrent neural networks as well as linear classifiers to a lesser degree. Constructing an adversarial example involves replacing 10-30% of words in a sentence with synonyms that don't change its meaning. Up to 90% of input examples admit adversarial perturbations; furthermore, these perturbations retain a degree of transferability across models. Our findings demonstrate the existence of vulnerabilities in machine learning systems and hint at limitations in our understanding of classification algorithms.
The paper shows that neural networks are sensitive to adversarial perturbation for a set of NLP text classifications. They propose constructing (model-dependent) adversarial examples by optimizing a function J (that doesn't seem defined in the paper) subject to a constraint c(x, x') < \gamma (i.e. that the original input and adversarial input should be similar). c is composed of two constraints:
1. || v - v' ||_2 < \gamma_1, where v and v' are bags of embeddings for each input
2. |log P(x') - log P(x)| < \gamma_2, where P is a language model

The authors then show for 3 classification problems ("Trec07p", "Yelp", and "News") and 4 models (Naive Bayes, LSTM, word CNNs, deep-char-CNNs) that the models perform considerably worse on adversarial examples than on the test set. Furthermore, to test the validity of their adversarial examples, the authors show the following:
1. Humans achieve somewhat similar accuracy on the original and adversarial examples (8 points higher on one dataset and 8 points lower on the other two)
2. Humans rate the writing quality of both the original and adversarial examples to be similar
3. The adversarial examples only somewhat transfer across models

My main questions/complaints/suggestions for the paper are:

- Novelty/Methodology. The paper has mediocre novelty given other similar papers recently. One question I have is about whether the generated examples are actually close to the original examples. The authors do show some examples that do look good, but do not provide any systematic study (e.g. via human annotation). This is a key challenge in NLP (as opposed to vision where the inputs are continuous so it is easy to perturb them and be reasonably sure that the image hasn't changed much). In NLP however, the words are discrete, and the authors measure the difference between an original example and the adversary only in continuous space, which may not actually be a good measure of how different they are. They do have some constraint that the fraction of changed words cannot differ by more than delta, but delta = 0.5 in the experiments, which is really large! (i.e. 50% of the words could be different according to Algorithm 1)

- Writing: the function J is never mathematically defined, and neither is the function c (except that it is known to be composed of the semantic/syntactic similarity constraints). The authors talk about "syntactic" similarity but then propose a language model constraint. I think a better word is "fluency" constraint. The results in Table 3 and Table 6 seem different; shouldn't the diagonal of Table 6 line up with the results in Table 3?

- Experimental methodology (more of a question since the authors are unclear): The authors write that "all adversarial examples are generated and evaluated on the test set". There are many hyperparameters in the authors' proposed approach; are these also tuned on the test set? That is unfair to the base classifier. The adversarial model should be tuned on the validation set, and then the same model should be used to generate test set examples. (The authors can even show the validation adversarial accuracy to show how/if it deviates from the test accuracy.)

- Lack of related work in NLP (see the anonymous comment for some examples). Even the related work in NLP that is cited, e.g. Jia and Liang 2017, is buried in the last page. The authors' introduction only refers to related works in vision/speech and ignores related NLP work.
Furthermore, adversarial perturbation is related to domain transfer (since both involve shifts between the training and test distribution) and it is well known for instance that models that are trained on Wall Street Journal perform poorly on other domains. See SJ Pan and Q Yang, A Survey on transfer learning, 2010, for some example references.
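For concreteness, the similarity constraints described at the top of this review can be sketched as a single validity check, as below; the helper functions (embed, lm_logprob), the threshold values, and the same-length word-substitution assumption are placeholders of mine rather than the paper's exact formulation.

import numpy as np

def is_valid_perturbation(x_words, xp_words, embed, lm_logprob,
                          gamma1=1.0, gamma2=2.0, delta=0.5):
    """Check the semantic/fluency constraints c(x, x') < gamma described above.

    embed(words) -> bag-of-embeddings vector; lm_logprob(words) -> log P(words).
    Both helpers and the thresholds are hypothetical placeholders.
    Assumes xp_words is obtained by word-for-word substitution (same length).
    """
    semantic_ok = np.linalg.norm(embed(x_words) - embed(xp_words)) < gamma1
    fluent_ok = abs(lm_logprob(xp_words) - lm_logprob(x_words)) < gamma2
    changed = sum(a != b for a, b in zip(x_words, xp_words)) / max(1, len(x_words))
    return semantic_ok and fluent_ok and changed <= delta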
iclr_2018_B1nZ1weCZ
LEARNING TO MULTI-TASK BY ACTIVE SAMPLING
One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training. In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision. Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones. We propose three distinct models under our active sampling framework. An adaptive method with extremely competitive multi-tasking performance. A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem. A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor critic methods for optimizing the multi-tasking performance directly. We demonstrate results in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance.
The authors show empirically that formulating multitask RL itself as an active learning and ultimately as an RL problem can be very fruitful. They design and explore several approaches to the active learning (or active sampling) problem, from a basic change to the distribution to UCB to feature-based neural-network based RL. The domain is video games. All proposed approaches beat the uniform sampling baselines and the more sophisticated approaches do better in the scenarios with more tasks (one multitask problem had 21 tasks).

Pros:
- very promising results with an interesting active learning approach to multitask RL
- a number of approaches developed for the basic idea
- a variety of experiments, on challenging multiple task problems (up to 21 tasks/games)
- paper is overall well written/clear

Cons:
- Comparison only to a very basic baseline (i.e. uniform sampling). Couldn't comparisons be made, in some way, to other multitask work?

Additional comments:
- The assumption of the availability of a target score goes against the motivation that one need not learn individual networks; the authors say instead one can use 'published' scores, but that only assumes someone else has done the work (and furthermore, published it!). The authors do have a section on eliminating the need (by doubling an estimate for each task), which makes this work more acceptable (shown for 6 tasks, or MT1, compared to baseline uniform sampling). Clearly there is more to be done here as a future direction (this could be mentioned in the future work section).
- The averaging metrics (geometric, harmonic vs arithmetic, whether or not to clip the max score achieved) are somewhat interesting, but in the main paper, I think they are only used in section 6 (seems like a waste of space). Consider moving some of the results on showing drawbacks of the arithmetic mean with no clipping (table 5 in appendix E) from the appendix to the main paper.
- There can be several benefits to multitask learning, in particular time and/or space savings in learning new tasks via learning more general features. Sections 7.2 and 7.3 on specificity/generality of features were interesting.
  --> Can the authors show that a trained network (via their multitask approach) learns significantly faster on a brand new game (that's similar to games already trained on), compared to learning from scratch?
  --> How does the performance improve/degrade (or the variance), on the same set of tasks, if the different multitask instances (MT_i) formed a superset hierarchy, i.e. if MT_2 contained all the tasks/games in MT_1? Could training on MT_2 help average performance on the games in MT_1? It could go either way since the network has to allocate resources to learn other games too. But is there a pattern?
- 'Figure 7.2' in section 7.2 refers to Figure 5.
- Can you motivate/discuss better why not providing the identity of a game as an input is an advantage? Why not explore both possibilities? What are the pros/cons? (section 3)
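As an illustration of the UCB-based meta-learner mentioned above (casting next-task selection as a bandit problem), here is a small sketch; the reward definition and exploration constant are assumptions of mine rather than the paper's exact choices.

import math

class UCBTaskSampler:
    """Pick the next training task with UCB1, rewarding tasks the agent is bad at."""
    def __init__(self, n_tasks, c=2.0):
        self.counts = [0] * n_tasks
        self.mean_reward = [0.0] * n_tasks
        self.c = c
        self.t = 0

    def next_task(self):
        self.t += 1
        for i, cnt in enumerate(self.counts):      # play each arm once first
            if cnt == 0:
                return i
        ucb = [m + self.c * math.sqrt(math.log(self.t) / cnt)
               for m, cnt in zip(self.mean_reward, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, task, normalized_score):
        # Reward = shortfall from the target score, so harder tasks
        # (lower normalized score) get sampled more often.
        r = 1.0 - min(1.0, normalized_score)
        self.counts[task] += 1
        self.mean_reward[task] += (r - self.mean_reward[task]) / self.counts[task]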
iclr_2018_ByJIWUnpW
AUTOMATICALLY INFERRING DATA QUALITY FOR SPATIOTEMPORAL FORECASTING
Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable. Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results. The problem could be exacerbated in black-box prediction models, such as deep neural networks. In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels. Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures. We evaluate our proposed method on forecasting temperatures in Los Angeles.
The paper is an application of neural nets to data quality assessment. The authors introduce a new definition of data quality that relies on the notion of local variation defined in (Zhou and Schölkopf, 2004), and they extend it to multiple heterogeneous data sources. The data quality function is learned using a GCN as defined in (Kipf and Welling, 2016).

1) How many neighbors are used in the experiments? Is this fixed, or is it defined purely by the Gaussian kernel weights as mentioned in 4.2? Setting all weights less than 0.9 to zero seems quite abrupt. Could you provide a reason for this? How many neighbors fit in this range?
2) How many data points? What is the temporal resolution of your data (every day/hour/minute/etc.)? What is the value of N, T?
3) The bandwidth of the Gaussian kernel (\gamma) is quite different (0.2 and 0.6) for the two datasets from Weather Underground (WU) and Weather Bug (WB) (sect. 4.2). The kernel is computed on the features (e.g., latitude, longitude, vegetation fraction, etc.). Location, longitude, distance from coast, etc. are the same no matter the data source (WU or WB). Maybe the way they compute other features (e.g., vegetation fraction) varies slightly, but I would guess the \gamma should be (roughly) the same.
4) Are the features (e.g., latitude, longitude, vegetation fraction, etc.) normalized in the kernel? If they are given equal weight (which is in itself questionable) they should be normalized, otherwise some of them will always dominate the distances. If they were indeed normalized, that should be made clear in the paper.
5) Why do you choose the summer months? How does the framework perform on other months and on meteorological signals other than temperature? The application is very nice and complex, but I find that the experiments are a little bit too limited.
6) The notations w and W are used for different things, and that is slightly confusing. Usually one is used as a matrix notation of the other.
7) I would tend to associate data quality with how noisy observations are at a certain node, and not heterogeneity. It would be good to add some discussion on noise in the paper. How do you define an anomaly in 5.2? Is it a spatial or temporal anomaly? Not sure I understand the difference between an anomaly and a bridge node.
8) "A bridge node highly likely has a lower data quality level due to the heterogeneity". I would think a bridge node is very relevant, and I don't necessarily see it as having a lower data quality. This approach seems to give more weight to very similar data, while discarding the transitions, which in a meteorological setting could be the most relevant?!
9) Update the references, some papers have updated information (e.g., Bruna et al. - ICLR 2014, Kipf and Welling – ICLR 2017, etc.).

Quality – The experiments need more work and editing as mentioned in the comments.
Clarity – The framework is fairly clearly presented, however the English needs significant improvement.
Originality – The paper is a nice application of machine learning and neural nets to data quality assessment, and the forecasting application is relevant and challenging. However the paper mostly provides a framework that relies on existing work.
Significance – While the paper could be relevant to data quality applications by introducing advanced machine learning techniques, it has limited reach outside the field. Maybe publish it in a data quality conference/journal.
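Regarding points 1, 3 and 4, the construction in question is just a Gaussian kernel on station features with weak edges zeroed out; a minimal sketch of how I would expect it to be computed, with the feature standardization asked about in point 4 added explicitly, is below. This is my reconstruction, not the authors' code, and the bandwidth parameterization is an assumption.

import numpy as np

def kernel_weights(features, gamma=0.2, threshold=0.9):
    """Gaussian kernel adjacency between stations, with weak edges removed.

    features : (n_stations, n_features) array (latitude, longitude, vegetation
    fraction, ...). The z-score standardization is the normalization the review
    asks about; without it, large-scale features dominate the distances.
    """
    z = (features - features.mean(0)) / (features.std(0) + 1e-8)
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq_dists)
    W[W < threshold] = 0.0          # the abrupt 0.9 cut-off questioned in point 1
    np.fill_diagonal(W, 0.0)
    return W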
iclr_2018_SySaJ0xCZ
Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.
This paper proposes a variant of neural architecture search. It uses established work on network morphisms as a basis for defining a search space. Experiments search for effective CNN architectures for the CIFAR image classification task. Positives: (1) The approach is straightforward to implement and trains networks in a reasonable amount of time. (2) As an advantage over prior work, this approach integrates architectural evolution with the training procedure. Networks are incrementally grown; child networks are initialized with learned parameters from their parents. This eliminates the need to restart training when making an architectural change, and drastically speeds the search. Negatives: (1) The state-of-the-art CNN architectures are not mysterious or difficult to find, despite the paper's characterization of them being so. Indeed, ResNet and DenseNet designs are both guided by extremely simple principles: stack a series of convolutional layers, pool occasionally, and use some form of skip-connection throughout. The need for architectural search is unclear. (2) The proposed search space is boring. As described in Section 4, the possible evolutionary changes are limited to deepening the network, widening the network, and adding a skip connection. But these are precisely the design aspects that have been well-explored by human trial and error and for which good rules of thumb are already available. (3) As a consequence of (1) and (2), the result is essentially rigged. Since only depth, width, and skip connections are considered, the end network must end up looking like a ResNet or DenseNet, but with some connections pruned. There is no way to discover a network outside of the principled design space articulated in point (1) above. Indeed, the discovered network diagrams (Figures 4 and 5) fall in this space. (4) Performance is worse than the best hand-designed baselines. One would hope that, even if the search space is limited, the discovered networks might be more efficient or higher performing in comparison to the human designs which fall within that same space. However, the results in Tables 3 and 4 show this not to be the case. The best human designs outperform the evolved networks. Moreover, the evolved networks are woefully inefficient in terms of parameter count. Together, these negatives imply the proposed approach is not yet at the point of being useful in practice. I think further work is required (perhaps expanding the search space) to resolve the current limitations of automated architecture search. Misc: Tables 3 and 4 would be easier to parse if resources were simply reported in terms of total GPU hours.
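For readers unfamiliar with the network morphisms the review mentions (growing a child network while reusing the parent's learned parameters), the following is a generic function-preserving "widen" operator in the spirit of Net2WiderNet; it is an illustrative sketch with made-up shapes, not the paper's exact operator.

```python
# A generic function-preserving "widen" morphism for a fully connected layer.
# New units are copies of randomly chosen existing units; the outgoing weights
# of each replicated unit are split by its replication count, so the widened
# network computes exactly the same function (this also holds with an
# elementwise nonlinearity on the hidden layer, since copies share activations).
import numpy as np

def widen_dense(W_in, b_in, W_out, extra_units, rng=np.random.default_rng(0)):
    n_hidden = W_in.shape[1]
    idx = rng.integers(0, n_hidden, size=extra_units)          # units to copy
    counts = np.bincount(idx, minlength=n_hidden) + 1          # replication counts

    W_in_new = np.concatenate([W_in, W_in[:, idx]], axis=1)
    b_in_new = np.concatenate([b_in, b_in[idx]])
    W_out_scaled = W_out / counts[:, None]                     # split outgoing weights
    W_out_new = np.concatenate([W_out_scaled, W_out_scaled[idx, :]], axis=0)
    return W_in_new, b_in_new, W_out_new

# Sanity check: the output is unchanged after widening.
r = np.random.default_rng(2)
W1, b1, W2 = r.normal(size=(8, 16)), r.normal(size=(16,)), r.normal(size=(16, 4))
x = np.random.default_rng(1).normal(size=(5, 8))
y_before = (x @ W1 + b1) @ W2
W1n, b1n, W2n = widen_dense(W1, b1, W2, extra_units=6)
y_after = (x @ W1n + b1n) @ W2n
print(np.allclose(y_before, y_after))  # True
```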
iclr_2018_BJIgi_eCZ
FUSIONNET: FUSING VIA FULLY-AWARE ATTENTION WITH APPLICATION TO MACHINE COMPREHENSION This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "history of word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it identifies an attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.
(Score before author revision: 4) (Score after author revision: 7) I think the authors have taken both the feedback of reviewers as well as anonymous commenters thoroughly into account, running several ablations as well as reporting nice results on an entirely new dataset (MultiNLI) where they show how their multi level fusion mechanism improves a baseline significantly. I think this is nice since it shows how their mechanism helps on two different tasks (question answering and natural language inference). Therefore I would now support accepting this paper. ------------(Original review below) ----------------------- The authors present an enhancement to the attention mechanism called "multi-level fusion" that they then incorporate into a reading comprehension system. It basically takes into account a richer context of the word at different levels in the neural net to compute various attention scores. i.e. the authors form a vector "HoW" (called history of the word), that is defined as a concatenation of several vectors: HoW_i = [g_i, c_i, h_i^l, h_i^h] where g_i = glove embeddings, c_i = COVE embeddings (McCann et al. 2017), and h_i^l and h_i^h are different LSTM states for that word. The attention score is then a function of these concatenated vectors i.e. \alpha_{ij} = \exp(S(HoW_i^C, HoW_j^Q)) Results on SQuAD show a small gain in accuracy (75.7->76.0 Exact Match). The gains on the adversarial set are larger but that is because some of the higher performing, more recent baselines don't seem to have adversarial numbers. The authors also compare various attention functions (Table 5), showing that a particular one (Symmetric + ReLU) works the best. Comments: -I feel overall the contribution is not very novel. The general neural architecture that the authors propose in Section 3 is generally quite similar to the large number of neural architectures developed for this dataset (e.g. some combination of attention between question/context and LSTMs over question/context). The only novelty is these "HoW" inputs to the extra attention mechanism that takes a richer word representation into account. -I feel the model seems overly complicated for the small gain (i.e. 75.7->76.0 Exact Match), especially on a relatively exhausted dataset (SQuAD) that is known to have lots of peculiarities (see anonymous comment below). It is possible the gains just come from having more parameters. -The authors (on page 6) claim that by running attention multiple times with different parameters but different inputs (i.e. \alpha_{ij}^l, \alpha_{ij}^h, \alpha_{ij}^u) it will learn to attend to "different regions for different level". However, there is nothing enforcing this and the gains just probably come from having more parameters/complexity.
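To make the scoring function described above concrete, here is a schematic NumPy sketch of attention over concatenated history-of-word vectors using a symmetric ReLU score of the general form S(x, y) = relu(Ux)^T D relu(Uy); the dimensions and initialization are hypothetical and this is not the authors' implementation.

```python
# Illustrative sketch of "fully-aware" attention over concatenated
# history-of-word vectors [glove, cove, low-level LSTM state, high-level LSTM state].
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d_glove, d_cove, d_h = 300, 600, 125
d_how = d_glove + d_cove + 2 * d_h           # [g_i, c_i, h_i^low, h_i^high]
n_ctx, n_q, k = 7, 4, 64

HoW_C = rng.normal(size=(n_ctx, d_how))      # context "history of word" vectors
HoW_Q = rng.normal(size=(n_q, d_how))        # question "history of word" vectors

U = rng.normal(size=(k, d_how)) * 0.02
D = np.diag(rng.uniform(0.5, 1.5, size=k))

def score(A, B):
    Ap = np.maximum(A @ U.T, 0.0)            # relu(U x) for each row
    Bp = np.maximum(B @ U.T, 0.0)
    return Ap @ D @ Bp.T                     # S(x_i, y_j) for all pairs

alpha = softmax(score(HoW_C, HoW_Q), axis=1) # attention of each context word over question words
attended = alpha @ HoW_Q                     # fused question information per context word
print(alpha.shape, attended.shape)           # (7, 4) (7, 1150)
```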
iclr_2018_ryA-jdlA-
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word analogy problem for the case when all terms involved in the problem are nouns. My approach solves the word analogy problem using a small fraction of the data that is typically used to train word vectors. My results demonstrate the ambiguities associated with learning the relationship between a word pair, and the role of the training dataset in determining the relationship which gets most highlighted. Furthermore, my results show that the ability of a model to accurately solve the word analogy problem may not be indicative of a model's ability to learn the relationship between a word pair the way a human does.
This paper proposes a new method for solving the analogy task, which can potentially provide some insight as to why word2vec recovers word analogies. In my view, there are three main issues with this paper: (1) the assumptions it makes about our understanding of the analogy phenomenon; (2) the authors' understanding of the proposed method, what it models, and its relation to prior art; (3) the very selective subset of analogies that the author used for evaluation. ASSUMPTIONS The author assumes that the community does not understand why word embedding methods such as word2vec recover analogies. I believe that, in fact, we do have a good understanding of this phenomenon. Levy & Goldberg [1] showed that optimizing x for argmax(cos(x, A - B + C)) is equivalent to optimizing x for argmax(cos(x, A) - cos(x, B) + cos(x, C)) which means that one can interpret this objective as searching for a word x that is similar to A, similar to C, and dis-similar to B. Linzen [2] cemented this explanation by removing the negative term (-cos(x, B)) and showing that for a wide variety of analogies, the method still works. Drozd et al [3] and Rogers et al [4] also argue that the original datasets used by Mikolov et al were too easy because they focused on encyclopedic facts, and expand these datasets to other non-encyclopedic relations, which are significantly more difficult to solve using simple vector arithmetic. In other words, we know why word2vec is able to solve analogies via vector arithmetic: because many analogies (like those in Mikolov et al's original dataset) are "gameable", and can be solved by finding a term that is similar to A and similar to C at the same time. For example, if A="woman" and C="king", then x="queen" fits the bill (a toy sketch of this vector-arithmetic procedure is given after the reference list below). METHOD From what I can understand, the proposed method models the 3-way co-occurrence between A, B, and a context noun (let's call it R). Leveraging the distribution of (X, R, Y) for solving problems in lexical semantics has been studied quite a bit in the past, e.g. Latent Relational Analysis [5] and even Hearst patterns. I think the current description overlooks this major deviation from word2vec and other distributional methods, which only model the 2-way co-occurrence (X, R). This is a much more profound difference than just filtering non-nouns. I think the proposed method should be redescribed in these terms, and compared to other work that modeled 3-way co-occurrences. EVALUATION DATA The author evaluates their method on a subset of the original analogy task, which is very limited. I would like to see an evaluation on (A) the original two datasets of Mikolov et al (without non-nouns), and (B) the larger datasets provided by Drozd et al [3] and Rogers et al [4]. In addition, since I think the analogy phenomenon is well understood, I would like to see some demonstration that this method has added value beyond the analogy benchmark. MISCELLANEOUS COMMENTS * The author does not state the important fact that when searching for the closest x to A - B + C, the search omits A, B, and C. It is often the case that the result is A or C without this omission. * The paper is partially de-anonymized (e.g. links and acknowledgements). * One of the problems with modeling 3-way co-occurrences (as opposed to 2-way co-occurrences) is that they are much sparser. I think this is a more precise explanation for why the currency relation is particularly hard to capture with this method.
[1] http://www.aclweb.org/anthology/W14-1618 [2] http://anthology.aclweb.org/W16-2503 [3] http://aclweb.org/anthology/C/C16/C16-1332.pdf [4] http://www.aclweb.org/anthology/S17-1017 [5] https://arxiv.org/pdf/cs/0508053.pdf
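A toy sketch of the vector-arithmetic analogy procedure described in the review, i.e. argmax over cos(x, A - B + C) with A, B and C excluded from the candidates; the embeddings here are random placeholders, so the printed answer is only meaningful with real word vectors.

```python
# Minimal 3CosAdd-style analogy solver over toy random embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["man", "woman", "king", "queen", "apple", "paris"]
E = rng.normal(size=(len(vocab), 50))
E /= np.linalg.norm(E, axis=1, keepdims=True)        # unit-normalize rows
idx = {w: i for i, w in enumerate(vocab)}

def analogy(A, B, C):
    """Return argmax_x cos(x, A - B + C), excluding A, B and C themselves."""
    target = E[idx[A]] - E[idx[B]] + E[idx[C]]
    target /= np.linalg.norm(target)
    scores = E @ target
    for w in (A, B, C):                               # the exclusion the review highlights
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

# With real word vectors this should return "queen"; with random toy vectors
# the output is arbitrary.
print(analogy("woman", "man", "king"))
```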
iclr_2018_rJR2ylbRb
Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can also be used to inform machine learning on graphs. However, learning structural representations of nodes is a challenging unsupervised-learning task, which typically involves manually specifying and tailoring topological features for each node. Here we develop GRAPHWAVE, a method that represents each node's local network neighborhood via a low-dimensional embedding by leveraging spectral graph wavelet diffusion patterns. We prove that nodes with similar local network neighborhoods will have similar GRAPHWAVE embeddings even though these nodes may reside in very different parts of the network. Our method scales linearly with the number of edges and does not require any hand-tailoring of topological features. We evaluate performance on both synthetic and real-world datasets, obtaining improvements of up to 71% over state-of-the-art baselines.
The term "structural equivalence" is used incorrectly in the paper. From sociology, two nodes with the same position are in an equivalence relation. An equivalence, Q, is any relation that satisfies these three conditions: - Transitivity: (a,b), (b,c) ∈ Q ⇒ (a,c) ∈Q - Symmetry: (a, b) ∈ Q if and only if (b, a) ∈Q - Reflexivity: (a, a) ∈Q There are three deterministic equivalences: structural, automorphic, and regular. From Lorrain & White (1971), two nodes u and v are structurally equivalent if they have the same relationships to all other nodes. Exact structural equivalence is rare in real-world networks. From Borgatti, et al. (1992) and Sparrow (1993), two nodes u and v are automorphically equivalent if all the nodes can be relabeled to form an isomorphic graph with the labels of u and v interchanged. From Everett & Borgatti (1992), two nodes u and v are regularly equivalent if they are equally related to equivalent others. Parts of this statement are false: "A notable example of such approaches is RolX (Henderson et al., 2012), which aims to recover a soft-clustering of nodes into a predetermined number of K distinct roles using recursive feature extraction (Henderson et al., 2011)." RolX (as described in KDD 2012 paper) uses MDL to automatically determine the number of roles. As indicated above, this statement is also false: "We note that RolX requires the number of desired structural classes as input, ...". The paper does not discuss how the free parameter d (which represents the number of evenly spaced sample points) is chosen. This statement is misleading: "In particular, a small perturbation of the graph yields small perturbations of the eigenvalues." What is considered a small perturbation? One can delete an edge (seemingly a small perturbation) and change the eigenvalues of the Laplacian dramatically -- e.g., deleting an edge that increases the number of connected components. The barbell graph experiment seemed contrived. Why would one except such a graph to have 8 classes? Why not 3? One for cliques, one for the chain, and one for connectors of the clique to the chain. In Section 4.2, how many roles were selected for RolX? The paper states: "Furthermore, nodes from different graphs can be embedded into the same space and their structural roles can be compared across different graphs." Experiments were not conducted to see how the competing approaches such as RolX compare with GraphWave on transfer learning tasks. Gilpin et al (KDD 2013) extended RolX to incorporate sparsity and diversity constraints on the role space and showed that their approach is superior to RolX on measuring distances. This is applicable to experiments in Figure 4. I strongly recommend running experiments that test the predictive power of the roles found by GraphWave.
iclr_2018_Byt3oJ-0W
Published as a conference paper at ICLR 2018 LEARNING LATENT PERMUTATIONS WITH GUMBEL- SINKHORN NETWORKS Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data. Learning in such models is difficult, however, because exact marginalization over these combinatorial objects is intractable. In response, this paper introduces a collection of new methods for end-to-end learning in such models that approximate discrete maximum-weight matching using the continuous Sinkhorn operator. Sinkhorn operator is attractive because it functions as a simple, easy-to-implement analog of the softmax operator. With this, we can define the Gumbel-Sinkhorn method, an extension of the Gumbel-Softmax method (Jang et al., 2016;Maddison et al., 2016) to distributions over latent matchings. We demonstrate the effectiveness of our method by outperforming competitive baselines on a range of qualitatively different tasks: sorting numbers, solving jigsaw puzzles, and identifying neural signals in worms.
Quality: The paper is built on solid theoretical grounds and supplemented by experimental demonstrations. Specifically, the justification for using the Sinkhorn operator is given by theorem 1, with proof given in the appendix. Because the theoretical limit is unachievable, the authors propose to truncate the Sinkhorn operator at level $L$. The effect of the approximation for the truncation level $L$ as well as the effect of the temperature $\tau$ are demonstrated nicely through figures 1 and 2(a) (a small numerical sketch of this truncated operator is included after this review). The paper also presents a nice probabilistic approach to permutation learning, where the doubly stochastic matrix arises from the Gumbel matching distribution. Clarity: The paper has a good flow, starting out with the theoretical foundation, description of how to construct the network, followed by the probabilistic formulation. However, I found some of the notation used to be a bit confusing. 1. The notation $l$ appears in Section 2 to denote the number of iterations of the Sinkhorn operator. In Section 3, the notation $l$ appears as $g_l$, where in this case, it refers to the layers in the neural network. This led me to believe that there is one Sinkhorn operator for each layer of the neural network. But after reading the paper a few times, it seemed to me that the Sinkhorn operator is used only at the end, just before the final output step (the part where it says the truncation level was set to $L=20$ for all of the experiments confirmed this). If I'm correct in my understanding, perhaps different notation needs to be used for the layers in the NN and the Sinkhorn operator. Additionally, it would have been nice to see a figure of the entire network architecture, at least for one of the applications considered in the paper. 2. The distinction between $g$ and $g_l$ was also a bit unclear. Because the input to $M$ (and $S$) is a square matrix, the function $g$ seems to be carrying out the task of preparing the final output of the neural network into the input format accepted by the Sinkhorn operator. However, $g$ is stated as "the output of the computations involving $g_l$". I found this statement to be a bit unclear, as it did not really describe what $g$ does; of course my understanding may be incorrect so a clarification on this statement would be helpful. Originality: I think there is enough novelty to warrant publication. The paper does build on a set of previous works, in particular the Sinkhorn operator, which achieves continuous relaxation for permutation valued variables. However, the paper proposes how this operator can be used with standard neural network architectures for learning permutation valued latent variables. The probabilistic approach also seems novel. The applications are interesting; in particular, it is always nice to see a machine learning method applied to a unique application, in this case from computational neuroscience. Other comments: 1. What are the differences between this paper and the paper by Adams and Zemel (2011)? Adams and Zemel also seem to propose the Sinkhorn operator for neural networks. Although they focus only on the document ranking problem, it would be good to hear the authors' view on what differentiates their work from Adams and Zemel. 2. As pointed out in the paper, there is a concurrent work: DeepPermNet. A few comments regarding the difference between their work and this work would be helpful as well.
The methodology appears to be straightforward to implement using the existing software libraries, which should help increase its usability. The significance of the paper can greatly improve if the methodology is applied to other popular machine learning applications such as document ranking, image matching, DNA sequence alignment, etc. I wonder how difficult it is to extend this methodology to the bipartite matching problem with an uneven number of objects in each partition, which is the case for document ranking. And for problems such as image matching (e.g., matching landmark points), where each point is associated with a feature (e.g., SIFT), how would one formulate such a problem in this setting?
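For reference, a numerical sketch of the truncated Sinkhorn operator discussed in this review: exponentiate a square matrix scaled by a temperature tau and alternate row and column normalizations for L iterations; small tau pushes the result towards a permutation matrix. The choice of tau below is arbitrary, and L = 20 is only quoted from the review.

```python
# Truncated Sinkhorn operator applied to exp(X / tau).
import numpy as np

def sinkhorn(X, tau=1.0, L=20):
    P = np.exp(X / tau)
    for _ in range(L):
        P = P / P.sum(axis=1, keepdims=True)   # row normalization
        P = P / P.sum(axis=0, keepdims=True)   # column normalization
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
for tau in (1.0, 0.1):
    P = sinkhorn(X, tau=tau)
    print(f"tau={tau}: max entry per row =", P.max(axis=1).round(2))
```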
iclr_2018_HkTEFfZRb
Published as a conference paper at ICLR 2018 ATTACKING BINARIZED NEURAL NETWORKS Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.
This paper starts by gently going over the concept of adversarial attacks on neural networks (black box vs white box, reactive vs proactive, transfer of attacks, linearity hypothesis), as well as low-precision nets and their deployement advantages. Adversarial examples are introduced as a norm-measurable deviation from natural inputs to a system. We are reminded of adversarial training, and of the fact that binarized nets are highly non linear due to the nature of their weights and activations. This paper then proposes to examine the robustness of binarized neural networks to adversarial attacks on MNIST and CIFAR-10. The quantization scheme used here is v32 Conv2D -> ReLU -> BNorm -> sign -> bit Conv2D -> ReLU -> Scalar -> BNorm -> sign, but sign is really done with the stochastic quantization method of Courbariaux et al, even at test time (in order to make it more robust). What the experimental results show: - High capacity BNNs are usually more robust to white-box attacks than normal networks, probably because the gradient information that an adversary would use becomes very poor as training progresses. - BNNs are harder to properly train with adversarial examples because of the polarized weight distribution that they induce - Against black-box attacks, it seems there is little difference between NNs and BNNs. Some comments and questions: - In figure 1 I'm not sure what "Scalar" refers to, and it is not explained in the paper (nor could I find it explained in Papernot et al 2017a). - Do you adopt the "Shift based Batch Normalizing Transform" of Courbariaux et al? If not, why? - It might be worth at least _quickly_ explaining what the 'Carlini-Wagner L2 from CleverHans' is rather than simply offering a citation with no explanation. Idem for 'smooth substitute model black-box misclassificiation attack'. We often assume our readers know most of what we know, but I find this is often not the case and can discourage the many newcomers of our field. - "Running these attacks to 1000 iterations [...], therefore we believe this targeted attack represents a fairly substantial level of effort on behalf of the adversary." while true for us researchers, computational difficulty will not be a criterion to stop for state actor or multinational tech companies, unless it can be proven that e.g. the number of iterations needs to grow exponentially (or in some other unreasonable way) in order to get reliable attacks. - "MLP with binary units 3 better", 'as in Fig.' is missing before '3', or something of the sort. - You say "We postpone a formal explanation of this outlier for the discussion." but really you're explaining it in the next paragraph (unless there's also another explanation I'm missing). Training BNNs with adversarial examples is hard. - You compare stochastic BNNs with deterministic NNs, but not with stochastic NNs. What do you think would happen? Some of arguments that you make in favour of BNNs could also maybe be applied to stochastic NNs. My opinions on this paper: - Novelty: it certainly seems to be the first time someone has tackled BNNs and adversarial examples - Relevance: BNNs can be a huge deal when deploying applications, it makes sense to study their vulnerabilities - Ease of understanding: To me the paper was mostly easy to understand, yet, considering there is no page limit in this conference, I would have buffed up the appendix, e.g. to include more details about the attacks used and how various hyperparameters affect things. 
- Clarity: I feel like some details are lacking that would hinder reproducing and extending the work presented here. Mostly, it isn't always clear why the chosen procedures and hyperparameters were chosen (wrt the model being a BNN) - Method: I'm concerned by the use of MNIST to study such a problem. MNIST is almost linearly separable, has few examples, and given the current computational landscape, much better alternatives are available (SVHN for example if you wish to stay in the digits domain). Concerning black-box attacks, it seems that BNNs are less beneficial in a way; trying more types of attacks and/or delving a bit deeper into that would have been nice. The CIFAR-10 results are barely discussed. Overall I think this paper is interesting and relevant to ICLR. It could have stronger results both in terms of the datasets used and the variety of attacks tested, as well as some more details concerning how to perform adversarial training with BNNs (or why that's not a good idea).
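The stochastic binarization of Courbariaux et al. referred to in this review can be sketched as follows: a weight w is mapped to +1 with probability hard_sigmoid(w) and to -1 otherwise. This is an illustrative NumPy version, not the authors' code.

```python
# Stochastic binarization in the style of BinaryConnect.
import numpy as np

def hard_sigmoid(x):
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(w, rng):
    p = hard_sigmoid(w)                        # probability of mapping to +1
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)

rng = np.random.default_rng(0)
w = np.linspace(-2, 2, 9)
samples = np.stack([stochastic_binarize(w, rng) for _ in range(10000)])
print(np.round(samples.mean(axis=0), 2))       # empirical mean tracks clip(w, -1, 1)
```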
iclr_2018_BkeqO7x0-
Published as a conference paper at ICLR 2018 UNSUPERVISED CIPHER CRACKING USING DISCRETE GANS This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenère ciphers to a high degree of fidelity and for vocabularies much larger than previously achieved. We present how CycleGAN can be made compatible with discrete data and train in a stable way. We then prove that the technique used in CipherGAN avoids the common problem of uninformative discrimination associated with GANs applied to discrete data.
SUMMARY The paper considers the problem of using cycle GANs to decipher text encrypted with historical ciphers. It also presents some theory to address the problem that discriminating between the discrete data and continuous prediction is too simple. The model proposed is a variant of the cycle GAN in which, in addition, embeddings helping the Generator are learned for all the values of the discrete variables. The log loss of the GAN is replaced by a quadratic loss and a regularization of the Jacobian of the discriminator. Experiments show that the method is very effective. REVIEW The paper considers an interesting and fairly original problem and the overall discussion of ciphers is quite nice. Unfortunately, my understanding is that the theory proposed in section 2 does not correspond to the scheme used in the experiments (contrary to what the conclusion suggests and to what the discussion at the end of section 3 says, namely that using embeddings is assumed to have an effect equivalent to using the methodology considered in the theoretical part). Another important concern is with the proof: there seems to be an unmotivated additional assumption that appears in the middle of the proof of Proposition 1 + some steps need to be clarified (see comment 16 below). The experiments do not have any simple baseline, which is somewhat unfortunate. DETAILED COMMENTS: 1- The paper makes a few bold and debatable statements: line 9 of section 1 "Such hand-crafted features have fallen out of favor (Goodfellow et al., 2016) as a result of their demonstrated inferiority to features learned directly from data in end-to-end learning frameworks such as neural networks". This is certainly an overstatement: although it might be true for specific types of inputs, it is not universally true; most deep architectures rely on a human-in-the-loop and there are a number of areas where human-crafted features are arguably still relevant, if only to specify what the input of a deep network is: there are many domains where the notion of raw data does not make sense, and, when it does, it is usually associated with a sensing device that has been designed by a human and which implicitly imposes what the data is, based on human expertise. 2- In the last paragraph of the introduction, the paper says that previous work has only worked on vocabularies of 26 characters while the current paper tackles word level ciphers with 200 words. But, isn't this just a matter of scalability and only possible with very large amounts of text? Is it really because of an intrinsic limitation or lack of scalability of previous approaches or just because the authors of the corresponding papers did not care to present larger scale experiments? 3- The discussion at the top of page 5 is difficult to follow. What do you mean when you say "this motivates the benefits of having strong curvature globally, as opposed to linearly between etc"? Which curvature are we talking about? And what does the "as opposed to linearly" mean? Should we understand "as opposed to having curvature linearly interpolated between etc" or "as opposed to having a linear function"? Please clarify. 4- In the same paragraph: what does "a region that has not seen the Jacobian norm applied to it" mean? How is a norm applied to a region? I guess that what you mean is that the generator G might create samples in a part of the space where the function F has not yet been learned and is essentially close to 0. Is this what you mean?
5- I do not understand why the paper introduces WGAN since in the end it does not use it but uses a quadratic loss, introduced in the first display of section 4.3. 6- The paper makes a theoretical contribution which supports replacing the sample y by a sample drawn from a region around y. But it seems that this is not used in the experiments and that the authors consider that the introduction of the embedding is a substitution for this. Indeed, in the last paragraph of section 3.1, the paper says "we make the assumption that the training of the embedding vectors approximates random sampling similar to what is described in Proposition 1". This does not make any sense to me because the embedding vectors map each y deterministically to a single point, and so the distribution on the corresponding vectors is still a fixed discrete distribution. This gives me the impression that the proposed theory does not match what is used in the experiments. (The last sentence of section 3.1, which comments on this and could perhaps clarify the situation, is ill-formed with two verbs.) 7- In the definitions: "A discriminator is said to perform uninformative discrimination" etc. -> It seems that the choice of the word uninformative would be misleading: an uninformative discrimination would be a discrimination that completely fails, while what the condition is saying is that it cannot perform perfect discrimination. I would thus suggest calling this "imperfect discrimination". 8- It seems that the same embedding is used in X space and in Y space (from equations 6 and 7). Is there any reason for that? It would seem more natural to me to introduce two different embeddings since the objects are a priori different... Actually I don't understand how the embeddings can be the same in the Vigenère cipher case, since time is taken into account on one side. 9- On the 5th line after equation (7), the paper says "the embeddings... are trained to minimize L_GAN and L_cyc, meaning... and are easy to discriminate" -> This last part of the sentence seems wrong to me. The discriminator is trying to maximize L_GAN, and so minimizing w.r.t. the embedding is precisely trying to prevent the discriminator from telling apart too easily the true elements from the estimated ones. In fact the regularization of the Jacobian, which will prevent the discriminator from varying too quickly in space, is more likely to explain the fact that the discrimination is not too easy to do between the true and mapped embeddings. This might be connected to the discussion at the top of page 5. Since there are no experiments with alpha different from the default value of 10, this is difficult to assess. 10- The Vigenère cipher is explained again at the end of section 4.2 when it has already been presented in section 1.1. 11- Concerning the results in Table 2: I do not see why it would not be possible to compare the performance of the method with classical frequency analysis, at least for the character case. 12- At the beginning of section 4.3, the text says that the log loss was replaced with the quadratic loss, but without giving any reason. Could you explain why? 13- The only comparison of results with and without embeddings is presented in the curves of figure 3, for Brown-W with a vocabulary of 200 words. In that case it helps. Could the authors systematically report results for all cases? (I guess this might however be the only hard case...)
14- It would be useful to have a brief reminder of the architecture of the neural network (right now the reader is just referred to Zhu et al., 2017): how many layers, how many convolutional layers, etc. The same comment applies to the way the position of the letter/word in the text is encoded in a feature that is provided as input to the neural network: it would be nice if the paper could provide a few details here and be more self-contained. (The fact that the engineering of the time feature can "dramatically" improve the performance of the network should be an argument to convince the authors that hand-crafted features have not fallen out of favor completely yet...) 15- I disagree with the statement made in the conclusion that the proposed work "empirically confirms [...] that the use of continuous relaxation of discrete variable facilitates [...] and prevents [...]" because for me the proposed implementation does not use at all the theoretical idea of continuous relaxation proposed in the paper, unless there is a major point that I am missing. 16- I have two issues with the proof in the appendix. a) After the first display of the last page the paper makes an additional assumption which is not announced in the statement of the theorem, which is that two specific inequalities hold... Unless I am mistaken this assumption is never proven (later or earlier). Given that these are just "the right inequalities to get the proof to go through" and given that there is no explanation for why this assumption is reasonable, to me this invalidates the proof. The step of going from G(S_y) to S_(G(y)) seems delicate... b) If we accept these inequalities, the determinant of the Jacobian (the notation is not defined) of F at (x_bar) disappears from the equations, as if it could be assumed to be greater than one. If this is indeed the case, please provide a justification of this step. 17- A way to address the issue of trivial discrimination in GANs with discrete data has been proposed in Luc, P., Couprie, C., Chintala, S., & Verbeek, J. (2016). Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408. The authors should probably reference this paper. 18- Clarification of the Jacobian regularization: in equation (3), the Jacobian computed seems to be w.r.t. D composed with F while in equation (8) it is only the Jacobian of D. Which equation is the correct one? TYPOS: Proposition 1: the if-then statement is broken into two sentences separated by a full point and a carriage return. sec. 4.3 line 10 we use a cycle loss *with a regularization coefficient* lambda=1 (a piece of the sentence is missing) sec. 4.3 lines 12-13 the learning rates given are the same at startup and after "warming up"... In the appendix: 3rd line of proof of prop 1: I don't understand "countably infinite finite sequences of vectors lying in the vertices of the simplex" -> what is countably infinite here? The vertices?
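For reference, a minimal character-level implementation of the shift and Vigenère ciphers that the paper attacks; the word-level variants in the paper replace the 26-letter alphabet with a vocabulary of token ids, and this sketch only illustrates the ciphers themselves.

```python
# Shift and Vigenère ciphers over the lowercase Latin alphabet.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * shift) % 26])
    return "".join(out)

def shift_cipher(text, k, decrypt=False):
    return vigenere(text, ALPHABET[k % 26], decrypt)   # shift = Vigenère with a 1-letter key

ct = vigenere("attackatdawn", "lemon")
print(ct)                                   # "lxfopvefrnhr"
print(vigenere(ct, "lemon", decrypt=True))  # "attackatdawn"
print(shift_cipher("attack", 3))            # "dwwdfn" (Caesar shift by 3)
```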
iclr_2018_r1cLblgCZ
Recurrent auto-encoder model can summarise sequential data through an encoder structure into a fixed-length vector and then reconstruct into its original sequential form through the decoder structure. The summarised information can be used to represent time series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction. The fixedlength vector can therefore represent features only in the selected dimensions. In addition, we propose using rolling fixed window approach to generate samples. The change of time series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed through additional visualisation and unsupervised clustering techniques. This proposed method can be applied in large-scale industrial processes for sensors signal analysis purpose where clusters of the vector representations can be used to reflect the operating states of selected aspects of the industrial system.
The paper describes a sequence to sequence auto-encoder model which is used to learn sequence representations. The authors show that for their application, better performance is obtained when the network is only trained to reconstruct a subset of the data measurements. The paper also presents some visualizations of the similarity structure of the learned representations and proposes a window-based method for processing the data. According to the paper, the experiments are done using a data set which is obtained from measurements of an industrial production process. Figure 2 indicates that reconstructing fewer dimensions of this dataset leads to lower MSE scores. I don’t see how this is showing anything besides the obvious fact that reconstructing fewer dimensions is an easier task than reconstructing all of them. The only conclusion I can draw from the visual analysis is that the context vectors are more similar to each other when they are obtained from time steps in the data stream which are close to each other. Since the paper doesn’t describe much about the privately owned data at all, there is no possibility of replicating the work. The paper doesn’t frame the work in prior research at all and the six papers it cites are only referred to in the context of describing the architecture. I found it very hard to distil what the main contribution of this work was according to the paper. There were also not many details about the precise architecture used. It is implied that GRU networks were used but the text doesn’t actually state this explicitly. By saying so little about the data that was used, it was also not clear what the temporal correlations of the context vectors are supposed to tell us. The paper describes how existing methods are applied to a specific data set. The benefit of only reconstructing a subset of the input dimensions seems very data specific to me and I find it hard to consider this a novel idea by itself. Presenting sequential data in a windowed format is a standard procedure and not a new idea either. All in all I don't think that the paper presents any new ideas or interesting results. Pros: * The visualizations look nice. Cons: * It is not clear what the main contribution is. * Very little information about the data. * No clear experiments from which conclusions can be drawn. * No new ideas. * Not well rooted in prior work.
iclr_2018_rJWrK9lAb
Generative Adversarial Networks (GANs) learn a generative model by playing an adversarial game between a generator and an auxiliary discriminator, which classifies data samples vs. generated ones. However, it does not explicitly model feature co-occurrences in samples. In this paper, we propose a novel Autoregressive Generative Adversarial Network (ARGAN), that models the latent distribution of data using an autoregressive model, rather than relying on binary classification of samples into data/generated categories. In this way, feature co-occurrences in samples can be more efficiently captured. Our model was evaluated on two widely used datasets: CIFAR-10 and STL-10. Its performance is competitive with respect to other GAN models both quantitatively and qualitatively.
This paper proposes an alternative GAN formulation that replaces the standard binary classification task in the discriminator with an autoregressive model that attempts to capture discriminative feature dependencies on the true data samples. Summary assessment: The paper presents a novel perspective on GANs and an interesting conjecture regarding the failure of GANs to capture global consistency. However, the experiments do not directly support this conjecture. In addition, both qualitative and quantitative results do not provide significant evidence of the value of this technique over and above the established methods in the literature. The central motivation of the method proposed in the paper is a conjecture that the lack of global consistency in GAN-generated samples is due to the binary classification formulation of the discriminator. While this is an interesting conjecture, I am somewhat unconvinced that this is indeed the cause of the problem. First, I would argue that other high-performing auto-regressive models such as PixelRNN and PixelCNN also seem to lack global consistency. This observation would seem to violate this conjecture. More importantly, the paper does not show any direct empirical evidence in support of this conjecture. The authors make a very interesting observation in their description of the proposed approach. In discussing an initial variant of the model (Eqns. (5) and (6) and text immediately below), the authors state that attempting to maximize the negative log-likelihood of the auto-regressive model on the generated samples results in unstable training. I would like to see more discussion of this point as it could have some bearing on the difficulty of GANs in modeling sequential data in general. Does the failure occur because the auto-regressive discriminator is able to "overfit" the generated samples? As a result of the observed failure of the formulation given in Eqns. (5) and (6), the authors propose an alternative formulation that explicitly removes the negative likelihood maximization for generated samples. As a result, the only objective for the auto-regressive model is an attempt to maximize the log-likelihood of the true data. The authors suggest that this should be sufficient to provide a reliable training signal for the generator. It would be useful if the authors showed a representation of these features (perhaps via T-SNE) for both true data and generated samples. Empirical results: The authors' experiments show samples (qualitative comparison) and inception scores (quantitative comparison) for 3 variants of the proposed model and compare these to methods in the literature. The comparisons show the proposed model performs well, but does not exceed the performance of many of the existing methods in the literature. Also, I fail to observe significantly more global consistency for these samples compared to samples of other SOTA GAN models in the literature. Again, there is no attempt made by the authors to make this direct comparison of global consistency either qualitatively or quantitatively. Minor comment: I did not see where PARGAN was defined. Was this the combination of the Auto-regressive GAN with Patch-GAN?
iclr_2018_HJ1HFlZAb
Generative networks are known to be difficult to assess. Recent works on generative models, especially on generative adversarial networks, produce nice samples of varied categories of images. But the validation of their quality is highly dependent on the method used. A good generator should generate data which contain meaningful and varied information and that fit the distribution of a dataset. This paper presents a new method to assess a generator. Our approach is based on training a classifier with a mixture of real and generated samples. We train a generative model over a labeled training set, then we use this generative model to sample new data points that we mix with the original training data. This mixture of real and generated data is thus used to train a classifier which is afterwards tested on a given labeled test dataset. We compare this result with the score of the same classifier trained on the real training data mixed with noise. By computing the classifier's accuracy with different ratios of samples from both distributions (real and generated) we are able to estimate if the generator successfully fits and is able to generalize the distribution of the dataset. Our experiments compare different generators from the VAE and GAN framework on MNIST and fashion MNIST dataset.
The authors propose to evaluate how well generative models fit the training set by analysing their data augmentation capacity, namely the benefit brought by training classifiers on mixtures of real/generated data, compared to training on real data only. Although the idea of exploiting generative models to perform data augmentation is interesting, using it as an evaluation metric does not constitute an innovative enough contribution. In addition, there is a fundamental matter which the paper does not address: when evaluating a generative model, one should always ask oneself what purpose the data is generated for. If the aim is to have realistic samples, a visual Turing test is probably the best metric. If instead the purpose is to exploit the generated data for classification, then an evaluation of the impact of artificial data on training is a good option. PROS: The idea is interesting. CONS: 1. The authors did not relate the proposed evaluation metric to the other metrics cited (e.g., the inception score, or a visual Turing test, as discussed in the introduction). It would be interesting to understand how the different metrics relate. Moreover, the new metric is introduced with the following motivation “[visual Turing test and Inception Score] do not indicate if the generator collapses to a particular mode of the data distribution”. The mode collapse issue is never discussed elsewhere in the paper. 2. Only two datasets were considered, both extremely simple: generating MNIST digits is nearly a toy task nowadays. Different works on GANs make use of CIFAR-10 and SVHN, since they entail more variability: those two could be a good start. 3. The authors should clarify if the method is specifically designed for GANs and VAEs. If not, section 2.1 should contain several other works (as in Table 1). 4. One of the main statements of the paper “Our approach imposes a high entropy on P(Y) and gives unbiased indicator about entropy of both P(Y|X) and P(X|Y)” is never proved, nor discussed. 5. Equation 2 (the proposed metric) is not convincing: taking the maximum over tau implies training many models with different fractions of generated data, which is expensive. Further, how many tau's should one evaluate? In order to evaluate a generative model, one should test on the generated data only (tau=1), I believe. In the worst case, the generator experiences mode collapse and performs badly. Alternatively, it can memorize the training data and perform as well as the baseline model. If it does actual data augmentation, it should perform better. 6. The protocol of section 3 looks inconsistent with the aim of the work, which is to evaluate the data augmentation capability of generative models. In fact, the limit of training with a fixed dataset is that the model ‘sees’ the data multiple times across epochs, with the risk of memorizing. In the proposed protocol, the model ‘sees’ the generated data D_gen (which is fixed before training) multiple times across epochs. This clearly does not allow one to fully evaluate the capability of the generative model to generate newer and newer samples with significant variability. Minor: Section 2.2 might be more readable if divided in two (exploitation and evaluation).
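A compact sketch of the mixture-training protocol this review critiques in point 5: train the same classifier on mixtures of real and generated data for several values of tau and compare against the all-real baseline. Synthetic Gaussian blobs stand in for the real data, and a deliberately mode-collapsed "generator" stands in for a trained generative model; nothing here reproduces the paper's setup.

```python
# Toy version of the real/generated mixture-training evaluation loop.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

centers = [[0, 0], [4, 0], [0, 4], [4, 4]]
X_real, y_real = make_blobs(n_samples=2000, centers=centers, random_state=0)
X_test, y_test = make_blobs(n_samples=1000, centers=centers, random_state=1)

# Hypothetical "generator" samples: noisy copies of one class only (mode collapse).
rng = np.random.default_rng(0)
X_gen = X_real[y_real == 0] + rng.normal(scale=0.5, size=X_real[y_real == 0].shape)
y_gen = np.zeros(len(X_gen), dtype=int)

for tau in (0.0, 0.25, 0.5, 0.75):            # fraction of generated data in the mix
    n = len(X_real)
    n_gen = int(tau * n)
    take_real = rng.choice(n, n - n_gen, replace=False)
    take_gen = rng.choice(len(X_gen), n_gen, replace=True)
    X_mix = np.vstack([X_real[take_real], X_gen[take_gen]])
    y_mix = np.concatenate([y_real[take_real], y_gen[take_gen]])
    acc = LogisticRegression(max_iter=1000).fit(X_mix, y_mix).score(X_test, y_test)
    print(f"tau={tau:.2f}  test accuracy={acc:.3f}")
```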
iclr_2018_H1Ww66x0-
Lifelong multitask learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance. This paper addresses continuous lifelong multitask learning by jointly re-estimating the inter-task relations (output kernel) and the per-task model parameters at each round, assuming data arrives in a streaming fashion. A new algorithm called Online Output Kernel Learning Algorithm (OOKLA) for lifelong learning setting is proposed. To avoid the memory explosion, a robust budget-limited versions of the proposed algorithm is introduced, which efficiently utilizes the relationships between the tasks to bound the total number of representative examples in the support set. In addition, a two-stage budgeted scheme for efficiently tackling the task-specific budget constraints in lifelong learning is proposed. Empirical results over three datasets indicate superior AUC performance for OOKLA and its budget-limited cousins over strong baselines.
CONTRIBUTION The main contribution of the paper is not clearly stated. To the reviewer, it seems “life-long learning” is the same as “online learning”. However, the whole paper does not define what “life-long learning” is. The limited budget scheme is well established in the literature. 1. J. Hu, H. Yang, I. King, M. R. Lyu, and A. M.-C. So. Kernelized online imbalanced learning with fixed budgets. In AAAI, Austin, Texas, USA, Jan. 25-30 2015.
2. Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52(8):2275–2285, 2004. It is not clear what the new proposal in the paper is. WRITING QUALITY The paper is not well written and is not in good shape. Many meanings of the equations are not stated clearly, e.g., $\phi$ in eq. (7). Furthermore, the equation in algorithm 2 is not well formatted. DETAILED COMMENTS 1. The mapping function $\phi$ appears in Eq. (1) without definition. 2. The last equation in pp. 3 defines the decision function f by an inner product. In the equation, the notation x_t and i_t is not clearly defined. More seriously, a comma is missing in the definition of the inner product. 3. Some equations are labeled but never referenced, e.g., Eq. (4). 4. The physical meaning of Eq. (7) is unclear. However, this equation is the key proposal of the paper. For example, what is the output of Eq. (7)? What is the main objective of Eq. (7)? Moreover, what support vectors should be removed by optimizing Eq. (7)? One main issue is that the notation $\phi$ is not clearly defined. The computation of f - y_r \phi(s_r) makes it hard to understand. Especially, the dimension of $\phi$ in Eq. (7) is unknown. ABOUT EXPERIMENTS 1. It is unclear how to tune the hyperparameters. 2. In Table 1, the results only report the standard deviation of AUC. No standard deviations of nSV and Time are reported.
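To illustrate the kind of budgeted kernel online learner this review refers to — a decision function f(x) = sum_i alpha_i k(s_i, x) over a support set, with a support vector discarded once a budget is exceeded — here is a generic sketch. It is emphatically not the paper's OOKLA algorithm: the kernel, the mistake-driven update and the drop-oldest rule are arbitrary illustrative choices.

```python
# Generic budgeted kernel perceptron for binary labels y in {-1, +1}.
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

class BudgetedKernelPerceptron:
    def __init__(self, budget=50, gamma=1.0):
        self.budget, self.gamma = budget, gamma
        self.support, self.alpha = [], []

    def decision(self, x):
        return sum(a * rbf(s, x, self.gamma) for s, a in zip(self.support, self.alpha))

    def update(self, x, y):
        if y * self.decision(x) <= 0:            # mistake-driven update
            self.support.append(x)
            self.alpha.append(float(y))
            if len(self.support) > self.budget:  # enforce the budget
                self.support.pop(0)              # drop the oldest support vector
                self.alpha.pop(0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.sign(X[:, 0] * X[:, 1])                   # XOR-like labels
model = BudgetedKernelPerceptron(budget=30)
for xi, yi in zip(X, y):
    model.update(xi, yi)
print("support set size:", len(model.support))
```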
iclr_2018_BJRxfZbAW
One important aspect of generalization in machine learning involves reasoning about previously seen data in new settings. Such reasoning requires learning disentangled representations of data which are interpretable in isolation, but can also be combined in a new, unseen scenario. To this end, we introduce the contextaware learner, a model based on the variational autoencoding framework, which can learn such representations across data sets exhibiting a number of distinct contexts. Moreover, it is successfully able to combine these representations to generate data not seen at training time. The model enjoys an exponential increase in representational ability for a linear increase in context count. We demonstrate that the theory readily extends to a meta-learning setting such as this, and describe a fully unsupervised model in complete generality. Finally, we validate our approach using an adaptation with weak supervision.
The authors propose an extension to the Neural Statistician which can model contexts with multiple partially overlapping features. This model can explain datasets by taking into account covariate structure needed to explain away factors of variation and it can also share this structure partially between datasets. A particularly interesting aspect of this model is the fact that it can learn these contexts c as features conditioned on meta-context a, which leads to a disentangled representation. This is also not dissimilar to ideas used in 'Bayesian Representation Learning With Oracle Constraints' Karaletsos et al 2016 where similar contextual features c are learned to disentangle representations over observations and implicit supervision. The authors provide a clean variational inference algorithm to learn their model. However, a key problem is the following: the nature of the discrete variables being used makes them hard to infer with variational inference. The authors mention categorical reparametrization as their trick of choice, but do not go into empirical details in their experiments regarding the success of this approach. In fact, it would be interesting to study which level of these variables could be analytically collapsed (such as done in the Semi-Supervised learning work by Kingma et al 2014) and which ones can be sampled effectively using a form of reparametrization. This also touches on the main criticism of the paper: While the model technically makes sense and is cleanly described and derived, the empirical evaluation is on the weak side and the rich properties of the model are not really shown off. It would be interesting if the authors could consider adding a more illustrative experiment and some more empirical results regarding inference in this model and the marginal structures that can be learned with this model in controlled toy settings. Can the model recover richer structure that was imposed during data generation? How limiting is the learning of a? How does the likelihood of the model behave under the circumstances? The experiments do not really convey how well this all will work in practice.
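The categorical reparametrization (Gumbel-softmax / Concrete) mentioned in this review can be sketched in a few lines: perturb the logits with Gumbel noise and apply a temperature-controlled softmax, so that low temperatures give nearly one-hot samples. The logits and temperatures below are arbitrary.

```python
# Gumbel-softmax sampling of a relaxed categorical variable.
import numpy as np

def gumbel_softmax_sample(logits, tau, rng):
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.7]))
for tau in (5.0, 1.0, 0.1):
    print(f"tau={tau}:", gumbel_softmax_sample(logits, tau, rng).round(3))
```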
iclr_2018_HJw8fAgA-
A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed models that learn predictive and compact state representations, also called state-space models, substantially reduce the computational costs for predicting outcomes of sequences of actions. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment (ALE) from raw pixels. Furthermore, RL agents that use Monte-Carlo rollouts of these models as features for decision making outperform strong model-free baselines on the game MS PACMAN, demonstrating the benefits of planning using learned dynamic state abstractions.
Summary: This paper studies how to learn (hidden)-state-space models of environment dynamics, and integrate them with Imagination-Augmented Agents (I2A). The paper considers single-agent problems and tests on Ms Pacman etc. There are several variations of the hidden-state space [ds]SSM model: using det/stochastic latent variables + using det/stochastic decoders. In the stochastic case, learning is done using variational methods. [ds]SSM is integrated with I2A, which generates rollouts of future states, based on the inferred hidden states from the d/sSSM-VAE model. The rollouts are then fed into the agent's policy / value function. Main results seem to be: 1. Experiments on learning the forward model, show that latent forward models work better and faster than naive AR models on several Atari games, and better than fully model-free baselines. 2. I2A agents with latent codes work better than model-free models or I2A from pixels. Deterministic latent models seem to work better than stochastic ones. Pro: - Relatively straightforward idea: learn the forward model on hidden states, rather than raw states. - Writing is clear, although a bit dense in places. Con: - Paper only shows training curves for MS Pacman. What about the other games from Table 1? - The paper lacks any visualization of the latent codes. What do they represent? Can we e.g. learn a raw-state predictor from the latent codes? - Are the latent codes relevant in the stochastic model? See e.g. the discussion in "Variational Lossy Autoencoder" (Chen et al. 2016) - Experiments are not complete (e.g. for AR, as noted in the paper). - The games used are fairly reactive (i.e. do not require significant long-term planning), and so the sequential hidden-state-space model does not have to capture long-term dependencies. It would be nice to see how this technique fares on Montezuma's revenge, for instance. Overall: The paper proposes a simple idea that seems to work well on reactive 1-agent games. However, the paper could give more insights into *how* this works: e.g. a better qualitative inspection of the learned latent model, and how existing questions surrounding sequential stochastic model affect the proposed method. Also, not all baseline experiments are done, and the impact on training is only evaluated on 1 game. Detailed: -
iclr_2018_rk9kKMZ0-
Determining the optimal order in which data examples are presented to Deep Neural Networks during training is a non-trivial problem. However, choosing a non-trivial scheduling method may drastically improve convergence. In this paper, we propose a Self-Paced Learning (SPL)-fused Deep Metric Learning (DML) framework, which we call Learning Embeddings for Adaptive Pace (LEAP). Our method parameterizes mini-batches dynamically based on the easiness and true diverseness of the sample within a salient feature representation space. In LEAP, we train an embedding Convolutional Neural Network (CNN) to learn an expressive representation space by adaptive density discrimination using the Magnet Loss. The student CNN classifier dynamically selects samples to form a minibatch based on the easiness from cross-entropy losses and true diverseness of examples from the representation space sculpted by the embedding CNN. We evaluate LEAP using deep CNN architectures for the task of supervised image classification on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and SVHN. We show that the LEAP framework converges faster with respect to the number of mini-batch updates required to achieve a comparable or better test performance on each of the datasets.
The authors propose a method for creating mini-batches for a student network by using a second learned representation space to dynamically select examples by their 'easiness and true diverseness'. The framework is detailed and results on MNIST, CIFAR-10 and Fashion-MNIST are presented. The work presented is novel but there are some notable omissions: - there are no specific numbers presented to back up the improvement claims; graphs are presented but not specific numeric results - there is limited discussion of the computational cost of the framework presented - there is no comparison to a baseline in which the additional learning cycles used for learning the embedding are instead used for training the student model. - only small datasets are evaluated. This is unfortunate because if there are to be large gains from this approach, it seems that they are more likely to be found in the domain of large-scale problems than in toy datasets like MNIST. **Edit: In light of the changes made, and in particular the performance gains achieved on CIFAR-100, I have increased my rating from a 4 to a 6.
iclr_2018_SyoDInJ0-
Published as a conference paper at ICLR 2018 REINFORCEMENT LEARNING ALGORITHM SELECTION This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning (RL). The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations. ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations. ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.
The paper considers the problem of online selection of RL algorithms. An algorithm selection (AS) strategy called ESBAS is proposed. It works in a sequence of epochs of doubling length, in the following way: the algorithm selection is based on a UCB strategy, and the parameters of the algorithms are not updated within each epoch (in order that the returns obtained by following an algorithm be iid). This selection allows ESBAS to select a sequence of algorithms within an epoch to generate a return almost as high as the return of the best algorithm, if no learning were made. This weak notion of regret is captured by the short-sighted pseudo-regret. Now a bound on the global regret is much harder to obtain because there is no way of comparing, without additional assumptions, the performance of a sequence of algorithms to the best algorithm had this one been used to generate all trajectories from the beginning. Here it is assumed that all algorithms learn off-policy. However this is not sufficient, since learning off-policy does not mean that an algorithm is indifferent to the behavior policy that has generated the data. Indeed, even for the most basic off-policy algorithms, such as Q-learning, the way data have been collected is extremely important, and collecting transitions using that algorithm (such as epsilon-greedy) is certainly better than following an arbitrary policy (such as uniformly randomly, or following an even poorer policy which would not explore at all). However the authors seem to state an equivalence between learning off-policy and fairness of learning (defined in Assumption 3). For example, in their conclusion they mention "Fairness of algorithm evaluation is granted by the fact that the RL algorithms learn off-policy". This is not correct. I believe the main assumption made in this paper is Assumption 3 (and not that the algorithms learn off-policy), and this should be dissociated from the off-policy learning. This fairness assumption is a very strong assumption that does not seem to be satisfied by any algorithm that I can think of. Indeed, consider D being the data generated by following algorithm alpha, and let D' be the data generated by following algorithm alpha'. Then it makes sense that the performance of algorithm alpha is better when trained on D rather than D', and alpha' is better when trained on D' than on D. This contradicts the fairness assumption. This being said, I believe the merit of this paper is to make explicit the actual assumptions required to be able to derive a bound on the global regret. So although the fairness assumption is unrealistic, it at least has the benefit of being made explicit… So in the end I liked the paper because the authors tried to address this difficult problem of algorithm selection for RL the best way they could. Maybe future work will do better, but at least this is a first step in an interesting direction. Now, I would have liked a comparison with other algorithms for algorithm selection, like: - explore-then-exploit, where a fraction of T is used to try each algorithm uniformly, then the best one is selected and played for the rest of the rounds. - Algorithms that have been designed for curriculum learning, such as some described in [Graves et al., Automated Curriculum Learning for Neural Networks, 2017], where a proxy for learning progress is used to estimate how much an algorithm can learn from data. Other comments: - Table 1 is really incomprehensible. Even after reading Appendix B, I had a hard time understanding these results.
- I would suggest adding the fairness assumption to the main text and discussing it, as I believe this is a crucial component for understanding how the global regret can be controlled. - You may want to include references on restless bandits in Section 2.4, as this is very related to AS of RL algorithms (arms follow Markov processes). - The reference [Best arm identification in multi-armed bandits] is missing an author.
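To make the epoch-wise bandit selection described above concrete, a generic UCB arm-selection rule (not necessarily ESBAS's exact variant; the exploration constant c is an illustrative choice) looks roughly as follows. Within an epoch, the selected algorithm's frozen policy generates one episode and its return is used to update the running statistics:

```python
import math

def ucb_select(return_sums, counts, t, c=2.0):
    # Pick the RL algorithm (arm) with the highest upper confidence bound;
    # arms never tried in the current epoch are selected first.
    for k, n in enumerate(counts):
        if n == 0:
            return k
    scores = [return_sums[k] / counts[k]
              + c * math.sqrt(math.log(t + 1) / counts[k])
              for k in range(len(counts))]
    return scores.index(max(scores))
```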
iclr_2018_rJhR_pxCZ
As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows. Single decision trees are an example of a simple, interpretable classifier, but are unsuitable for use with complex, high-dimensional data. On the other hand, the variational autoencoder (VAE) is designed to learn a factored, low-dimensional representation of data, but typically encodes high-likelihood data in an intrinsically non-separable way. We introduce the differentiable decision tree (DDT) as a modular component of deep networks and a simple, differentiable loss function that allows for end-to-end optimization of a deep network to compress high-dimensional data for classification by a single decision tree. We also explore the power of labeled data in a supervised VAE (SVAE) with a Gaussian mixture prior, which leverages label information to produce a high-quality generative model with improved bounds on log-likelihood. We combine the SVAE with the DDT to get our classifier+VAE (C+VAE), which is competitive in both classification error and log-likelihood, despite optimizing both simultaneously and using a very simple encoder/decoder architecture.
Summary This paper proposes a hybrid model (C+VAE)---a variational autoencoder (VAE) composed with a differentiable decision tree (DDT)---and an accompanying training scheme. Firstly, the prior is specified as a mixture distribution with one component per class (SVAE). During training, the ELBO’s KL term uses the component that corresponds to the known label. Secondly, the DDT’s leaves are parametrized with the encoder distribution q(z|x), and thus gradient information flows back through the DDT into the posterior approximations in order to make them more discriminative. Lastly, the VAE and DDT are trained together by alternating optimization of each component (plus a ridge penalty on the decoder means). Experiments are performed on MNIST, demonstrating tree classification performance, (supervised) neg. log likelihood performance, and latent space interpretability via the DDT. Evaluation Pros: Giving the VAE discriminative capabilities is an interesting line of research, and this paper provides another take on tree-based VAEs, which are challenging to define given the discrete nature of the former and continuous nature of the latter. Thus, I applaud the authors for combining the two in a way that admits efficient training. Moreover, I like the qualitative experiment (Figure 2) in which the tree is used to vary a latent dimension to change the digit’s class. I can see this being used for dataset augmentation or adversarial example generation, for instance. Cons: An indefensible flaw in the work is that the model is evaluated on only MNIST. As there is no strong theory in the paper, this limited experimental evaluation is reason enough for rejection. Yet, moreover, the negative log likelihood comparison (Table 2) is not an informative comparison, as it speaks only to the power of adding supervision. Lastly, I do not think the interpretability provided by the decision tree is as great as the authors seem to claim. Decision trees provide rich and interpretable structure only when each input feature has clear semantics. However, in this case, the latent space is being used as input to the tree. As the decision tree, then, is merely learning hard, class-based partitioning rules for the latent space, I do not see how the tree is representing anything especially revealing. Taking Figure 2 as an example (which I do like the end result of), I could generate similar results with a black-box classifier by using gradients to perturb the latent ‘4’ mean into a latent ‘7’ mean (a la DeepDream). I could then identify the influential dimension(s) by taking the largest absolute values in the gradient vector. Maybe there is another use case in which a decision tree is superior; I’m just saying Section 4.3 doesn’t convince me to the extent that was promised earlier in the paper (and by the title). Comment: It's easier to make a latent variable model interpretable when the latent variables are given clear semantics in the model definition, in my opinion. Otherwise, the semantics of the latent space become too entangled. Could you, somehow, force the tree to encode an identifiable attribute at each node, which would then force that attribute to be encoded in a certain dimension of latent space?
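As a concrete illustration of the black-box alternative sketched above (this is the reviewer's hypothetical procedure, not anything from the paper; it assumes z0 is a single unbatched latent vector and that "classifier" maps a latent vector to a vector of class logits):

```python
import torch

def perturb_latent_towards_class(z0, classifier, target, lr=0.1, steps=50):
    # Gradient ascent on the target-class logit with respect to the latent
    # code; afterwards, rank latent dimensions by how much they moved.
    z_start = z0.clone().detach()
    z = z_start.clone().requires_grad_(True)
    for _ in range(steps):
        logit = classifier(z)[target]
        grad, = torch.autograd.grad(logit, z)
        z = (z + lr * grad).detach().requires_grad_(True)
    delta = (z.detach() - z_start).abs()
    return z.detach(), delta.argsort(descending=True)
```

The most-changed dimensions returned here would play the same role as the tree-selected dimension in Figure 2, which is why the decision tree by itself does not obviously add interpretability.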
iclr_2018_S1XolQbRW
MODEL COMPRESSION VIA DISTILLATION AND QUANTIZATION Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger networks, called "teachers," into compressed "student" networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher network, into the training of a smaller student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to state-of-the-art full-precision teacher models, while providing up to order of magnitude compression, and inference speedup that is almost linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.
The paper proposes to combine two approaches to compress deep neural networks - distillation and quantization. The authors propose two methods, one largely relying on the distillation loss idea followed by a quantization step, and another one that also learns the location of the quantization points. Somewhat surprisingly, nobody has combined the two approaches before, which makes this paper interesting. Experiments show that both methods work well in compressing large deep neural network models for applications where resources are limited, like on mobile devices. Overall I am mostly OK with this paper but not impressed by it. Detailed comments below. 1. Quantizing with respect to the distillation loss seems to do better than with the normal loss - this needs more discussion. 2. The idea of using the gradient with respect to the quantization points to learn them is interesting but not entirely new (see, e.g., "Matrix Recovery from Quantized and Corrupted Measurements", ICASSP 2014 and "OrdRec: An Ordinal Model for Predicting Personalized Item Rating Distributions", RecSys 2011, although in a different context). I also wonder if it would work better if you could also allow the weights to move a little bit (it seems to me from Algorithm 2 that you only update the quantization points). How about learning them altogether? Also this differentiable quantization method does not really depend on distillation, which is kind of confusing given the title. 3. I am a little bit confused by how the bits are redistributed in the second method, as in the end it seems to use more than the proposed number of bits shown in the table (as recognized in section 4.2). This makes the comparison a little bit unfair (especially for the CIFAR-100 case, where the "2 bits" differentiable quantization is actually using 3.23 bits). This needs more clarification. 4. The writing can be improved. For example, the concepts of "teacher" and "student" are not clear at all in the abstract - consider putting the first sentence of Section 3 in there instead. Also, the first sentence of the paper reads as "... have showed tremendous performance", which is not proper English. At the top of page 3 I found "we will restrict our attention to uniform and non-uniform quantization". What are you not restricting to, then? Slightly increased my rating after reading the rebuttal and the revision.
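For context, here is a generic sketch (not the authors' code) of the two ingredients being combined: a temperature-softened distillation loss and a uniform weight quantizer. The values of T, alpha and bits are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Weighted sum of the soft (teacher) KL term and the hard-label term.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def uniform_quantize(w, bits=4):
    # Snap weights to 2**bits evenly spaced levels between their min and max.
    lo, hi = w.min(), w.max()
    scale = (hi - lo).clamp_min(1e-12) / (2 ** bits - 1)
    return torch.round((w - lo) / scale) * scale + lo
```

The differentiable-quantization variant discussed in point 2 would additionally treat the level locations themselves as parameters updated by gradient descent, rather than fixing them to an evenly spaced grid as above.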
iclr_2018_SyyGPP0TZ
Published as a conference paper at ICLR 2018 REGULARIZING AND OPTIMIZING LSTM LANGUAGE MODELS In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-tohidden weights, as a form of recurrent regularization. Further, we introduce NTAvSGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (AvSGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our AvSGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https://github.com/salesforce/ awd-lstm-lm.
This is a well-written paper that proposes regularization and optimization strategies for word-based language modeling tasks. The authors propose the use of DropConnect on the hidden-hidden connections as a regularization method, in order to take advantage of high-speed LSTM implementations via the cuDNN LSTM libraries from NVIDIA. The focus of this work is on the prevention of overfitting on the recurrent connections of the LSTM. The authors explore a variant of Average-SGD (NT-ASGD) as an optimization strategy which eliminates the need for tuning the averaging trigger and uses a constant learning rate. Averaging is triggered when the validation loss worsens or stagnates for a few cycles, leading to two new hyperparameters: logging interval and non-monotone interval. Other forms of well-known regularization methods were applied to the non-recurrent connections, input, output and embedding matrices. As the authors point out, all the methods used in this paper have been proposed before and their theoretical convergence explained. The novelty of this work lies in its successful application to the language modeling task, achieving state-of-the-art results. On the PTB task, the proposed AWD-LSTM achieves a perplexity of 57.3 vs 58.3 (Melis et al 2017) and almost the same perplexity as Melis et al. on the WikiText-2 task (65.8 vs 65.9). The addition of a cache model provides significant gains on both tasks. It would be useful if the authors had explored the behavior of the AWD-LSTM algorithm with respect to various hyperparameters and provided a few insights towards their choices for other large-vocabulary language modeling tasks (1 million vocabulary sizes). Similarly, the choice of the averaging trigger and number of cycles seems arbitrary - it would have been good to see a graph over a range of values, showing their impact on the model's performance. A 3-layer LSTM has been used for the experiments - how was this choice made? What is the impact of this algorithm if the net were a 2-layer net, as is typical in most large-scale LMs? Table 3 is interesting in showing how the cache model helps with rare words and as such has applications in keyword spotting tasks. Were the hyperparameters of the cache tuned to perform better on rare words? More details on the design of the cache model would have been useful. You state that the gains obtained using the cache model were far less than what was obtained in Graves et al 2016 - what do you attribute this to? The ablation analysis in Table 4 is very useful - in particular it shows how the lack of regularization of the recurrent connections can lead to maximum degradation in performance. Most of the results in this paper have been based on one choice of various model parameters. Given the empirical nature of this work, it would have made the paper even clearer if an analysis of these choices were presented. Overall, it would be beneficial to the NLP community to see this paper accepted to the conference.
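To make the weight-dropping idea concrete, here is a generic DropConnect sketch (illustrative; not the authors' cuDNN-compatible implementation) applied to the hidden-to-hidden matrix of a hand-rolled recurrent step:

```python
import torch

def dropconnect(weight, p=0.5, training=True):
    # Drop individual weights (not activations); the same mask is reused
    # for every time step of the forward pass.
    if not training:
        return weight
    mask = torch.bernoulli(torch.full_like(weight, 1 - p))
    return weight * mask / (1 - p)

# Illustrative use inside a simple recurrent loop:
# W_hh_drop = dropconnect(W_hh, p=0.5)
# for t in range(T):
#     h = torch.tanh(x[t] @ W_xh + h @ W_hh_drop)
```

Because the mask is applied to the weight matrix once per forward pass rather than to activations at every step, a fused recurrent kernel can still be used, which is the practical appeal of the method.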
iclr_2018_Byk4My-RZ
We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how these distributions can be induced directly from the data. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.
The paper proposes, under the GAN setting, mapping real data points back to the latent space via the "generator reversal" procedure on a sample-by-sample basis (hence without the need of a shared recognition network), and then using this induced empirical distribution as the "ideal" prior, which yet another GAN network might then be trained to match in order to produce a better prior for the original GAN. I find this idea potentially interesting, but am more concerned with the poorly explained motivation as well as some technical issues in how this idea is implemented, as detailed below. 1. Actually I find the entire notion of an "ideal" prior under the GAN setting a bit strange. To start with, GAN is already training the generator G to match the induced P_G(x) (from P(z)) with P_d(x), and hence by definition, under the generator G, there should be no better prior than P(z) itself (because any change of P(z) would then induce a different P_G(x) and hence only move away from the learning target). I get that maybe under a different P(z) the difficulty of learning a good generator G can be different, and therefore one may wish to iterate between updating G (under the current P(z)) and updating P(z) (under the current G), and hopefully this process might converge to a better solution. But I feel this sounds like a new angle and not the one that is adopted by the authors in this paper. 2. I think the discussions around Eq. (1) are not well grounded. Just as you said right before presenting Eq. (1), typically the goal of learning a DGM is just to match Q_x with the true data distribution P_x. It is **not**, however, to match Q(x,z) with P(x,z). And btw, don't you need to put E_z[ ... ] around the 2nd term on the r.h.s.? 3. I find the paper mingles notions from GAN and VAE sometimes and misrepresents some of the key differences between the two. E.g. in the beginning of the 2nd paragraph in the Introduction, the authors write "Generative models like GANs, VAEs and others typically define a generative model via a deterministic generative mechanism or generator ...". I think the use of a **deterministic** generator is probably one of the unique features of GAN, and that is certainly not the case with VAE, where typically people still need to specify an explicit probabilistic generative model. And for this same reason, I find the multiple references to "a generative model P(x|z)" in this paper inaccurate and a bit misleading. 4. I'm not sure whether it makes good sense to apply an SVD decomposition to the \hat{z} vectors. It seems to me the variances \nu^2_i should be directly estimated from \hat{z} as is. Otherwise, the reference "ideal" distribution would be modeling a **rotated** version of the \hat{z} samples, which imo only introduces unnecessary discrepancies. 5. I don't quite agree with the asserted "multi-modal structure" in Figure 2. Let's assume a 2d latent space, where each quadrant represents one MNIST digit (e.g. 1,2,3,4). You may observe a similar structure in this latent space yet still learn a good generator under even a standard 2d Gaussian prior. I guess my point is, a seemingly well-partitioned latent space doesn't bear an obvious correlation with a multi-modal distribution in it. 6. The generator reversal procedure needs to be carried out once for each data point separately, and repeated whenever the generator has been updated, which seems to introduce a potentially significant bottleneck into the training process.
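For concreteness, a schematic sketch of the per-sample generator reversal described above (illustrative only; the paper's exact objective, optimizer and step counts may differ):

```python
import torch

def generator_reversal(x, G, z_dim, steps=200, lr=0.05):
    # Recover a latent code for one data point by gradient descent on the
    # reconstruction error, without any shared recognition network.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()
```

Running this inner optimization per data point, and re-running it whenever G changes, is exactly the cost concern raised in point 6.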
iclr_2018_ry4SNTe0-
Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning. However, it suffers from the problem of unstable training. In this paper, we found that the instability is mostly due to the vanishing gradients on the generator. To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN. The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.
Summary of paper and review: The paper presents the instability issue of training GANs for semi-supervised learning. Then, they propose to essentially utilize a WGAN for semi-supervised learning. The novelty of the paper is minor, since similar approaches have been proposed before. The analysis is poor, the text seems to contain mistakes, and the results don't seem to indicate any advantage or promise of the proposed algorithm. Detailed comments: - Unless I'm grossly mistaken, the loss function (2) is clearly wrong: a cross-entropy term used by Salimans et al. is missing. - As well, if equation (4) is referring to feature matching, the expectation should be inside the norm and not outside (as written, this amounts to matching specific random fake examples to specific random real examples, an imbalanced form of MMD). - Theorem 2.1 is an almost literal rewrite of Theorem 2.4 of [1], without proper attribution. Furthermore, Theorem 2.1 is not sufficient to demonstrate the existence of these issues. This is why [1] provides an extensive batch of targeted experiments to verify these assumptions. Analogous experiments are clearly missing. A detailed analysis of these assumptions and their implications is missing. - In section 3, the authors propose a minor variation of the Improved GAN approach by using a WGAN on the unsupervised part of the loss. Remarkably similar algorithms to this (where the two discriminators are two separate heads) have been proposed before (see, for example, [2]; other approaches exist after that, see for example papers citing [2]). - Theorem 3.1 is a trivial consequence of Theorem 3 from WGAN. - The experiments leave much to be desired. It is widely known that MNIST is a bad benchmark at this point, and that no signal can be established from a minor success on this dataset. Furthermore, the results on CIFAR don't seem to bring any advantage, considering the 0.1% difference in accuracy is 1/100 of chance level on this dataset. [1]: Arjovsky & Bottou, Towards Principled Methods for Training Generative Adversarial Networks, ICLR 2017 [2]: Mroueh, Sercu & Goel, McGan: Mean and Covariance Feature Matching GAN, ICML 2017
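For reference, the standard feature-matching objective being alluded to takes the batch means before the norm; a minimal generic sketch (not the paper's equation (4)):

```python
import torch

def feature_matching_loss(f_real, f_fake):
    # || E[f(x)] - E[f(G(z))] ||^2 : the expectation (batch mean) sits
    # inside the norm, so individual examples are never paired up.
    return ((f_real.mean(dim=0) - f_fake.mean(dim=0)) ** 2).sum()
```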
iclr_2018_rylSzl-R-
Published as a conference paper at ICLR 2018 ON UNIFYING DEEP GENERATIVE MODELS Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.
The authors develop a framework interpreting GAN algorithms as performing a form of variational inference on a generative model reconstructing an indicator variable of whether a sample is from the true or the generative data distribution. Starting from the 'non-saturated' GAN loss, the key result (Lemma 1) shows that GANs minimize the KL divergence between the generator (inference) distribution and a posterior distribution implicitly defined by the discriminator. I found the IWGAN and especially the AAVAE experiments quite interesting. However, the paper is also very dense and quite hard to follow at times. In general, I think the paper would benefit from moving some content (like the wake-sleep part of the paper) to the appendix and concentrating more on the key results and a few more experiments, as detailed in the comments / questions below. Q1) What would happen if the KL-divergence minimizing loss proposed by Huszar (see e.g. http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/) were used instead of the "non-saturated" GAN loss - would the residual JSD terms in Lemma 1 cancel out then? Q2) In Lemma 1 the negative JSD term looks a bit nasty to me: in addition to the KL divergence, the GAN loss also maximises the JSD between the data and generative distributions. This JSD term acts in a direction somewhat opposed to the KL-divergence that we are interested in minimizing. Can the authors provide some more detailed comments / analysis on these two somewhat opposed terms? I find this quite important to include, given the opposed direction of the JSD versus the KL term and the fact that the JSD is ignored in e.g. Section 4.1. Secondly, did the authors do any experiments on the relative sizes of these two terms? I imagine it would be possible to perform some low-dimensional toy experiments where both terms are tractable to compute numerically. Q3) I think the paper could benefit from some intuition / discussion of the posterior term q^r(x|y) in Lemma 1, composed of the prior p_theta0(x) and the discriminator q^r(y|x). The term drops out nicely in the math; however, I had a bit of a hard time wrapping my head around what minimizing the KL-divergence between this term and the inference distribution p(x|y) actually achieves. I know this is a kind of open-ended question, but I think it would greatly aid the reader in understanding the paper if more 'guidance' were provided instead of just writing "..by definition this is the posterior." Q4) In a similar vein to the above, it would be nice to have some more discussion / definitions of the terms in Lemma 2, e.g. what does "Here most of the components have exact correspondences (and the same definitions) in GANs and InfoGAN (see Table 1)" mean? Q5) The authors state that there are 'strong connections' between VAEs and GANs. I agree that (after some assumptions) both minimize a KL-divergence (Table 1); however, to me it is not obvious how strong this relation is. Could the authors provide some discussion / thoughts on this topic? Overall I like this work but also feel that some aspects could be improved. My main concern is that a lot of the analysis hinges on the JSD term being insignificant, but the authors, to my knowledge, do not provide any proof / indication that this is actually true. Secondly, I think the paper would greatly benefit from concentrating on fewer topics (e.g.
maybe drop the RW topic as it feels a bit like an appendix) and instead provide a more thorough discussion of the theory (Lemma 1 + Lemma 2) as well as some more experiments w.r.t. the JSD term.
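As a pointer to the kind of toy check suggested in Q2, here is a minimal numerical sketch (ours, purely illustrative) that compares KL and JSD between two 1-D Gaussians by integration on a grid:

```python
import numpy as np

def kl_and_jsd_1d(mu1, s1, mu2, s2, grid=np.linspace(-10, 10, 4001)):
    # Numerically integrate KL(p||q) and JSD(p, q) for two 1-D Gaussians.
    def pdf(mu, s):
        return np.exp(-0.5 * ((grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    eps = 1e-300
    p, q = pdf(mu1, s1) + eps, pdf(mu2, s2) + eps
    dx = grid[1] - grid[0]
    kl = np.sum(p * np.log(p / q)) * dx
    m = 0.5 * (p + q)
    jsd = 0.5 * np.sum(p * np.log(p / m)) * dx + 0.5 * np.sum(q * np.log(q / m)) * dx
    return kl, jsd
```

Plotting both quantities as the two Gaussians are pulled apart would give a first impression of how large the residual JSD term can become relative to the KL term.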
iclr_2018_B1Yy1BxCZ
Published as a conference paper at ICLR 2018 DON'T DECAY THE LEARNING RATE, INCREASE THE BATCH SIZE It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate ε and scaling the batch size B ∝ ε. Finally, one can increase the momentum coefficient m and scale B ∝ 1/(1 − m), although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes.
## Review Summary Overall, the paper's core claim, that increasing batch sizes at a linear rate during training is as effective as decaying learning rates, is interesting but doesn't seem to be too surprising given other recent work in this space. The most useful part of the paper is the empirical evidence to back up this claim, which I can't easily find in previous literature. I wish the paper had explored a wider variety of datasets and models to better show how well this claim generalizes, better situated the practical benefits of the approach (how much wallclock time is actually saved? how well can it be integrated into a distributed workflow?), and included some comparisons with other recent recommended ways to increase batch size over time. ## Pros / Strengths + effort to assess momentum / Adam / other modern methods + effort to compare to previous experimental setups ## Cons / Limitations - lack of wallclock measurements in experiments - only ~2 models / datasets examined, so difficult to assess generalization - lack of discussion about distributed/asynchronous SGD ## Significance Many recent previous efforts have looked at the importance of batch sizes during training, so the topic is relevant to the community. Smith and Le (2017) present a differential equation model for the scale of gradients in SGD, finding a linear scaling rule proportional to eps N/B, where eps = learning rate, N = training set size, and B = batch size. Goyal et al (2017) show how to train deep models on ImageNet effectively with large (but fixed) batch sizes by using a linear scaling rule. A few recent works have directly tested increasing batch sizes during training. De et al (AISTATS 2017) have a method for gradually increasing batch sizes, as do Friedlander and Schmidt (2012). Thus, it already seems plausible to practitioners that the proposed linear scaling of batch sizes during training would be effective. While increasing batch size at the proposed linear scale is simple and seems to be effective, a careful reader will be curious how much more could be gained from the backtracking line search method proposed in De et al. ## Quality Overall, only single training runs from a random initialization are used. It would be better to take the best of many runs or to somehow show error bars, to avoid the reader wondering whether gains are due to changes in algorithm or to poor exploration due to bad initialization. This happens a lot in Sec. 5.2. Some of the experimental settings seem a bit haphazard and not very systematic. In Sec. 5.2, only two learning rate scales are tested (0.1 and 0.5). Why not examine a more thorough range of values? Why not report actual wallclock times? Of course having a reduced number of parameter updates is useful, but it's difficult to tell how big of a win this could be. What about distributed SGD or asynchronous SGD (Hogwild)? Small batch sizes sometimes make it easier for many machines to be working simultaneously. If we scale up to batch sizes of ~ N/10, we can only get 10x speedups in parallelization (in terms of number of parameter updates). I think there is some subtle but important discussion needed on how this framework fits into modern distributed systems for SGD. ## Clarity Overall the paper reads reasonably well. Offering a related work "feature matrix" that helps readers keep track of how previous efforts scale learning rates or minibatch sizes for specific experiments could be valuable.
Right now, lots of this information is just provided in text, so it's not easy to make head-to-head comparisons. Several figure captions should be updated to clarify which model and dataset are studied. For example, when skimming Fig. 3's caption there is no such information. ## Paper Summary The paper examines the influence of batch size on the behavior of stochastic gradient descent to minimize cost functions. The central thesis is that instead of the "conventional wisdom" to fix the batch size during training and decay the learning rate, it is equally effective (in terms of training/test error reached) to gradually increase batch size during training while fixing the learning rate. These two strategies are thus "equivalent". Furthermore, using larger batches means fewer parameter updates per epoch, so training is potentially much faster. Section 2 motivates the suggested linear scaling using previous SGD analysis from Smith and Le (2017). Section 3 makes connections to previous work on finding optimal batch sizes to close the generalization gap. Section 4 extends the analysis to include SGD methods with momentum. In Section 5.1, experiments training a 16-4 ResNet on CIFAR-10 compare three possible SGD schedules: * increasing batch size * decaying learning rate * hybrid (increasing batch size and decaying learning rate) Fig. 2, 3 and 4 show that across a range of SGD variants (+/- momentum, etc) these three schedules have similar error vs. epoch curves. This is the core claimed contribution: empirical evidence that these strategies are "equivalent". In Section 5.3, experiments look at Inception-ResNet-V2 on ImageNet, showing the proposed approach can reach comparable accuracies to previous work at even fewer parameter updates (2500 here, vs. ∼14000 for Goyal et al. 2017)
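To make the central claim concrete, the recipe as summarized above can be sketched as a drop-in replacement for a step-wise learning-rate decay (the milestones and factor here are illustrative, not the paper's exact values):

```python
def batch_size_schedule(epoch, base_batch=128, milestones=(60, 120, 160), factor=5):
    # Instead of dividing the learning rate by `factor` at each milestone,
    # multiply the batch size by `factor` and keep the learning rate fixed.
    b = base_batch
    for m in milestones:
        if epoch >= m:
            b *= factor
    return b
```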
iclr_2018_H1Xw62kRZ
Published as a conference paper at ICLR 2018 LEVERAGING GRAMMAR AND REINFORCEMENT LEARNING FOR NEURAL PROGRAM SYNTHESIS Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.
The authors consider the task of program synthesis in the Karel DSL. Their innovations are to use reinforcement learning to guide the sequential generation of tokens towards a high-reward output, to incorporate syntax checking into the synthesis procedure to prune syntactically invalid programs, and, finally, to learn a model that predicts syntactic correctness in the absence of a syntax checker. While the results in this paper look good, I found many aspects of the exposition difficult to follow. In section 4, the authors define objectives, but do not clearly describe how these objectives are optimized, instead relying on the reader to infer from context how REINFORCE and beam search are applied. I was not able to understand whether syntactic correctness is enforced by way of the reward introduced in section 4, or by way of the conditioning introduced in section 5.1. Discussion of the experimental results could similarly be clearer. The best method very clearly depends on the task and the amount of available data, but I found it difficult to extract an intuition for which method works best in which setting and why. On the whole this seems like a promising paper. That said, I think the authors would need to convincingly address issues of clarity in order for this to appear. Specific comments - Figure 2 is too small - Equation 8 is confusing in that it defines a Monte Carlo estimate of the expected reward, rather than an estimator of the gradient of the expected reward (which is what REINFORCE is). - It is not clear how beam search is carried out. In equation (10) there appear to be two problems. The first is that the index i appears twice (once in i=1..N and once in i \in 1..C); the second is that λ_r refers to an index that does not appear. More generally, beam search is normally an algorithm where, at each search depth, the set of candidate paths is pruned according to some heuristic. What is the heuristic here? Is syntax checking used at each step of token generation, or something along these lines? - What is the value of the learned syntax model in section 5.2? Presumably we need a large corpus of syntax-checked training examples to learn this model, which means that, in practice, we still need to have a syntax checker available, do we not?
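For reference, the score-function (REINFORCE) estimator that the comment on Equation 8 has in mind is an estimator of the gradient of the expected reward (notation ours, not the paper's):

\nabla_\theta \mathbb{E}_{\tau \sim p_\theta}[R(\tau)] = \mathbb{E}_{\tau \sim p_\theta}\big[ R(\tau)\, \nabla_\theta \log p_\theta(\tau) \big] \approx \frac{1}{S} \sum_{s=1}^{S} R(\tau^{(s)})\, \nabla_\theta \log p_\theta(\tau^{(s)}),

whereas a plain Monte Carlo average of R(\tau^{(s)}) estimates only the expected reward itself.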
iclr_2018_B14TlG-RW
Published as a conference paper at ICLR 2018 QANET: COMBINING LOCAL CONVOLUTION WITH GLOBAL SELF-ATTENTION FOR READING COMPRE- HENSION Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score 1 on the test set, which is significantly better than the best published F1 score of 81.8.
This paper proposes two contributions: first, applying CNNs+self-attention modules instead of LSTMs, which could result in significant speedup and good RC performance; second, enhancing the RC model training with passage paraphrases generated by a neural paraphrasing model, which could improve the RC performance marginally. Firstly, I suggest the authors rewrite the end of the introduction. The current version tends to mix everything together and makes a misleading claim. When I read the paper, I thought the speed-up mechanism could give both the speedup and the performance boost, and lead to the 82.2 F1. But it turns out that the above improvements are achieved with at least three different ideas: (1) the CNN+self-attention module; (2) the entire model architecture design; and (3) the data augmentation method. Secondly, none of the above three ideas are well evaluated in terms of both speedup and RC performance, and I will comment in detail as follows: (1) The CNN+self-attention module mainly borrows the idea from (Vaswani et al., 2017a), transferring it from NMT to RC. The novelty is limited but it is a good idea to speed up the RC models. However, as the authors hoped to claim that this module could contribute to both speedup and RC performance, it will be necessary to show the RC performance of the same model architecture, but replacing the CNNs with LSTMs. Only if the proposed architecture still gives better results can the claims in the introduction be considered correct. (2) I feel that the model design is the main reason for the good overall RC performance. However, in the paper there is no motivation given for why the architecture was designed like this. Moreover, the whole model architecture is only evaluated on the SQuAD dataset. As a result, it is not convincing that the system design has good generalization. If in (1) it is observed that using LSTMs in the model instead of CNNs could give on-par or better results, it will be necessary to test the proposed model architecture on multiple datasets, as well as to conduct more ablation tests on the model architecture itself. (3) I like the idea of data augmentation with paraphrasing. Currently, the improvement is only marginal, but there seem to be many other things to play with. For example, training NMT models with larger parallel corpora; training NMT models with different language pairs with English as the pivot; and better strategies to select the generated passages for data augmentation. I am looking forward to the test performance of this work on SQuAD.
iclr_2018_SkPoRg10b
We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.
The authors suggest that ideas from statistical mechanics will help to understand the "peculiar and counterintuitive generalization properties of deep neural networks." The paper's key claim (from the abstract) is that their approach "provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc." This claim is restated on p. 2, third full paragraph. I am sympathetic to the idea that ideas from statistical mechanics are relevant to modern learning theory. However, I do not find this paper at all convincing. I find the paper incoherent: I am unable to understand the argument for the central claims. On the one hand, the paper seems to be written as a "response" to Zhang et al.'s "Understanding Deep Learning Requires Rethinking Generalization", (henceforth Z): the introduction mentions Z multiple times, and the title of this work refers to Z. On the other hand, none of the issues raised by Z are (as far as I can tell) addressed in any substantial way by this paper. In somewhat more detail, this work discusses two major observations: 1. Neural nets can easily overtrain, even to random data. 2. Popular ways to regularize may or may not help. Z certainly observes 1 and arguably observes 2. (I'd argue against, see below, but it's at least arguable.) I do not see how this paper addresses either observation. Instead, what the statistical mechanics (SM) approach seems to do is explain (or predict) the existence of phase transitions, where we suddenly go from a regime of poor generalization to good generalization or vice versa. However, neither Z nor, as far as I can tell, any other reference given here, suggests that these phase transitions are frequently observed in modern deep learning. The most relevant bit from Z is Figure 1c, which suggests that as the noise level is increased (corresponding to alpha decreasing in this paper), the generalization error increases smoothly. This seems to be in direct contradiction to the predictions made by the theories presented here. If the authors wish to hold to the claim that their work "can provide a qualitative explanation of recently-observed empirical properties that are not easily-understandable from within PAC/VC theory of generalization, as it is commonly-used in ML" (p. 2), it is absolutely critical that they be more specific about which specific observations from which papers they think they are explaining. As written, I simply do not see which actual observations they think they explain. In observation 2, the authors suggest that many popular ways to implement regularization "do not substantially improve the situation". A careful reading of Z (and this was corroborated by discussion with the authors) is that Z observed that regularization with parameters commonly used in practice (or, put differently, regularization parameters that led to the highest holdout accuracy in other papers) still led to substantial overtraining on noisy data. I think it is almost certainly true (see below for more discussion) that much larger values of regularization can prevent overfitting, at the cost of underfitting. 
It's also worth noting that Z agrees with basically all practitioners that various regularization techniques can make an important difference to practitioners who want to minimize test error; what they don't do (at least at moderate values) is *qualitatively* destroy a network's ability to overfit to noise. It is unclear to me how this paper explains observation 2 (see below for extensive discussion). I don't actually understand the first full paragraph on p. 2 well. It is true that we can always avoid overtraining by tuning regularization parameters to get better generalization *error* (difference between train and test) on the test data set (but possibly worse generalization accuracy); the rest of the paper seems to take the opposite side on this. A Gaussian kernel SVM with a small enough bandwidth and small enough regularization parameter can also overfit to noise. The argument needs to be sharpened here. I find the discussion of noise at the bottom of p. 2 confusing. The authors describe tau as "having to do with noise in the learning process", but then suggest that "adding noise decreases the effective load." This is the first time noise is really talked about, and it seems like maybe noise in the data is about alpha, but noise in the "learning process" is about tau? This should be clarified. On p. 3, the authors refer to "the two parameters used by Z and many others." I am honestly not sure what's being referred to here. I just reread Z and I don't get it. What two parameters are used by Z? p. 3, figure. The authors should be clear about which recent (ideally widely-discussed) experimental results look anything like this figure. I found nothing in Z et al. In Appendix A.4, there is a mention of Figure 3 of Choromanska et al. 2014; that figure also seems to be totally consistent with smooth transitions and does not (to me) present any obvious evidence of a sharp phase transition. (In any case, the main paper should not rely heavily on the appendix for its main empirical evidence.) p. 3, figure 1a. What is essential in this figure? A single phase transition? That the error be very low on the r.h.s. of the phase transition (probably not that, judging from the related models in the Appendix). p. 3, figure 1b/c. What does SG stand for? As far as I can tell it's never discussed. p. 4. "Thus, an important more general insight from our approach is that --- depending strongly on details of the model, the specific details of the learning algorithm, the detailed properties of the data and their noise etc. --- going beyond worst-case bounds can lead to a rich and complex array of manners in which generalization can depend on the control parameters of the ML process." This is well-known to all practitioners. This paper does not seem to offer any specific testable explanations or predictions of any sort. I certainly agree that the study of SM models is "interesting", but what would make this valuable would be a more direct analogy, a direct explanation of some empirical phenomenon. Section 2 in general. The authors discuss a couple of different types of observations: (1) "strong discontinuities in generalization performance as a function of control parameters" aka phase transitions, and (2) generalization performance can depend sensitively on details of the model, details of algorithms, implicit regularization properties, detailed properties of data and noise, etc." (1) shows up in the SM literature from the 90's discussed in Appendix A.
I don't think it shows up in modern practice, and I don't think it shows up in Z. (2) is absolutely relevant to modern practitioners, but I don't see what this paper has to say about it beyond "SM literature from the 90's exhibits similar phenomena." The model introduced in Section 3 abstracts all such concerns away. Section 3. I am not super comfortable with the idea of "Claims", especially since the 3 Claims seem to be different sorts of things. I would normally think of a "Claim" as something that could be true or false, possibly with some argument for its truth. Claim 1 introduces a model (VSDL), but I wouldn't call this a claim, since nothing is actually "claimed." The subpoints of Claim 1 are arguably claims, but they're not introduced as such. I address these in turn: "Adding noise decreases an effective load alpha." The paper states "N is the effective capacity of the model trained on these data", but "effective capacity" is never defined. Certainly, if we *define* alpha = m_eff / N and *define* m_eff = m - m_rand, the (sub)claim follows, but why are those definitions good? I *think* what's going on here is hidden in the sentence "empirical results indicate that for realistic DNNs it is close to 1. Thus, the model capacity N of realistic DNNs scales with m and not m_eff.", where "it" refers to the Rademacher complexity. Well, OK, but if we agree with that, then aren't we just *assuming* the main result of Z rather than explaining it? We're basically just stating that the models can memorize the data? I don't really understand the point the last part of the paragraph is trying to make (everything after what I quoted above). "Early stopping increases an effective temperature tau." I find this plausible but don't understand the argument at all. To this reader, it's just "stuff from SM I don't understand." I think the typical ML reader of this paper won't necessarily be familiar with any of "the weights evolve according to a relaxation Langevin equation", "from the fluctuation-dissipation theorem", or the reference to annealing rate schedules. Consider either explaining this more or just appealing to SM and relegating this to an appendix. After the claim, the paper mentions that the VSDL model ignores other "knobs". This is fine for a model, but I think it's totally disingenuous to then suggest that this model explains anything about other popular ways to regularize (Observation 2 in the intro, see also my comment on Section 2). In the intro, the claim is "Other regularizations sometimes help and sometimes don't and we don't understand why" (the claim is about overfitting but it's also true for improving performance in general), which is basically true. But introducing a model which completely abstracts these things away cannot possibly explain anything about the behavior. Claim 2 is that we should consider a thermodynamic limit where model complexity grows with data (the paper says grows with the number of parameters, I assume this is a typo). I would probably call this one an "Assumption", with some arguments for the justification. I think this is one of the most interesting and important ideas in the paper, and I don't fully understand it, even after reading the appendix. I have questions. How should / could this apply to practitioners, who cannot in general hope to obtain arbitrary amounts of data? Are we assuming that any (or all) modern DNN experiments are in the asymptotic regime? Are we assuming the experiments in Z are in this regime? 
Is there any relevance to the fact that in an ML problem (unlike in say a spin glass, at least as far as I know) the "complexity" of the *task* is *not* increasing with the data size, so eventually one will have seen "enough" data to "saturate" the task? I'd love to know more. Claim 3 is more of an "Informal Theorem" that under the model of Claim 1 and the assumption of Claim 2, the phase diagrams of Figure 1 hold. The "proof" is a reference to SM papers. This should be clarified. Yet again, I point out that I do not know any modern large-scale NN experiments that correspond to any of the pictures in Figure 1. There's a mention of "tau = 0 or t > 0." What is the significance of tau = 0? How should an ML reader think about this? Section 3.2 suggests that Claim 3 (the existence of the 1 and 2d phase diagrams) "explain" Observations 1 and 2 from the Appendix. I simply do not see this. For Observation 1, that NNs can easily overtrain, the "argument" seems to boil down to "the system is in a phase where it cannot help but overtrain." This is hardly an explanation at all. How do we know what phase these experiments were in? How do we know these experiments were in the thermodynamic limit? For Observation 2, the authors point out that in VSDL, "the only way to prevent overfitting is to decrease the number of iterations." This seems true but vacuous: the authors introduced a model where regularization doesn't correspond to any knobs, so of course to the extent that that model explains reality, the knobs don't stop overfitting. But this feels like begging the question. If we accept the VSDL model, we'd also accept that various regularizations can't improve generalization, which goes directly against basically all practice. I guess I technically have to concede that "Given the three claims", Observation 2 follows, but Claim 1 by itself seems to be already assuming the conclusion. Minor writing issues: The authors mention at least four times that reproducing others' results is not easy (p. 1 observation 1, p. 4 first paragraph, p. 4 footnote 6, last sentence of the main text). While I think this statement is true, it is quite well-known, and I suggest that the authors may simply alienate readers by harping on it here. p. 1. "may always overtrain" is unclear. I don't know what it means. Is the claim that SOTA DNNs will always overtrain when presented with enough data? I don't think so from the rest of the paper, but I'm not sure. I'm a little unclear what the authors mean by "generalization error" (or "generalization accuracy", which seems to only be used on p. 2). Z uses "generalization error = training error - test error". Check the appendix for consistency here too. Replace "phenomenon" with "phenomena", at least twice where appropriate. p. 3, first paragraph. I think the reference to the Hopfield model should be relegated to a footnote. The text "two or more such parameter holds more generally" is confusing; is it two, or is it two or more? What will I understand differently if I use more than two parameters? The next paragraph, starting with "Given these two identifications, which are novel to this work," seems odd, since we've just seen 7+ references and a claim that they have similar parameterizations, so it's unclear what's novel. Appendix A.5. "For non-linear dynamical systems... NNs from the 80s/90s or our VSDL model or realistic DNNs today .. there is simply no reason to expect this to be true." 
where "this" refers to "one can always choose a value of lambda to prevent overfitting, potentially at the expense of underfitting." I don't understand, and I also think this disagrees with the first full paragraph on p. 2. Is there some thermodynamic limit argument required here? The very next bullet states that x = argmin_x f(x) + lambda g(x) can prevent overfitting with large lambda. What's different? I'm overall not clear what's being implied here. Consider a modern DNN for classification. A network with all zero weights will have some empirical loss L(0). If I minimize, for the weights of a network w, L(w) + lambda ||w||^2, I have that L(w) + lambda ||w||^2 <= L(0) (assuming I can solve the optimization), and assuming L is non-negative, lambda ||w||^2 <= L(0), or ||w||^2 <= L(0) / lambda. So for very large lambda, I can drive ||w||^2 arbitrarily close to zero. How is this importantly different from the linear case? What am I missing? p. 3. "inability not to overfit." Avoid the double negative. Intro, last paragraph. Weird section order description, with ref to Section A coming before section 4. Footnote 2. "but it can be quite limiting." More detail needed. Limiting how? Footnotes 3 and 4. The text says there are "technical" and "non-technical" reasons, but 3 and 4 both seem technical to me. Appendix A.2. "on a randomly chosen subset of X." Is it really subset? Are we picking subsets uniformly at random?
iclr_2018_r11Q2SlRW
AUTO-CONDITIONED RECURRENT NETWORKS FOR EXTENDED COMPLEX HUMAN MOTION SYNTHESIS We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.
The problem of learning auto-regressive (data-driven) human motion models that have long-term stability is of ongoing interest. Steady progress is being made on this problem, and this paper adds to that. The paper is clearly written. The specific form of training (a fixed number of self-conditioned predictions, followed by a fixed number of ground-truth conditioned steps) is interesting for simplicity and its efficacy. The biggest open question for me is how it would compare to the equally simple stochastic version proposed by the scheduled sampling approach of [Bengio et al. 2015].
PROS: The paper provides a simple solution to a problem of interest to many.
CONS: It is not clear if it improves over something like scheduled sampling, which is a stochastic predecessor of the main idea introduced here. The "duration of stability" is a less interesting goal than actually matching the distribution of the input data.
The need to pay attention to the distribution-mismatch problem for sequence prediction problems has been known for a while. In particular, the DAGGER (see below) and scheduled sampling algorithms (already cited) target this issue, as does the addition of progressively increasing amounts of noise during training (Fragkiadaki et al.). Also see papers below on Professor Forcing, as well as "Learning Human Motion Models for Long-term Predictions" (concurrent work?), which uses annealing over dropout rates to achieve stable long-term predictions. DAGGER algorithm (2011): http://www.jmlr.org/proceedings/papers/v15/ross11a/ross11a.pdf "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning" Professor Forcing (NIPS 2016) http://papers.nips.cc/paper/6099-professor-forcing-a-new-algorithm-for-training-recurrent-networks.pdf Learning Human Motion Models for Long-term Predictions (2017) https://arxiv.org/abs/1704.02827 https://www.youtube.com/watch?v=PgJ2kZR9V5w
While the motions do not freeze, do the synthesized motion distributions match the actual data distributions? This is not clear, and would be relatively simple to evaluate. Is the motion generation fully deterministic? It would be useful to have probabilistic transition distributions that match those seen in the data. An interesting open issue (in motion, but also of course NLP domains) is that of how to best evaluate sequence-prediction models. The duration of "stable prediction" does not directly capture the motion quality.
Figure 1: Suggest to make u != v for the purposes of clarity, so that they can be more easily distinguished.
Data representation: Why not factor out the facing angle, i.e., rotation about the vertical axis, as done by Holden et al, and in a variety of previous work in general? The representation is already made translation invariant. Relatedly, in the Training section, data augmentation includes translating the sequence: "rotate and translate the sequence randomly". Why bother with the translation if the representation itself is already translation invariant?
The video illustrates motions with and without "foot alignment". However, no motivation or description of "foot alignment" is given in the paper.
The following comment need not be given much weight in terms of evaluation of the paper, given that the current paper does not use simulation-based methods. However, it is included for completeness. The survey of simulation-based methods for modeling human motions is not representative of the body of work in this area over the past 25 years. It may be more useful to reference a survey, such as "Interactive Character Animation Using Simulated Physics: A State-of-the-Art Review" (2012). An example of recent SOTA work for modeling dynamic motions from motion capture, including many highly dynamic motions, is "Guided Learning of Control Graphs for Physics-Based Characters" (2016). More recent work includes "Learning human behaviors from motion capture by adversarial imitation", "Robust Imitation of Diverse Behaviors", and "Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning", all of which demonstrate imitation of various motion styles to various degrees.
It is worthwhile acknowledging that the synthesized motions are still low quality, particularly when rendered with more human-like looking models, and readily distinguishable from the original motions. In this sense, they are not comparable to the quality of results demonstrated in recent works by Holden et al. or some other recent papers. However, the authors should be given credit for including some results with fully rendered characters, which much more readily exposes motion flaws.
The followup work on [Lee et al 2010 "Motion Fields"] is quite relevant: "Continuous character control with low-dimensional embeddings". In terms of usefulness, being able to provide some control over the motion output is a more interesting problem than being able to generate long uncontrolled sequences. A caveat is that the methods are not applied to large datasets.
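To make the training regime concrete, here is a minimal sketch (my own, not the authors' code) of the auto-conditioned idea as I read it: the network is fed its own predictions for u consecutive steps and the ground-truth frames for v steps, alternating throughout the sequence. The cell type, dimensions, phase ordering and hyper-parameters are illustrative assumptions.

import torch
import torch.nn as nn

class AutoConditionedRNN(nn.Module):
    def __init__(self, frame_dim=63, hidden_dim=512):
        super().__init__()
        self.cell = nn.LSTMCell(frame_dim, hidden_dim)
        self.decode = nn.Linear(hidden_dim, frame_dim)

    def forward(self, frames, u=5, v=5):
        """frames: (T, B, frame_dim) ground-truth motion sequence."""
        T, B, _ = frames.shape
        h = frames.new_zeros(B, self.cell.hidden_size)
        c = frames.new_zeros(B, self.cell.hidden_size)
        prev_pred, outputs = frames[0], []
        for t in range(T - 1):
            # alternate u self-conditioned steps with v ground-truth-conditioned steps
            use_own_output = (t % (u + v)) < u
            inp = prev_pred if use_own_output else frames[t]
            h, c = self.cell(inp, (h, c))
            prev_pred = self.decode(h)
            outputs.append(prev_pred)
        return torch.stack(outputs)  # predictions for frames[1:]

model = AutoConditionedRNN()
seq = torch.randn(60, 8, 63)                     # dummy mocap batch
loss = nn.functional.mse_loss(model(seq), seq[1:])
loss.backward()

A stochastic variant in the spirit of scheduled sampling would simply replace the deterministic u/v schedule with a per-step coin flip whose probability is annealed over training.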
iclr_2018_rJg4YGWRb
Recently popularized graph neural networks achieve state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model that removes all the intermediate fully-connected layers is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where the number of labeled examples is small. This in turn allows room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation network datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other.
The paper proposes a semi-supervised learning algorithm for graph node classification. The algorithm is inspired by Graph Neural Networks, and more precisely by the graph convolutional NNs recently proposed by Kipf et al. (2016). These NNs alternate 2 types of layers: non-linear projection and diffusion; the latter incorporates the graph relational information by constraining neighbor nodes to have close representations according to some "graph metrics". The authors propose a model with simplified projection layers and more sophisticated diffusion ones, incorporating a simple attention mechanism. Experiments are performed on citation textual datasets. Comparisons with published results on the same datasets are presented. The paper is clear and develops interesting ideas relevant to semi-supervised graph node classification. One finding is that simple models perform as well as more complex ones in this setting where labeled data is scarce. Another one is the importance of integrating relational information for classifying nodes when it is available. The attention mechanism itself is extremely simple, and learns one parameter per diffusion layer. One parameter weights correlations between node embeddings in a diffusion layer. I understand that you tried more complex attention mechanisms, but the one finally selected is barely an attention mechanism and rather a simple "importance" weight. This is not a criticism, but this makes the title somewhat misleading. The experiments show that the proposed model is state of the art for graph node classification. The performance is on par with some other recent models according to table 2. The other tests are also interesting, but the comparison could have been extended to other models, e.g. GCN. You advocate the role of the diffusion layers, and in the experiments you stack 3 to 4 such layers. It would be interesting to have indications on the trade-off between performance and the number of diffusion layers, and on the evolution of performance when adding such layers. The bibliography on semi-supervised learning in graphs for classification is light and should be enhanced. Overall this is an interesting paper with nice findings. The originality is however relatively limited in a field where many recent papers have been proposed, and the experiments need to be completed.
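To make concrete how small the resulting model is, here is a hedged sketch (my own reading, not the authors' code) of a linear classifier whose propagation layers each carry a single learnable scalar that weights similarities between neighbouring node embeddings; every name, dimension and design detail below is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalarAttentionPropagation(nn.Module):
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.tensor(1.0))  # one parameter per diffusion layer

    def forward(self, H, adj):
        Hn = F.normalize(H, dim=1)                   # cosine similarities between embeddings
        scores = self.beta * (Hn @ Hn.t())
        scores = scores.masked_fill(adj == 0, float("-inf"))  # respect the graph structure
        return torch.softmax(scores, dim=1) @ H      # weighted neighbourhood summary

class LinearGraphClassifier(nn.Module):
    def __init__(self, in_dim, n_classes, n_layers=3):
        super().__init__()
        self.project = nn.Linear(in_dim, n_classes)  # the single linear projection
        self.props = nn.ModuleList(ScalarAttentionPropagation() for _ in range(n_layers))

    def forward(self, X, adj):
        H = self.project(X)
        for prop in self.props:
            H = prop(H, adj)
        return H                                     # per-node class logits

# toy usage: 5 nodes, 4 features, 3 classes; adj includes self-loops
X, adj = torch.randn(5, 4), torch.eye(5)
logits = LinearGraphClassifier(4, 3)(X, adj)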
iclr_2018_Hk2aImxAb
MULTI-SCALE DENSE NETWORKS FOR RESOURCE EFFICIENT IMAGE CLASSIFICATION In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network's prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across "easier" and "harder" inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.
This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer. The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image. The multi-scale representation allows for better performance at early stages of the network. Finally, the dense connectivity helps reduce the negative effect that early classifiers have on the feature representation for the following layers. A thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation.
Pros:
- The presentation is clear and easy to follow.
- The structure of the network is clearly justified in section 4.
- The use of dense connectivity to avoid the loss of performance from using early-exit classifiers is very interesting.
- The evaluation in terms of anytime prediction and budgeted batch classification can represent real case scenarios.
- Results are very promising, with 5x speed-ups and same or better accuracy than previous models.
- The extensive experimentation shows that the proposed network is better than previous approaches under different regimes.
Cons:
- Results about the more efficient densenet* could be shown in the main paper
Additional Comments:
- Why did you use the logistic loss in training instead of the more common cross-entropy loss? Has this any connection with the final performance of the network?
- In fig. 5 (left), for completeness, I would also like to see results for DenseNet^MT and ResNet^MT
- In fig. 5 (left) I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in the section 5.1 anytime prediction results
- How is the budget in terms of Mul-Adds actually estimated?
I think that this paper presents a very powerful approach to reducing the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them. The experimental evaluation is complete and accurate.
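For concreteness, here is a hedged sketch (mine, not the authors' implementation) of how early-exit classifiers can be used for budgeted batch classification: each example leaves the network at the first intermediate classifier whose confidence clears a threshold. The threshold rule and the stage/head interface are assumptions, not the paper's exact procedure.

import torch
import torch.nn as nn

def budgeted_predict(x, stages, heads, threshold=0.9):
    """x: (B, C, H, W); stages/heads: per-exit feature blocks and classifiers."""
    B = x.shape[0]
    preds = torch.full((B,), -1, dtype=torch.long)
    done = torch.zeros(B, dtype=torch.bool)
    feats, label = x, None
    for stage, head in zip(stages, heads):
        feats = stage(feats)          # a real implementation would drop finished examples here
        conf, label = torch.softmax(head(feats), dim=1).max(dim=1)
        exit_now = (~done) & (conf >= threshold)
        preds[exit_now] = label[exit_now]
        done |= exit_now
        if done.all():
            break
    preds[~done] = label[~done]       # the last evaluated classifier decides the rest
    return preds

# toy usage with two trivial stages and heads
stages = [nn.Conv2d(3, 8, 3, padding=1), nn.Conv2d(8, 8, 3, padding=1)]
heads = [nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)) for _ in range(2)]
print(budgeted_predict(torch.randn(4, 3, 32, 32), stages, heads))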
iclr_2018_SJcKhk-Ab
CAN RECURRENT NEURAL NETWORKS WARP TIME? Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use ad hoc gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide quasiinvariance to general time transformations in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long term dependencies, with minimal implementation effort. Recurrent neural networks (e.g. (Jaeger, 2002)) are a standard machine learning tool to model and represent temporal data; mathematically they amount to learning the parameters of a parameterized dynamical system so that its behavior optimizes some criterion, such as the prediction of the next data in a sequence. Handling long term dependencies in temporal data has been a classical issue in the learning of recurrent networks. Indeed, stability of a dynamical system comes at the price of exponential decay of the gradient signals used for learning, a dilemma known as the vanishing gradient problem (Pascanu et al., 2012;Hochreiter, 1991;Bengio et al., 1994). This has led to the introduction of recurrent models specifically engineered to help with such phenomena. Use of feedback connections (Hochreiter & Schmidhuber, 1997) and control of feedback weights through gating mechanisms (Gers et al., 1999) partly alleviate the vanishing gradient problem. The resulting architectures, namely long short-term memories (LSTMs (Hochreiter & Schmidhuber, 1997;Gers et al., 1999)) and gated recurrent units (GRUs (Chung et al., 2014)) have become a standard for treating sequential data. Using orthogonal weight matrices is another proposed solution to the vanishing gradient problem, thoroughly studied in (Saxe et al., 2013;Le et al., 2015;Arjovsky et al., 2016;Wisdom et al., 2016;Henaff et al., 2016). This comes with either computational overhead, or limitation in representational power. Furthermore, restricting the weight matrices to the set of orthogonal matrices makes forgetting of useless information difficult. The contribution of this paper is threefold: • We show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models (Section 1). This provides a clean derivation of part of the popular LSTM and GRU architectures from first principles. In this framework, gate values appear as time contraction or time dilation coefficients, similar in spirit to the notion of time constant introduced in (Mozer, 1992). • From these insights, we provide precise prescriptions on how to initialize gate biases (Section 2) depending on the range of time dependencies to be captured. It has previously been advocated that setting the bias of the forget gate of LSTMs to 1 or 2 provides overall good performance (Gers & Schmidhuber, 2000;Jozefowicz et al., 2015). The viewpoint here explains why this is reasonable in most cases, when facing medium term dependencies, but fails when facing long to very long term dependencies. • We test the empirical benefits of the new initialization on both synthetic and real world data (Section 3). 
We observe substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate.
tl;dr:
- The paper has a really cool theoretical contribution.
- The experiments do not directly test whether the theoretical insight holds in practice; instead a derived method is tested on various benchmarks.
I must say that this paper has cleared up quite a few things for me. I have always been a skeptic wrt LSTM, since I myself did not fully understand when to prefer them over vanilla RNNs for reasons other than "they empirically work much better in many domains" and "they are less prone to vanishing gradients". Section 1 is a bliss: it provides a very useful candidate explanation of the conditions under which vanilla RNNs fail (or at least, do not efficiently generalise) in contrast to gated cells. I am sincerely happy about the write-up and will point many people to it. The major problem with the paper, in my eyes, is the lack of experiments specifically designed to test the hypothesis. Obviously, quite a bit of effort has gone into the experimental section. The focus, however, is comparison to the state of the art in terms of raw performance. That leaves me asking: are gated RNNs superior to vanilla RNNs if the data is warped? Well, I don't know now. I can only say that there is reason to believe so. I *really* do encourage the authors to go back to the experiments and see if they can come up with an experiment to test the main hypothesis of the paper. E.g. one could make synthetic warpings, apply them to any data set and test if things work out as expected. Such a result would in my opinion be of much more use than the tiny increment in performance that is the main output of the paper as of now, and which will be stomped by some other trick in the months to come. It would be a shame if such a nice theoretical insight got swept under the carpet because of that. E.g. today we hold [Pascanu 2013] dear not because of the proposed method, but because of the theoretical analysis.
Some minor points.
- The authors could use fewer footnotes, and try to incorporate them into the text or appendix.
- A table of results would be nice.
- Some choices of the experimental section seem arbitrary, e.g. the choice of optimiser and the decision not to use gradient clipping. In general, the evaluation of the hyperparameters is not rigorous.
- "abruplty" -> "abruptly" on page 5, 2nd paragraph
### References [Pascanu 2013] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." International Conference on Machine Learning. 2013.
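For readers who want to try it, here is a minimal sketch of the chrono initialization as I understand it from the paper: the forget-gate bias is drawn as log(U(1, T_max - 1)) and the input-gate bias is set to its negative, so the gates start out retaining information over the targeted range of dependencies. Zeroing the remaining biases is my own simplifying assumption.

import torch
import torch.nn as nn

def chrono_init_(lstm: nn.LSTM, t_max: int):
    H = lstm.hidden_size
    for layer in range(lstm.num_layers):
        bias_ih = getattr(lstm, f"bias_ih_l{layer}")
        bias_hh = getattr(lstm, f"bias_hh_l{layer}")
        with torch.no_grad():
            bias_ih.zero_()
            bias_hh.zero_()
            # PyTorch gate order is (input, forget, cell, output)
            b_f = torch.log(torch.rand(H) * (t_max - 2) + 1)  # log U(1, T_max - 1)
            bias_ih[H:2 * H] = b_f      # forget gate: remember by default
            bias_ih[0:H] = -b_f         # input gate: write sparingly by default
    return lstm

lstm = chrono_init_(nn.LSTM(input_size=10, hidden_size=128), t_max=784)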
iclr_2018_ByrZyglCb
ROBUSTNESS OF CLASSIFIERS TO UNIVERSAL PERTURBATIONS: A GEOMETRIC PERSPECTIVE Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.
The paper develops models which attempt to explain the existence of universal perturbations which fool neural networks — i.e., the existence of a single perturbation which causes a network to misclassify most inputs. The paper develops two models for the decision boundary: (a) a locally flat model, in which the decision boundary is modeled with a hyperplane and the normals to the hyperplanes are assumed to lie near a low-dimensional linear subspace; (b) a locally positively curved model, in which there is a positively curved outer bound for the collection of points which are assigned a given label. The paper works out a probabilistic analysis arguing that when either of these conditions obtains, there exists a fooling perturbation which affects most of the data. The theoretical analysis in the paper is straightforward, in some sense following from the definition. The contribution of the paper is to posit these two conditions which can predict the existence of universal fooling perturbations, and to argue experimentally that they occur in (some) neural networks of practical interest. One challenge in assessing the experimental claims is that practical neural networks are nonsmooth; the quadratic model developed from the Hessian is only valid very locally. This can be seen in some of the illustrative examples in Figure 5: there *is* a coarse-scale positive curvature, but this would not necessarily come through in a quadratic model fit using the Hessian. The best experimental evidence for the authors' perspective seems to be the fact that random perturbations from S_c misclassify more points than random perturbations constructed with the previous method. I find the topic of universal perturbations interesting, because it potentially tells us something structural (class-independent) about the decision boundaries constructed by artificial neural networks. To my knowledge, the explanation of universal perturbations in terms of positive curvature is novel. The paper would be much stronger if it provided an explanation of *why* there exists this common subspace of universal fooling perturbations, or even what it means geometrically that positive curvature obtains at every data point. Visually, these perturbations seem to have strong, oriented local high-frequency content — perhaps they cause very large responses in specific filters in the lower layers of a network, and conventional architectures are not robust to this? It would also be nice to see some visual representations of images perturbed with the new perturbations, to confirm that they remain visually similar to the original images.
iclr_2018_SyJS-OgR-
MULTI-LEVEL RESIDUAL NETWORKS FROM DYNAMICAL SYSTEMS VIEW Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.
I enjoyed reading the paper. This is a very well written paper; the authors propose a method for speeding up the training time of Residual Networks based on the dynamical systems view interpretation of ResNets. In general I have a positive opinion about the paper; however, I'd like to ask for some clarifications. I'm not fully convinced by the interpretation of Eq. 5: "… d is inversely proportional to the norm of the residual modules G(Yj)". Since F(Yj) is not a constant, I think that d is inversely proportional to ||G(Yj)||/||F(Yj)||; however, in the interpretation the dependence on ||F(Yj)|| is ignored. Could the authors comment on that? Section 4.1: "Each cycle itself can be regarded as a training process, thus we need to reset the learning rate value at the beginning of each training cycle and anneal the learning rate during that cycle." Is there any empirical evidence for this? What would happen if the learning rate is not reset at the beginning of each cycle? Questions with respect to the dynamical systems point of view: Eq. 4 assumes a small value of h. However, for ResNet there is no guarantee that h would be small (e.g. in Appendix C values between 0.25 and 1 are used). Would the authors be willing to comment on the importance of the value of h? In figure 1, pooling (strided convolutions) is not depicted between network stages. I have one question w.r.t. feature-map dimensionality changes inside a CNN: how does pooling (or strided convolution) fit into the dynamical systems view? Tables 3 and 4: I assume that the training time unit is a minute; I couldn't find this information in the paper. Is the batch size the same for all models (100 for CIFAR and 32 for STL-10)? I understand that the models with different #Blocks have different capacity; for clarity, would it be possible to add the number of parameters for each model? For the multilevel method, would it be possible to show intermediate results in Tables 3 and 4, e.g. at the end of cycles 1 and 2? I see these results in Figure 6; however, the plots are condensed and it is difficult to see the exact numbers at the end of each cycle. The citation (E, 2017) seems to be wrong; could the authors check it?
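To fix notation for the questions above, here is a small sketch (mine, not the authors' code) of the dynamical-systems reading of a residual block as one forward-Euler step Y_{j+1} = Y_j + h F(Y_j), with an explicit step size h; the particular form of F and the value of h are placeholders.

import torch
import torch.nn as nn

class EulerResidualBlock(nn.Module):
    def __init__(self, channels, h=1.0):
        super().__init__()
        self.h = h                                   # step size of the explicit Euler scheme
        self.F = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, y):
        return y + self.h * self.F(y)                # Y_{j+1} = Y_j + h * F(Y_j)

# stacking n blocks integrates the underlying ODE over a "time" interval of n * h
stage = nn.Sequential(*[EulerResidualBlock(16, h=0.25) for _ in range(8)])
out = stage(torch.randn(2, 16, 32, 32))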
iclr_2018_BJgVaG-Ab
An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large number of interactions with the environment in order to master a skill. The learned skill usually generalizes poorly across domains and re-training is often necessary when presented with a new task. We present a framework that combines techniques in formal methods with hierarchical reinforcement learning (HRL). The set of techniques we provide allows for the convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards using any RL methods and is able to construct new skills from existing ones without additional learning. We evaluate the proposed methods in a simple grid world simulation as well as simulation on a Baxter robot.
The paper argues for structured task representations (in TLTL) and shows how these representations can be used to reuse learned subtasks to decrease learning time. Overall, the paper is sloppily put together, so it's a little difficult to assess the completeness of the ideas. The problem being solved is not literally the problem of decreasing the amount of data needed to learn tasks, but a reformulation of the problem that makes it unnecessary to relearn subtasks. That's a good idea, but problem reformulation is always hard to justify without returning to a higher level of abstraction to justify that there's a deeper problem that remains unchanged. The paper doesn't do a great job of making that connection. The idea of using task decomposition to create intrinsic rewards seems really interesting, but does not appear to be explored in any depth. Are there theorems to be had? Is there a connection to subtasks rewards in earlier HRL papers? The lack of completeness (definitions of tasks and robustness) also makes the paper less impactful than it could be. Detailed comments: "learn hierarchical policies" -> "learns hierarchical policies"? "n games Mnih et al. (2015)Silver et al. (2016),": The citations are a mess. Please proof read. "and is hardly reusable" -> "and are hardly reusable". "Skill composition is the idea of constructing new skills with existing skills (" -> "Skill composition is the idea of constructing new skills out of existing skills (". "to synthesis" -> "to synthesize". "set of skills are" -> "set of skills is". "automatons" -> "automata". "with low-level controllers can" -> "with low-level controllers that can". "the options policy π o is followed until β(s) > threshold": I don't think that's how options were originally defined... beta is generally defined as a termination probability. "The translation from TLTL formula FSA to" -> "The translation from TLTL formula to FSA"? "four automaton states Qφ = {q0, qf , trap}": Is it three or four? "learn a policy that satisfy" -> "learn a policy that satisfies". "HRL, We introduce the FSA augmented MDP" -> "HRL, we introduce the FSA augmented MDP.". " multiple options policy separately" -> " multiple options policies separately"? "Given flat policies πφ1 and πφ2 that satisfies " -> "Given flat policies πφ1 and πφ2 that satisfy ". "s illustrated in Figure 3 ." -> "s illustrated in Figure 2 ."? ", we cam simply" -> ", we can simply". "Figure 4 <newline> ." -> "Figure 4.". ", disagreement emerge" -> ", disagreements emerge"? The paper needs to include SOME definition of robustness, even if it just informal. As it stands, it's not even clear if larger values are better or worse. (It would seem that *more* robustness is better than less, but the text says that lower values are chosen.) "with 2 hidden layers each of 64 relu": Missing word? Or maybe a comma? "to aligns with" -> "to align with". " a set of quadratic distance function" -> " a set of quadratic distance functions". "satisfies task the specification)" -> "satisfies the task specification)". Figure 4: Tasks 6 and 7 should be defined in the text someplace. "current frame work i" -> "current framework i". " and choose to follow" -> " and chooses to follow". " this makes" -> " making". "each subpolicies" -> "each subpolicy".
iclr_2018_SkiCjzNTZ
We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015a), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken based on empirical results. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck and the phase transition out of it and the following increase in gradient variances (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more.
In this paper, a number of very strong (even extraordinary) claims are made:
* The abstract promises "a framework to understand the unprecedented performance and robustness of deep neural networks using field theory."
* Page 8 states that "This is a first attempt to describe a neural network with a scalar quantum field theory."
* Page 2 promises the use of the "Goldstone theorem" (no less) to understand phase transitions in deep learning.
* It also claims that many "seemingly different experimental results can be explained by the presence of these zero eigenvalue weights."
* Three important results are stated as "theorems", with a statement like "Deep feedforward networks learn by breaking symmetries" proven in 5 lines, with no formal mathematics.
These are extraordinary claims, but when reaching page 5, one sees that the basis of these claims seems to be the Lagrangian of a simple phi-4 theory, and Fig. 1 shows the standard behaviour of the so-called Mexican hat in physics, the basis of the second-order transition. Given that physicists have been working on neural networks for more than three or four decades, I am surprised that this would be enough to solve all these problems! I tried to understand these many results, but I am afraid I cannot really understand or see them. In many cases, the explanation seems to be a vague analogy. These are not without interest, and maybe there is indeed something deep in this paper, but it is so far hidden by the hype. Still, I fail to see how the presence of phase transitions and of negative directions in the landscape is a new phenomenon, or how it explains all the stated phenomenology. Besides, quite a lot is already known about the landscape of these problems. Maybe I am indeed missing something, but I clearly suspect the authors are simply overselling physics results. I have been wrong many times, but I believe that the authors should make their claims more precise, and clarify the relation between their results and both the physics AND statistics literature, or better, with the theoretical physics literature applied to learning, which is, astonishingly, absent in the paper.
About the content: The main problem for me is that the whole construction using field theory seems to be used to advocate for the appearance of a phase transition in neural nets and in learning. This raises three comments:
(1) Do we really need to use quantum field theory for this? I do not see what should be quantum here (despite the very vague remarks on page 12, "WHY QUANTUM FIELD THEORY?").
(2) This is not new. Phase transitions in learning in neural nets have been discussed for about 40 years; see for instance all the pioneering work of Sompolinsky et al., or the nice review in https://arxiv.org/abs/1710.09553. In no particular order, phase transitions and symmetry breaking are discussed in:
* "Statistical mechanics of learning from examples", Phys. Rev. A 45, 6056 – Published 1 April 1992
* "The statistical mechanics of learning a rule", Rev. Mod. Phys. 65, 499 – Published 1 April 1993
* "Phase transitions in the generalization behaviour of multilayer neural networks", http://iopscience.iop.org/article/10.1088/0305-4470/28/16/010/meta
* Note that some of these results are now rigorous, as shown in "Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models", https://arxiv.org/abs/1708.03395
* The landscape of these problems has been studied quite extensively, see for instance "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", https://arxiv.org/abs/1406.2572
(3) There is nothing specific to deep neural nets, or to neural nets at all, about this. Negative directions in the Hessian in learning problems appear in matrix and tensor factorization, where phase transitions are well understood (even rigorously, see for instance https://arxiv.org/abs/1711.05424), or in problems such as unsupervised learning, as e.g.:
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.2174
https://journals.aps.org/pre/pdf/10.1103/PhysRevE.50.1766
Here are additional comments:
PAGE 1:
* "It has been discovered that the training process ceases when it goes through an information bottleneck (ShwartzZiv & Tishby, 2017)". While this paper indeed makes a nice suggestion, I would not call it a discovery yet, as this has never been shown on a large network. Besides, another paper in the conference is claiming exactly the opposite, see: "On the Information Bottleneck Theory of Deep Learning". This is still a subject of discussion.
* "In statistical terms, a quantum theory describes errors from the mean of random variables." Last time I studied quantum theory, it was a theory that aims to explain physical behaviour at the molecular, atomic and sub-atomic levels, using either the wave function (Schrödinger) or the matrix operator formalism (Heisenberg) (or, if you want, the path integral formalism of Feynman). It is certainly NOT a theory that describes errors from the mean of random variables. This is, I believe, the field of "statistics" or "probability" for correlated variables. It is certainly used in physics, and heavily so both in statistical physics and in quantum theory, but this is not what the theory is about in the first place. Besides, there is little quantum in this paper; I think most of what the authors say applies to a statistical field theory ( https://en.wikipedia.org/wiki/Statistical_field_theory ).
* "In the limit of a continuous sample space, the quantum theory becomes a quantum field theory." Again, what is quantum about all this? This is true for a field theory, as well as for continuous theories of, say, mechanics, fracture, etc.
PAGE 2:
* "Using a scalar field theory we show that a phase transition must exist towards the end of training based on empirical results." So it is a scalar classical field theory after all. This sounds a little bit less impressive than a quantum field theory. Note that the fact that phase transitions arise in learning, and in a statistical theory applied to any learning process, is an old topic, with a classical literature. The authors might be interested in the review "The statistical mechanics of learning a rule", Rev. Mod. Phys. 65, 499 – Published 1 April 1993.
PAGE 8:
* "In this work we solved one of the most puzzling mysteries of deep learning by showing that deep neural networks undergo spontaneous symmetry breaking." I am afraid I fail to see what is so mysterious about this, nor what the authors showed about it. In any case, gradient descent breaks symmetry spontaneously in many systems, including phi-4, the Ising model or (in learning problems) the community detection problem (see e.g. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047). I am afraid I miss what is new there...
* "This is a first attempt to describe a neural network with a scalar quantum field theory." Given that there seems to be little quantum in the paper, I fail to see the relevance of the statement. Secondly, I believe that field theory has been used, many times and at greater length, both for statistical and dynamical problems in neural nets, see e.g.:
* http://iopscience.iop.org/article/10.1088/0305-4470/27/6/016/meta
* https://arxiv.org/pdf/q-bio/0701042.pdf
* http://www.lps.ens.fr/~derrida/PAPIERS/1987/gardner-zippelius-87.pdf
* http://iopscience.iop.org/article/10.1088/0305-4470/21/1/030/meta
* https://arxiv.org/pdf/cond-mat/9805073.pdf
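For reference, the textbook scalar phi-4 potential I have in mind when I say the basis of the claims seems to be a simple phi-4 theory (this is the standard form, not necessarily the paper's exact Lagrangian):

% for mu^2 < 0 the symmetric point phi = 0 becomes unstable and the minima move to +/- v (the "Mexican hat")
\[
  V(\varphi) \;=\; \tfrac{1}{2}\,\mu^{2}\varphi^{2} \;+\; \tfrac{\lambda}{4}\,\varphi^{4},
  \qquad
  \varphi_{\min} \;=\; \pm v \;=\; \pm\sqrt{-\mu^{2}/\lambda}
  \quad (\mu^{2} < 0,\ \lambda > 0),
\]
% which is exactly the classical second-order, spontaneous-symmetry-breaking transition discussed above.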
iclr_2018_rJTGkKxAZ
One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks. For example, generating an image at a low resolution and then learning to refine that into a high resolution image often improves results substantially. Here we explore a novel strategy for decomposing generation for complicated objects in which we first generate latent variables which describe a subset of the observed variables, and then map from these latent variables to the observed space. We show that this allows us to achieve decoupled training of complicated generative models and present both theoretical and experimental results supporting the benefit of such an approach.
This paper proposes a method called Locally Disentangled Factors for hierarchical latent variable generative models, which can be seen as a hierarchical variant of Adversarially Learned Inference (Dumoulin et al. 2017). The idea seems to be a valid variant; however, the quality of the paper is not good. The introduction and related works sections read well, but the rest of the paper has not been written well. More specifically, the content in section 3 and the experiment section is messy. Also, the experiments have not been conducted thoroughly, and the results and the interpretation of the results are not complete.
Introduction: Although in the introduction the authors discuss a lot of work on hierarchical latent variable models and some motivating examples, after reading it the reviewer has absolutely no idea what the paper is about (except hierarchical latent variable models), what the motivation is, what the general idea is, and what the contribution of the paper is. Only after carefully reading the detailed implementation in section 3.1 and section 5 did I realize that what the authors are actually doing is to use N variables to model N different parts of the observation, and one higher-level variable to model the N variables. The paper should state much more precisely what the idea is throughout, instead of causing confusion and ambiguity.
Section 3:
1. The concepts of "disentanglement" and "local connectivity" are really unnecessary and confusing. First, the whole paper and the experiments have nothing to do with "local connectivity". Even though you might have the intention to propose the idea, you didn't show any support for it. Second, what you actually did is to use a top-level variable to generate N latent variables. That could hardly be called "disentanglement". The mean-field factorization in (Kingma & Welling 2013) is on the inference side (Q not P), and as found in the literature, it could not achieve disentanglement.
2. In section 3.2, I understand that you want to say the hierarchical model may require less data. But here you are really off-topic. It would be much better if you related this to the proposed method, and stated how it may require less data.
3. Section 3.1 is more important, and is really the major part of your method. Therefore, it needs more extensive discussion and emphasis.
Experiments: This section is really bad.
1. Since in the introduction and related works there are already so many hierarchical latent variable models listed, the baseline methods should really not be just a vanilla GAN, but hierarchical latent variable models, such as the Hierarchical VAE, the Variational Ladder Autoencoder in (Zhao et al. 2017), ALI (not hierarchical, but it should be a baseline) in (Dumoulin et al. 2017), etc.
2. Since currently there is still no standard way to evaluate the quality of image generation, by giving only the inception score we really cannot judge whether it is good or not. You need to give more metrics, or generation examples, reconstruction examples, and so on. And, equally importantly, compare and discuss the results. Do not just leave them there.
3. For section 5.2, similar problems as above exist. The baseline methods might be insufficient. The paper only shows several examples, and the reviewer cannot draw any conclusion from them. Nor does the paper discuss any of the results.
4. Section 5.3, second to the last line, typo: "This is shown in 7". Also, this result is not available.
5. More importantly, some experiments should be conducted to explicitly show the validity of the proposed hierarchical latent model idea. Show that it exists and works by some explicit experiment.
Another suggestion the reviewer would like to make is that, instead of proposing the general framework in section 2, it would be better to propose the hierarchical model in the context of section 3.1. That is, instead of saying z_0 -> z_1 ->... ->x, what the paper and experiments are really about is z_0 -> z_{1,1}, z_{1,2} ... z_{1,N} -> x_{1}, x_{2},...,x_{N}, where z_{1,1...N} are distinct variables. How section 2 is related to the learning of this might be by concatenating these N distinct variables into one (if that's what you mean). Talking about the joint distribution and inference process in this way might align more with your idea. Also, the paper actually only deals with 2 levels. It seems to me that it's meaningless to generalize to n levels in section 2, since you do not have any support for it.
In conclusion, the reviewer thinks that this work is incomplete and is not worth publishing in its current state.
==============================================================
The reviewer read the response from the authors. However, I do not think the authors resolved the issues I mentioned. And I am still not convinced by the quality of the paper. I would say the idea is not bad, but the paper is still not well-prepared. So I do not change my decision.
iclr_2018_B1EPYJ-C-
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
The authors examine several techniques that lead to low-communication updates during distributed training in the context of Federated Learning (FL). Under the setup of FL, it is assumed that training takes place over edge-device-like compute nodes that have access to subsets of data (potentially of different size), and each node can potentially be of different computational power. Most importantly, in the FL setup, communication is the bottleneck; e.g., a global model is to be trained by local updates that occur on mobile phones, and communication cost is high due to a slow up-link. The authors present techniques that are of similar flavor to quantized+sparsified updates. They distinguish their approaches into 1) structured updates and 2) sketched updates. For 1) they examine a low-rank version of distributed SGD where, instead of communicating full-rank model updates, the updates are factored into two low-rank components, and only one of them is optimized at each iteration, while the other can be randomly sampled. They also examine random masking, i.e., a sparsification of the updates that retains a random subset of the entries of the gradient update (e.g., by zeroing out a random subset of elements). This latter technique is similar to randomized coordinate descent. Under the theme of sketched updates, they examine quantized and sparsified updates with the property that in expectation they are identical to the true updates. The authors specifically examine random subsampling (which is the same as random masking, with different weights) and probabilistic quantization, where each element of a gradient update is randomly quantized to b bits. The major contribution of this paper is its experimental section, where the authors show the effects of training with structured or sketched updates, in terms of reduced communication cost, and the effect on the training accuracy. They present experiments on several data sets, and observe that among all the techniques, random quantization can yield a significant reduction of up to 32x in communication with minimal loss in accuracy. My main concern about this paper is that although the presented techniques work well in practice, some of the algorithms tested are similar to algorithms that have already been shown to work well in practice. For example, it is unclear how the performance of the presented quantization algorithms compares to, say, QSGD [1] and TernGrad [2]. Although the authors cite QSGD, they do not directly compare against it in experiments. As a matter of fact, one of the issues of the presented quantization techniques (the fact that random rotations might be needed when the dynamic range of elements is large, or when the updates are nearly sparse) is easily resolved by algorithms like QSGD and TernGrad that respect (and promote) sparsity in the updates. A more minor comment is that it is unclear that averaging is the right way to combine locally trained models for nonconvex problems. Recently, it has been shown that averaging can be suboptimal for nonconvex problems, e.g., a better averaging scheme can be used in its place [3]. However, I would not worry too much about that issue, as the same techniques presented in this paper apply to any weighted linear averaging algorithm. Another minor comment: the legends in the figures are tiny, and really hard to read. Overall this paper examines interesting structured and randomized low-communication updates for distributed FL, but lacks some important experimental comparisons. 
[1] QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks https://arxiv.org/abs/1610.02132 [2] TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning https://arxiv.org/abs/1705.07878 [3] Parallel SGD: When does averaging help? https://arxiv.org/abs/1606.07365
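To illustrate how simple the unbiased probabilistic quantization discussed above is, here is a hedged sketch (mine, not the authors' code); the per-tensor range and the absence of random rotations are simplifying assumptions.

import numpy as np

def stochastic_quantize(update, bits=2, rng=None):
    """Stochastically round each entry to one of 2**bits levels; unbiased in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = update.min(), update.max()
    if hi == lo:
        return update.copy()                        # constant tensor: nothing to quantize
    levels = 2 ** bits - 1
    scaled = (update - lo) / (hi - lo) * levels     # position in [0, levels]
    floor = np.floor(scaled)
    quantized = floor + (rng.random(update.shape) < scaled - floor)  # round up w.p. frac
    return lo + quantized / levels * (hi - lo)

grad = np.random.default_rng(1).normal(size=10_000)
q = stochastic_quantize(grad, bits=2, rng=np.random.default_rng(0))
print(abs(q.mean() - grad.mean()))                  # small: the estimator is unbiased
print(len(np.unique(q)))                            # at most 2**2 = 4 distinct values to transmit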
iclr_2018_Sy1f0e-R-
Despite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs' performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of how to evaluate the evaluation metrics. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect.
This paper introduces a comparison between several approaches for evaluating GANs. The authors consider the setting of pre-trained image models as generic representations of generated and real images to be compared. They compare the evaluation methods based on five criteria termed discriminability, mode collapsing and mode dropping, sample efficiency, computation efficiency, and robustness to transformation. This paper has some interesting insights and a few ideas of how to validate an evaluation method. The topic is an important one and a very difficult one. However, the work has some problems in rigor and justification, and the conclusions are overstated in my view.
Pros
- Several interesting ideas for evaluating evaluation metrics are proposed
- The authors tackle a very challenging subject
Cons
- It is not clear why GANs are the only generative model considered
- Unprecedented visual quality as compared to other generative models has brought the GAN to prominence, and yet this is not really a big factor in this paper.
- The evaluations rely on using a pre-trained ImageNet model as a representation. The authors point out that different architectures yield similar results for their analysis; however, it is not clear how the biases of the learned representations affect the results. The use of learned representations needs more rigorous justification.
- The evaluation of the discriminability criterion (the score should increase as the proportion of real images in a real/generated mix increases) is interesting, but it is not convincing as the sole evaluation of "discriminativeness" and seems like something that can be gamed.
- The authors implicitly contradict the argument of Theis et al. against monolithic evaluation metrics for generative models, but this is not strongly supported.
Several references I suggest:
https://arxiv.org/abs/1706.08500 (FID score)
https://arxiv.org/abs/1511.04581 (MMD as evaluation)
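As a concrete example of the kind of metric being validated, here is a hedged sketch (my own simplified version, not the authors' protocol) of an unbiased kernel MMD estimate computed on feature embeddings of real and generated samples; the Gaussian kernel and its bandwidth are assumptions.

import numpy as np

def gaussian_kernel(a, b, sigma):
    d2 = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=10.0):
    """x: (n, d) real features, y: (m, d) generated features."""
    n, m = len(x), len(y)
    kxx, kyy, kxy = (gaussian_kernel(x, x, sigma),
                     gaussian_kernel(y, y, sigma),
                     gaussian_kernel(x, y, sigma))
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))   # drop i == j terms
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 64))   # stand-ins for pre-trained CNN features
fake = rng.normal(0.5, 1.0, size=(500, 64))
print(mmd2_unbiased(real[:250], real[250:]))  # same distribution: close to 0
print(mmd2_unbiased(real[:250], fake[:250]))  # shifted distribution: clearly larger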
iclr_2018_rkhCSO4T-
In recent work, it was shown that combining multi-kernel based support vector machines (SVMs) can lead to near state-of-the-art performance on an action recognition dataset (the HMDB-51 dataset). In the present work, we show that combining distributed Gaussian Processes with multi-stream deep convolutional neural networks (CNN) alleviates the need to augment a neural network with handcrafted features. In contrast to prior work, we treat each deep convolutional neural network as an expert wherein the individual predictions (and their respective uncertainties) are combined into a Product of Experts (PoE) framework.
This paper, although titled "Distributed Non-Parametric Deep and Wide Networks", is mostly about fusion of existing models for action recognition on the HMDB51 dataset. The fusion is performed with Gaussian Processes, where each of the i=1,..,4 inspected models (TSN-Inception RGB, TSN-Inception Flow, ResNet-LSTM RGB, ResNet-LSTM Flow) returns a (\mu_i, \sigma_i), which are then combined in a product-of-experts formulation, optimized w.r.t. maximum likelihood. In its current form this paper is unfit for submission.

First, the novelty of the paper is not clear. It is stated that a framework is introduced for independent deep neural networks. However, this framework, Gaussian Processes, already exists. Also, it is stated that the method can classify video snippets that have heterogeneity regarding camera angle, video quality, pose, etc. This also characterizes all other methods that report similar results on the same dataset. The third claim is that deep networks are combined with non-parametric Bayesian models. That is a good claim, which is also shared by the papers at http://bayesiandeeplearning.org/. The last claim is that model averaging that takes uncertainty into account is shown to be useful. That is not true: the only results are the final accuracies per GP model, and there is no experiment that directly reports any results regarding uncertainty and its contribution to the final accuracy.

Second, it is not clear that the proposed method is the one responsible for the reported improvements in the experiments. Currently, the training set is split into 7 sets, and each set is used to train 4 models, totalling 28 GP experts. It is unclear what is newly learned by the 7 GP expert models for the 7 splits. Why is this better than training a single model on the whole dataset? Also, why is the difference bigger between ResNet Fusion-1 and ResNet SVM-SingleKernel?

Third, the method reports results only on a single dataset, HMDB51, which is also rather small. Deriving conclusions from results on a single dataset is suboptimal. Other datasets that can be considered are (mini) Kinetics or Charades.

Fourth, the paper does not have the structure of a scientific publication. It rather looks like an unofficial technical report. There is no related work. The methodology section reads more like a tutorial of existing methods. And the discussion section is larger than any other section in the paper.

All in all, there might be some interesting ideas in the paper, specifically how to integrate GPs with deep nets. However, at the current stage the submission is not ready for publication.
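For context on the product-of-experts fusion described above, here is a minimal NumPy sketch of the standard product of independent Gaussian experts (a precision-weighted combination). The paper's exact formulation, e.g. a generalized PoE with per-expert weights, may differ; this only illustrates the basic mechanism.

import numpy as np

def product_of_gaussian_experts(mus, sigmas):
    """Combine Gaussian expert predictions (mu_i, sigma_i) into one Gaussian.

    The product of Gaussians is Gaussian: its precision is the sum of the
    experts' precisions, and its mean is the precision-weighted average of
    the experts' means, so confident experts dominate the fused prediction.
    """
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined_var = 1.0 / precisions.sum()
    combined_mu = combined_var * (precisions * mus).sum()
    return combined_mu, np.sqrt(combined_var)

# E.g., four stream-specific experts predicting the same class score:
mu, sigma = product_of_gaussian_experts(mus=[0.8, 0.6, 0.9, 0.7],
                                        sigmas=[0.2, 0.5, 0.3, 0.4])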
iclr_2018_BkiIkBJ0b
Deep reinforcement learning (DRL) algorithms have demonstrated progress in learning to find a goal in challenging environments. As the title of the paper by Mirowski et al. (2016) suggests, one might assume that DRL-based algorithms are able to "learn to navigate" and are thus ready to replace classical mapping and path-planning algorithms, at least in simulated environments. Yet, from experiments and analysis in this earlier work, it is not clear what strategies are used by these algorithms in navigating the mazes and finding the goal. In this paper, we pose and study this underlying question: are DRL algorithms doing some form of mapping and/or path-planning? Our experiments show that the algorithms are not memorizing the maps of mazes at the testing stage but, rather, at the training stage. Hence, the DRL algorithms fall short of qualifying as mapping or path-planning algorithms with any reasonable definition of mapping. We extend the experiments in Mirowski et al. (2016) by separating the set of training and testing maps and by a more ablative coverage of the space of experiments. Our systematic experiments show that the NavA3C+D1D2L algorithm, when trained and tested on the same maps, is able to choose the shorter paths to the goal. However, when tested on unseen maps the algorithm utilizes a wall-following strategy to find the goal without doing any mapping or path planning.
Science is about reproducible results and it is very commendable of scientists to hold their peers accountable for their work by verifying their results. It is also necessary to inspect claims that are made by researchers to avoid the community straying in the wrong direction. However, any critique needs to be done properly, by 1) attending to the actual claims that were made in the first place, 2) reproducing the results in the same way as in the original work, 3) avoiding the introduction of false claims based on a misunderstanding of terminology, and 4) extensively researching the literature before trying to affirm that a general method (here, Deep RL) cannot solve certain tasks.

This paper is a critique of deep reinforcement learning methods for learning to navigate in 3D environments, and seems to focus intensively on one specific paper (Mirowski et al, 2016, “Learning to Navigate in Complex Environments”) and one of the architectures (NavA3C+D1D2L) from that paper. It conducts an extensive assessment of the methods in the critiqued paper but does not introduce any alternative method. For this reason, I had to carefully re-read the critiqued paper to be able to assess the validity of the arguments made in this submission and to evaluate its merit from the point of view of the quality of the critique. The (Mirowski et al, 2016) paper shows that a neural network-based agent with LSTM-based memory and auxiliary tasks such as depth map prediction can learn to navigate in fixed environments (3D mazes) with a fixed goal position (what they call a “static maze”), and in fixed mazes with changing goal positions (what they call “environments with dynamic elements” or “random goal mazes”).

This submission claims that:
[a] “[based on the critiqued paper] one might assume that DRL-based algorithms are able to 'learn to navigate' and are thus ready to replace classical mapping and path-planning algorithms”,
[b] “following training and testing on constant map structures, when trained and tested on the same maps, [the NavA3C+D1D2L algorithm] is able to choose the shorter paths to the goal”,
[c] “when tested on unseen maps the algorithm utilizes a wall-following strategy to find the goal without doing any mapping or path planning”,
[d] “this state-of-the-art result is shown to be successful on only one map, which brings into question the repeatability of the results”,
[e] “Do DRL-based navigation algorithms really 'learn to navigate'? Our results answer this question negatively.”
[f] “we are the first to evaluate any DRL-based navigation method on maps with unseen structures”

The paper also conducts an extensive analysis of the performance of a different version of the NavA3C+D1D2L algorithm (without velocity inputs, which probably makes learning path integration much more difficult), in the same environments but with unjustified changes (e.g., constant velocities and a different action space) and with a different reward structure (incorporating a negative reward for wall collisions). While the experimental setup does not match (Mirowski et al, 2016), thereby invalidating claim [d], the experiments are thorough and do show that that architecture does not generalize to unseen mazes. The use of attention heat maps is interesting. The main problem, however, is that this submission seems to completely misrepresent the intent of (Mirowski et al, 2016) by using a straw man argument, and makes a rather unacademic and unsubstantiated accusation of lack of repeatability of the results.
Regarding the former, I could not find any claim that the methods in (Mirowski et al, 2016) learn mapping and path planning in unseen environments, which could support claim [a]. More worryingly, when observing in claim [c] that the method of (Mirowski et al, 2016) may not generalize to unseen environments, the authors of this submission seem to confuse navigation, cartography and SLAM, and attribute to that work claims that were never made in the first place, using a straw man argument. Navigation is commonly defined as the goal-driven control of an agent, following localization, and is a broad skill that involves the determination of position and direction, with or without a map of the environment (Fox, 1998, “Markov Localization: A Probabilistic Framework for Mobile Robot Localization and Navigation”). This widely accepted definition of navigation does not preclude being limited to known environments only.

Regarding repeatability, claim [d] is contradicted in section 5, where the authors demonstrate that the NavA3C+D1D2L algorithm does achieve a reduction in latency to goal in 8 out of 10 experiments on random-goal, static-map and random or static spawns. The experiments in section 5.3 are conducted in simple but previously unseen maps and cannot logically contradict the results of (Mirowski et al, 2016), which were achieved by training on static maps such as their “I-maze”. Moreover, claim [d] about repeatability is also invalidated by the fact that the experiments described in the paper use different observations (no velocity inputs), a different action space, and a different reward structure, with no empirical evidence to support these changes. It seems, as the authors also claim in [b], that the work of (Mirowski et al, 2016), which was about navigation in known environments, is actually repeatable.

Additionally, some statements made by the authors are demonstrably untrue. First, the authors claim that they are the first to train DRL agents in all random mazes [f], but this has already been shown in at least two publications (Mnih et al, 2016 and Jaderberg et al, 2016). Second, the title of the submission, “Do Deep Reinforcement Learning Algorithms Really Learn to Navigate”, makes a broad statement [e] that cannot be logically invalidated by only one particular set of experiments on a particular model and environment, particularly since it directly targets one specific paper (out of several recent papers that have addressed navigation) and one specific architecture from that paper, NavA3C+D1D2L (incidentally, not the best-performing one, according to Table 1 in that paper). Why did the authors not cite and consider (Parisotto et al, 2017, “Neural Map: Structured Memory for Deep Reinforcement Learning”), which explicitly claims that its method is “capable of generalizing to environments that were not seen during training”? It seems that the authors need to revise both their bibliography and their logical reasoning: one cannot invalidate a broad set of algorithms for a broad goal simply by taking a specific example and showing that it does not fit a particular interpretation of navigation *in previously unseen environments*.
iclr_2018_H1BO9M-0Z
Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledge. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.
This paper presents a lifelong learning method for learning word embeddings. Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings. The general problem space here -- how to leverage embeddings across several domains in order to improve performance in a given domain -- is important and relevant to ICLR. However, this submission needs to be improved in terms of clarity and its experiments. In terms of clarity, the paper has a large number of typos (I list a few at the end of this review) and more significantly, at several points in the paper is hard to tell what exactly was done and why. When presenting algorithms, starting with an English description of the high-level goal and steps of the algorithm would be helpful. What are the inputs and outputs of the meta-learner, and how will it be used to obtain embeddings for the new domain? The paper states the purpose of the meta learning is "to learn a general word context similarity from the first m domains", but I was never sure what this meant. Further, some of the paper's pseudocode includes unexplained steps like "invert by domain index" and "scanco-occurrence". In terms of the experiments, the paper is missing some important baselines that would help us understand how well the approach works. First, besides the GloVe common crawl embeddings used here, there are several other embedding sets (including the other GloVe embeddings released along with the ones used here, and the Google News word2vec embeddings) that should be considered. Also, the paper never considers concatenations of large pre-trained embedding sets with each other and/or with the new domain corpus -- such concatenations often give a big boost to accuracy, see : "Think Globally, Embed Locally—Locally Linear Meta-embedding of Words", Bollegala et al., 2017 https://arxiv.org/pdf/1709.06671.pdf That paper is not peer reviewed to my knowledge so it is not necessary to compare against the new methods introduced there, but their baselines of concatenation of pre-trained embedding sets should be compared against in the submission. Beyond trying other embeddings, the paper should also compare against simpler combination approaches, including simpler variants of its own approach. What if we just selected the one past domain that was most similar to the new domain, by some measure? And how does the performance of the technique depend on the setting of m? Investigating some of these questions would help us understand how well the approach works and in which settings. Minor: Second paragraph, GloVec should be GloVe "given many domains with uncertain noise for the new domain" -- not clear what "uncertain noise" means, perhaps "uncertain relevance" would be more clear The text refers to a Figure 3 which does not exist, probably means Figure 2. I didn't understand the need for both figures, Figure 1 is almost contained within Figure 2 When m is introduced, it would help to say that m < n and justify why dividing the n domains into two chunks (of m and n-m domains) is necessary. "from the first m domain corpus" -> "from the first m domains"? "may not helpful" -> "may not be helpful" "vocabularie" -> "vocabulary" "system first retrieval" -> "system first retrieves" COMMENTS ON REVISIONS: I appreciate the authors including the new experiments against concatenation baselines. The concatenation does fairly comparably to LL in Tables 3&4. 
LL wins by a bit more in Table 2. Given these somewhat close/inconsistent wins, it would help the paper to include an explanation of why and under what conditions the LL approach will outperform concatenation.
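To make the concatenation baseline discussed in this review concrete, here is a minimal Python sketch; the dictionary-based interface and the zero-padding for words missing from one embedding set are my own illustrative assumptions, not the exact setup of the cited papers.

import numpy as np

def concat_embeddings(vocab, *embedding_dicts):
    """Concatenate several pre-trained embedding sets into one vector per word.

    Each embedding_dict maps word -> 1-D numpy array; words missing from a
    set are padded with zeros for that set's dimensions. This is the simple
    concatenation baseline the review asks for, not the paper's LL method.
    """
    dims = [len(next(iter(d.values()))) for d in embedding_dicts]
    combined = {}
    for w in vocab:
        parts = [d.get(w, np.zeros(dim)) for d, dim in zip(embedding_dicts, dims)]
        combined[w] = np.concatenate(parts)
    return combined

# Hypothetical usage, where glove_cc and domain_vecs are word -> vector dicts:
# emb = concat_embeddings(vocab, glove_cc, domain_vecs)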
iclr_2018_SJzMATlAZ
Clustering high-dimensional datasets is hard because interpoint distances become less informative in high-dimensional spaces. We present a clustering algorithm that performs nonlinear dimensionality reduction and clustering jointly. The data is embedded into a lower-dimensional space by a deep autoencoder. The autoencoder is optimized as part of the clustering process. The resulting network produces clustered data. The presented approach does not rely on prior knowledge of the number of ground-truth clusters. Joint nonlinear dimensionality reduction and clustering are formulated as optimization of a global continuous objective. We thus avoid discrete reconfigurations of the objective that characterize prior clustering algorithms. Experiments on datasets from multiple domains demonstrate that the presented algorithm outperforms state-of-the-art clustering schemes, including recent methods that use deep networks.
As the authors state, the proposed DCC is very similar to RCC-DR (Shah & Koltun, 2007). The only difference of (3) from RCC-DR is the decoding part, which is replaced by an autoencoder instead of the linear transformation used in RCC-DR. The authors claim that there are three major differences. However, due to the highly nonconvex properties of both formulations, the last two differences hardly support the advantages of the proposed DCC compared with RCC-DR, because the solutions obtained by both optimization approaches are local solutions, unless the authors can claim that the gradient-based solver is better than the alternating approach in RCC-DR. Hence, DCC is just a simple extension of RCC-DR.

In Section 3.2, how does the optimization algorithm handle the equality constraints in (5)? It is unclear why the existing autoencoder solver can be used to solve (3) or (5). It seems that the first term in (5) corresponds to the objective of the autoencoder, but the two added terms lead to a different objective with respect to the variables y. It would be better to clarify the correctness of the optimization algorithm.

The authors claim that the proposed method avoids the discrete reconfiguration of the objective that characterizes prior clustering algorithms, and that it does not rely on a priori knowledge of the number of ground-truth clusters. However, this seems not to be true, since the graph construction at every epoch depends on the initial parameter delta_2, and the graph is constructed such that f_{i,j}=1 if the distance is less than delta_2. As a result, delta_2 is a fixed threshold for graph construction, so it is indirectly related to the number of clusters generated. In the experiments, the authors set it as the mean of the bottom 1% of the pairwise distances in E at initialization, and the clustering assignment is given by the connected components of the last graph. The final results might be sensitive to this parameter.

Many terms in the paper are not well explained. For example, in (1), theta is treated as a set of parameters to optimize, but what is theta used for? Is Omega related to the encoder and decoder parameters of the autoencoder? What is the scaled Geman-McClure function? Any reference? Why should this estimator be used?

From the visualization results in Figure 1, it is interesting to see that K-means++ achieves much better results on the space learned by DCC than on that learned by SDAE, according to Table 2. In Figure 1, the embedding by SDAE (Figure 1(b)) seems more suitable for a k-means-like algorithm than the one by DCC (Figure 1(c)). That is presumably the reason why connected components, not k-means, are used for cluster assignment in DCC. The relation between the results in Table 2 and Figure 1 might be interesting to investigate.
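To illustrate the threshold-based graph construction and connected-component assignment that this review questions, here is a minimal SciPy sketch. The way the threshold is set from the bottom percentile of pairwise distances follows the review's description only loosely; the actual DCC pipeline also updates the embedding jointly, which this sketch omits.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

def threshold_graph_clusters(embeddings, percentile=1.0):
    """Assign clusters as connected components of a thresholded distance graph.

    Edges connect pairs whose distance is below a threshold taken from the
    bottom `percentile` percent of pairwise distances (the role delta_2 plays
    at initialization). The number of recovered clusters depends implicitly
    on this threshold, which is the sensitivity concern raised above.
    """
    dists = squareform(pdist(embeddings))
    threshold = np.percentile(dists[np.triu_indices_from(dists, k=1)], percentile)
    adjacency = csr_matrix(((dists < threshold) & (dists > 0)).astype(int))
    n_clusters, labels = connected_components(adjacency, directed=False)
    return n_clusters, labels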
iclr_2018_BkoCeqgR-
This is an empirical paper which constructs color invariant networks and evaluates their performance on a realistic data set. The paper studies the simplest possible case of color invariance: invariance under pixel-wise permutation of the color channels. Thus the network is aware not of the specific color of an object, but of its colorfulness. The data set introduced in the paper consists of images showing crashed cars, from which ten classes were extracted. An additional annotation was done which labeled whether the car shown was red or non-red. The networks were evaluated by their performance on the classification task. With the color annotation we altered the color ratios in the training data and analyzed the generalization capabilities of the networks on the unaltered test data. We further split the test data into red and non-red cars and did a similar evaluation. It is shown in the paper that a pixel-wise ordering of the rgb-values of the images performs better or at least similarly for small deviations from the true color ratios. The limits of these networks are also discussed.
The paper proposes and evaluates a method to make neural networks for image recognition color invariant. The contribution of the paper is:
- some proposed methods to extract a color-invariant representation
- an experimental evaluation of the methods on the CIFAR-10 dataset
- a new dataset, "crashed cars"
- an evaluation of the best method from the CIFAR-10 experiments on the new dataset

Pros:
- The crashed cars dataset is interesting. The authors have definitely found an interesting, untapped source of interesting images.

Cons:
- The authors name their method "order network", but what they propose is not really part of the network; it consists of simple preprocessing steps applied to the input of the network.
- The paper is incomplete without the appendices. In fact, the paper refers to specific figures in the appendix in the main text.
- The authors define color invariance as being invariant to which specific color an object in an image has, e.g., whether a car is red or green, but they don't consider color invariance in the broader context of color changes caused by lighting, shading, etc. Also, the proposed methods aim to preserve the "colorfulness" of a color. This is also problematic: while the proposed method works for a car that is green or a car that is red, it will fail for a car that is black (or white), because in both cases the "colorfulness" is not relevant. Note that this is specifically interesting in the context of the task at hand (cars), with many cars being white, grey (silver), or black.
- The difference in the results in Table 1 could well come from the fact that in all of the invariant methods except for "ord" the input is a WxHx1 matrix, but for "ord" and "cifar" the input is a WxHx3 matrix. This probably leads to more parameters in the convolutions.
- The results in Figure 4: it is very unlikely that the differences reported are actually significant. It appears that all methods perform approximately the same, and the authors pick a specific line (25k steps) as the relevant one, in which the RGB input space performs best. The proposed method does not lead to any relevant improvement.
- Figures 6/7 are very hard to read. I am still not sure what exactly they are trying to say.

Minor comments:
- section 1: "called for is network" -> called for is a network
- section 1.1: And and -> And
- section 1.1: Appendix -> Appendix C
- section 2: Their exists many -> There exist many
- section 2: these transformation -> these transformations
- section 2: what does "the wallpaper groups" refer to?
- section 2: are a groups -> are groups
- section 3.2: reference to a non-existing figure
- section 3.2/Training: 2499999 iterations = steps?
- section 3.2/Training: longer as suggested -> longer than suggested
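For clarity, here is a minimal NumPy sketch of the channel-sorting preprocessing as I read the "ord" variant from the review (sorting each pixel's channel values so the representation is invariant to channel permutations); the paper's actual implementation may differ in details such as normalization.

import numpy as np

def channel_sort(image):
    """Sort each pixel's channel values, e.g. (R, G, B) -> (min, mid, max).

    The output is invariant to any permutation of the color channels while
    keeping a 3-channel (W x H x 3) input, which is the point the review
    makes about "ord" retaining more input channels than the other variants.
    """
    return np.sort(image, axis=-1)

# A red pixel (200, 30, 30) and a green pixel (30, 200, 30) map to the same
# sorted triple (30, 30, 200): the network sees "colorfulness", not hue.
print(channel_sort(np.array([[[200, 30, 30], [30, 200, 30]]])))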
iclr_2018_rkmtTJZCb
Much recent research has been devoted to video prediction and generation, yet a lot of previous work has been focused on short-scale time horizons. The hierarchical video prediction method by Villegas et al. (2017) is an example of a state of the art method for long term video prediction. However, their method has limited applicability in practical settings as it requires a ground truth pose (e.g., poses of joints of a human) at training time. This paper presents a long term hierarchical video prediction model that does not have such a restriction. We show that the network learns its own higher level structure (e.g., pose-equivalent hidden variables) that works better in cases where the ground truth pose does not fully capture all of the information needed to predict the next frame. This method gives sharper results than other video prediction methods which do not require a ground truth pose, and its efficiency is shown on the Humans 3.6M and Robot Pushing datasets.
The paper presents a method for hierarchical future-frame prediction in monocular videos. It builds upon the recent method of Villegas et al. 2017, which generates future RGB frames in two stages: in the first stage, it predicts a human body pose sequence, then it conditions on the pose sequence to predict RGB content, using an image analogy network. The current paper does not constrain the first-stage (high-level) prediction to be human poses; instead it can be any high-level representation. Thus, the method does not require human annotations.

The method has the following three sub-networks:
1) An image encoder that, given an RGB image, predicts a deep feature encoding.
2) An LSTM predictor that, conditioned on the last observed frame's encoding, predicts the future high-level structure p_t. Once enough frames are generated, though, it conditions on its own predictions.
3) A visual analogy network (VAN) that, given the predicted high-level structure p_t, predicts the pixel image I_t by applying the transformation from the first to the t-th frame, as computed by the vector subtraction of the corresponding high-level encodings (2nd equation of the paper). The VAN is trained to preserve parallelogram relationships in the joint RGB image and high-level structure embedding.

The authors experiment with many different neural network connectivities, e.g., not constraining the predicted high-level structure to match the encoder's outputs, constraining the predicted high-level structure to match the encoder's output (EPEV), and training the VAN and predictor together so that the VAN can tolerate mistakes of the predictor. Results are shown on the H3.6M and Robot Pushing datasets, and are compared against the method of Villegas et al. (INDIVIDUAL). The conclusion seems to be that not constraining the predicted high-level structure to match the encoder's output, but biasing the encoder's output in the observed frames to represent ground-truth pose information, gives the best results.

Pros:
1) Interesting alternative training schemes are tested.

Cons:
1) Numerous English mistakes, e.g., ''an intelligent agents", ''we explore ways generate", etc.
2) Equations are not numbered (and thus it is hard to refer to them). E.g., I do not understand the first equation: shouldn't e_{t-1} always be fixed and equal to the encoding of the last observed (not predicted) frame? Then the subscript cannot be t-1.
3) In H3.6M, the results are only qualitative.

The conclusions from the paper are uncertain, partly due to the difficulty of evaluating video prediction results. Given the difficulty of assessing the experimental results quantitatively (one possibility is asking a set of people which completion they think is the most plausible), and given the limited novelty of the paper, though interesting alternative architectures are tried out, it may not be suitable to be part of the ICLR proceedings as a conference paper.
iclr_2018_S1347ot3b
Vector semantics, especially sentence vectors, have recently been used successfully in many areas of natural language processing. However, relatively little work has explored the internal structure and properties of spaces of sentence vectors. In this paper, we will explore the properties of sentence vectors by studying a particular real-world application: Automatic Summarization. In particular, we show that cosine similarity between sentence vectors and document vectors is strongly correlated with sentence importance and that vector semantics can identify and correct gaps between the sentences chosen so far and the document. In addition, we identify specific dimensions which are linked to effective summaries. To our knowledge, this is the first time specific dimensions of sentence embeddings have been connected to sentence properties. We also compare the features of different methods of sentence embeddings. Many of these insights have applications in uses of sentence embeddings far beyond summarization.
This paper examines a number of sentence and document embedding methods for automatic summarization. It pairs a number of recent sentence embedding algorithms (e.g., Paragraph Vectors and Skip-Thought Vectors) with several simple summarization decoding algorithms for sentence selection, and evaluates the resulting output summary on DUC 2004 using ROUGE, based on the general intuition that the selected summary should be similar to the original document in the vector space induced by the embedding algorithm. It further provides a number of analyses of the sentence representations as they relate to summarization, and of other aspects of the summarization process, including the decoding algorithm.

The paper is well written and easy to understand. I appreciate the effort to apply these representation techniques in an extrinsic task. However, the significance of the results may be limited, because the paper does not respond to a long line of work in the summarization literature that has addressed many of the same points. In particular, I worry that the paper may in part be reinventing the wheel, in that many of the results are quite incremental with respect to previous observations in the field.

Greedy decoding and non-redundancy: many methods in summarization use greedy decoding algorithms. For example, SumBasic (Nenkova and Vanderwende, 2005) and HierSum (Haghighi and Vanderwende, 2009) are two such papers. This specific topic has been thoroughly expanded on by the work on greedy decoding for submodular objective functions in summarization (Lin and Bilmes, 2011), as well as many papers which focus on how to optimize for both informativeness and non-redundancy (Kulesza and Taskar, 2012). The idea that the summary should be similar to the entire document is known as centrality. Some papers that exploit or examine that property include (Nenkova and Vanderwende, 2005; Louis and Nenkova, 2009; Cheung and Penn, 2013).

Another possible reading of the paper is that its novelty lies in the evaluation of sentence embedding models, specifically. However, these methods were not designed for summarization, and I don't see why they should necessarily work well for this task out of the box, with simple decoding algorithms and without fine-tuning. Also, the ROUGE results are so far from the state of the art that I'm not sure what the value of analyzing this suite of techniques is.

In summary, I understand that this paper does not attempt to produce a state-of-the-art summarization system, but I find it hard to understand how it contributes to our understanding of future progress in the summarization field. If the goal is to use summarization as an extrinsic evaluation of sentence embedding models, there needs to be better justification that this is a good idea when there are so many other issues in content selection that are not due to sentence embedding quality but which affect summarization results.

References:
Nenkova and Vanderwende, 2005. The impact of frequency on summarization. Tech report.
Haghighi and Vanderwende, 2009. Exploring content models for multi-document summarization. NAACL-HLT 2009.
Lin and Bilmes, 2011. A class of submodular functions for document summarization. ACL-HLT 2011.
Kulesza and Taskar, 2012. Learning Determinantal Point Processes.
Louis and Nenkova, 2009. Automatically evaluating content selection in summarization without human models. EMNLP 2009.
Cheung and Penn, 2013. Towards Robust Abstractive Multi-Document Summarization: A Caseframe Analysis of Centrality and Domain. ACL 2013.
Other notes: The acknowledgements seem to break double-blind reviewing.
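As a concrete reference point for the centrality-plus-greedy-decoding recipe the review contrasts the paper with, here is a minimal NumPy sketch; the redundancy penalty and budget are my own illustrative choices, not the submission's exact decoder.

import numpy as np

def greedy_centrality_summary(sent_vecs, doc_vec, budget=4, redundancy_penalty=0.5):
    """Greedily select sentences by cosine similarity to the document vector,
    discounting sentences that are similar to ones already selected."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    selected = []
    while len(selected) < min(budget, len(sent_vecs)):
        best, best_score = None, -np.inf
        for i, v in enumerate(sent_vecs):
            if i in selected:
                continue
            score = cos(v, doc_vec)  # centrality term
            if selected:             # simple non-redundancy term
                score -= redundancy_penalty * max(cos(v, sent_vecs[j]) for j in selected)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Hypothetical usage with sentence vectors stacked as rows of a matrix:
# summary_indices = greedy_centrality_summary(sentence_matrix, document_vector)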
iclr_2018_ryup8-WCW
MEASURING THE INTRINSIC DIMENSION OF OBJECTIVE LANDSCAPES
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (corresponding to a low-dimensional subspace within the original parameter space), and then training the original network while restricting the projections of its parameters to lie within this subspace. Performance on this subspace is then evaluated relative to that over the full parameter space (the baseline). As an empirical standard, the authors focus on the subspace dimension that achieves a performance of 90% of the baseline. The authors then test out their measure of intrinsic dimensionality for fully-connected networks and convolutional networks, for several well-known datasets, and draw some interesting conclusions.

Pros:
* This paper continues the recent research trend towards a better characterization of neural networks and their performance. The authors show a good awareness of the recent literature, and to the best of my knowledge, their empirical characterization of the number of latent parameters is original.
* The characterization of the number of latent variables is an important one, and their measure does perform in a way that one would intuitively expect. For example, as reported by the authors, when training a fully-connected network on the MNIST image dataset, shuffling pixels does not result in a change in their intrinsic dimensionality. For a convolutional network, the observed 3-fold rise in intrinsic dimension is explained by the authors as due to the need to accomplish the classification task while respecting the structural constraints of the convnet.
* The proposed measures seem very practical: training on random projections uses far fewer parameters than in the original space (the baseline), and the cost of determining the intrinsic dimensionality would presumably be only a fraction of the cost of this baseline training.
* Except for the occasional typo or grammatical error, the paper is well-written and organized. The issues are clearly identified, for the most part (but see below...).

Cons:
* In the main paper, the authors perform experiments and draw conclusions without taking into account the variability of performance across different random projections. Variance should be taken into account explicitly, in presenting experimental results and in the definition and analysis of the empirical intrinsic dimension itself. How often does a random projection lead to a high-quality solution, and how often does it not?
* The authors are careful to point out that training in restricted subspaces cannot lead to an optimal solution for the full parameter domain unless the subspace intersects the optimal solution region (which in general cannot be guaranteed). In their experiments (FC networks of varying depths and layer widths for the MNIST dataset), between projected and original solutions achieving 90% of baseline performance, they find an order-of-magnitude gap in the number of parameters needed. This calls into question the validity of random projection as an empirical means of categorizing the intrinsic dimensionality of a neural network.
* The authors then go on to propose that compression of the network be achieved by random projection to a subspace of dimensionality greater than or equal to the intrinsic dimension.
However, I don't think that they make a convincing case for this approach. Again, variation is the difficulty: two different projective subspaces of the same dimensionality can lead to solutions that are extremely different in character or quality. How then can we be sure that our compressed network can be reconstituted into a solution of reasonable quality, even when its dimensionality greatly exceeds the intrinsic dimension? * The authors argue for a relationship between intrinsic dimensionality and the minimum description length (MDL) of their solution, in that the intrinsic dimensionality should serve as an upper bound on the MDL. However they don't formally acknowledge that there is no standard relationship between the number of parameters and the actual number of bits needed to represent the model - it varies from setting to setting, with some parameters potentially requiring many more bits than others. And given this uncertain connection, and given the lack of consideration given to variation in the proposed measure of intrinsic dimensionality, it is hard to accept that "there is some rigor behind" their conclusion that LeNet is better than FC networks for classification on MNIST because its empirical intrinsic dimensionality score is lower. * The experimental validation of their measure of intrinsic dimension could be made more extensive. In the main paper, they use three image datasets - MNIST, CIFAR-10 and ImageNet. In the supplemental information, they report intrinsic dimensions for reinforcement learning and other training tasks on four other sets. Overall, I think that this characterization does have the potential to give insights into the performance of neural networks, provided that variation across projections is properly taken into account. For now, more work is needed. ==================================================================================================== Addendum: The authors have revised their paper to take into account the effect of variation across projections, with results that greatly strengthen their results and provide a much better justification of their approach. I'm satisfied too with their explanations, and how they incorporated them into their revised version. I've adjusted my rating of the paper accordingly. One point, however: the revisions seem somewhat rushed, due to the many typos and grammatical errors in the updated sections. I would like to encourage the authors to check their manuscript once more, very carefully, before finalizing the paper. ====================================================================================================
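To make the subspace-training procedure under discussion concrete, here is a minimal PyTorch sketch of a layer whose full parameter vector is restricted to a random d-dimensional affine subspace, theta = theta_0 + P theta_d, with only theta_d trained. The initialization scale, the column normalization of P, and the dense (rather than sparse or structured) projection are my own simplifications, not the paper's exact implementation.

import torch

class SubspaceLinear(torch.nn.Module):
    """Linear layer trained only through d subspace coordinates theta_d."""

    def __init__(self, in_features, out_features, subspace_dim):
        super().__init__()
        n_params = in_features * out_features + out_features
        self.in_features, self.out_features = in_features, out_features
        # Frozen random offset theta_0 and random projection P.
        self.register_buffer("theta0", torch.randn(n_params) * 0.01)
        P = torch.randn(n_params, subspace_dim)
        self.register_buffer("P", P / P.norm(dim=0, keepdim=True))
        # Only the low-dimensional coordinates receive gradients.
        self.theta_d = torch.nn.Parameter(torch.zeros(subspace_dim))

    def forward(self, x):
        theta = self.theta0 + self.P @ self.theta_d
        split = self.in_features * self.out_features
        W = theta[:split].view(self.out_features, self.in_features)
        return torch.nn.functional.linear(x, W, theta[split:])

Sweeping subspace_dim and noting where validation accuracy first reaches roughly 90% of the unconstrained baseline reproduces, in miniature, the intrinsic-dimension estimate; repeating the sweep over several random draws of P would address the variance concern raised in the review.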
iclr_2018_rybAWfx0b
Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.
This paper presents a simple but effective approach to utilizing language model information in a seq2seq framework. The experimental results show improvements for both the baseline and adaptation scenarios.

Pros: The approach is adapted from deep fusion, but the results are promising, especially for the off-domain setup. The analysis is also well motivated regarding why Cold Fusion outperforms deep fusion.

Cons:
(1) I have some questions about the baseline. Why is the decoder a single layer while the LM has 2 layers? I suspect the LM may add something because of this. For my own seq2seq models, a 2-layer decoder is always better than a 1-layer one. Also, what is the HMM/DNN/CTC baseline? Since they use an internal dataset, it is hard to know how good the seq2seq numbers are. The authors also didn't compare with a rescoring method.
(2) It would be more interesting to test it on more standard speech corpora, for example, SWB (conversational) and LibriSpeech (read speech). Then it would be easier to reproduce the results and measure the quality of the model.
(3) This paper only reports results on speech recognition. It would be more interesting to test it on more areas, e.g., machine translation.

Missing citation: in (https://arxiv.org/pdf/1706.02737.pdf), Section 3.3, they also pre-trained an RNN-LM on a more standard speech corpus. The authors also need to compare with this type of shallow fusion.

Updates: https://arxiv.org/pdf/1712.01769.pdf (Google's end-to-end system) uses a 2-layer LSTM decoder. https://arxiv.org/abs/1612.02695, https://arxiv.org/abs/1707.07413 and https://arxiv.org/abs/1506.07503 are small tasks. The Battenberg et al. paper (https://arxiv.org/abs/1707.07413) uses seq2seq as a baseline and didn't show any combined results of different numbers of decoder layers vs. different LM integration methods. My point is how a stronger decoder affects the results with different LM integration methods. In the paper, the comparison is still only against deep fusion with a one-layer decoder. Also, why is shallow fusion only compared with the CTC model? I suspect a deep decoder + shallow fusion could already provide good results; or is the gain additive? Thanks a lot for adding the LibriSpeech results. But why cite the Wav2Letter paper (instead of referring to a peer-reviewed paper)? The Wav2Letter paper didn't compare with any baseline on LibriSpeech (probably because LibriSpeech isn't a common dataset, but at least the Kaldi baseline is there). In short, I still think this is a good paper but still slightly below the acceptance threshold.
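For reference, here is a minimal sketch of the shallow-fusion scoring rule the review asks to compare against (combining decoder and external LM log-probabilities at each decoding step). The interpolation weight is an illustrative value, and Cold Fusion itself works differently, feeding the LM's state into the decoder during training rather than interpolating scores at inference time.

import numpy as np

def shallow_fusion_step(seq2seq_logprobs, lm_logprobs, lm_weight=0.3):
    """Per-step shallow fusion over the output vocabulary:
    score(y) = log p_s2s(y | x, y_<t) + lm_weight * log p_LM(y | y_<t)."""
    return seq2seq_logprobs + lm_weight * lm_logprobs

# In beam search one would then keep, e.g., np.argsort(-scores)[:beam_size]
# hypotheses at each step, using these fused scores.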
iclr_2018_rkaqxm-0b
Answering compositional questions requiring multi-step reasoning is challenging for current models. We introduce an end-to-end differentiable model for interpreting questions, which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a knowledge graph, together with a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituents, culminating in a grounding for the complete sentence which is an answer to the question. For example, to interpret 'not green', the model will represent 'green' as a set of entities and 'not' as a trainable ungrounded vector, and then use this vector to parametrize a composition function to perform a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent. We show the model can learn to represent a variety of challenging semantic operators, such as quantifiers, negation, disjunctions and composed relations on a synthetic question answering task. The model also generalizes well to longer sentences than seen in its training data, in contrast to LSTM and RelNet baselines. We will release our code.
This paper proposes training a question answering model from answers only and a KB, by learning latent trees that capture the syntax and learn the semantics of words, including referential terms like "red" and also compositional operators like "not". I think this model is elegant, beautiful and timely. The authors do a good job of explaining it clearly. I like the modules of composition, which seem to make very intuitive sense for the "algebra" that is required, and the parsing algorithm is clean. However, I think that the evaluation is lacking, and in some sense the model exposes the weakness of the dataset that it uses for evaluation. I have 2.5 major issues with the paper and a few minor comments.

Parsing:
* The authors don't really say what the base case is for \Psi that scores tokens (unless I missed it; if it is indeed missing it really needs to be added) and only provide the recursive case. From that I understand that the only features they use are whether a certain word makes sense in a certain position of the rule application in the context of the question. While these features are based on Durrett et al.'s neural syntactic parser, it seems like a pretty weak signal to learn from. This makes me wonder: how does the parser learn whether one parse is better than another? Only based on this signal? It makes me suspicious that the distribution of language is not very ambiguous, and that as long as you can construct a tree in some context you can do it in almost any other context. This is probably because the CLEVR dataset was generated mostly using templates and does not really consist of natural utterances produced by people. Of course, many people have published on CLEVR in spite of its language limitations, but I was a bit surprised that only these features are enough to solve the problem completely, and this makes me curious how hard it would be to reverse-engineer the way the language was generated with a context-free mechanism similar to how the data was produced.
* Related to that is the fact that the score of a certain type t for a span (i,j) is the sum over all possible rule applications, rather than a max, which again means that there is no competition between different parse trees that result in the same type for a single span. Can the authors say something about what the parser learns? Does it learn to extract clear parse trees from the noise? What is the distribution of rules in those sums? Is there some rule that is usually preferred over others? It seems like there is a loss of information in the sum, and it is unclear what its effect is in the paper.

Evaluation:
* Related to that is indeed the fact that they use CLEVR only. There is now the Cornell NLVR dataset, which is more challenging from a language perspective, and it would be great to have an evaluation there as well. Also, the authors only compare to 3 baselines, where 2 don't even see the entire KB, so the only "real" baseline is Relation Net.
* It is worth noting that Relation Net is reported to get 95.5 accuracy while the authors have 89.4. They use a subset, so this might be the reason, but I am not sure how they compared to Relation Net exactly. Did they re-tune parameters once they had the new dataset? This could make a difference in the final accuracy and cause an unfair advantage.
* I would really appreciate more analysis of the trees that one gets. Are sub-trees interpretable?
Can one trace the process of composition? It would be really nice if one could do that. The authors have a figure of a purported tree, but where does this tree come from? From the model? From the authors?

Scalability:
* How much of a problem would it be to scale this? Will this work in larger domains? It seems they compute an attention score over every entity and also over a matrix that is quadratic in the number of entities, so this could be very problematic if the number of entities is large. Once one moves to larger KBs it might become hard to maintain full differentiability, which is one of the main selling points of the paper.

Minor comments:
* I think the phrase "attention" is a bit confusing: I thought of a distribution over entities at first.
* The feature function is not super clearly written, I think; perhaps clarify in the text a bit more what it does.
* I did not get how the denotation that is based on a specific rule application t_1 + t_2 --> t works. Is it obtained by looking at the grounding that results from that rule application?
* The authors say that the Neural Enquirer and Neural Symbolic Machines produce flat programs; that is not really true, as the programs are just a linearized form of a tree, so there is nothing very flat about them in my opinion.

Overall, I really enjoyed reading the paper, but I was left wondering whether the fact that it works so well mostly attests to the way the data was generated, and I am still wondering how easy it would be to make this work for more natural language or when the KB is large.
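To illustrate the kind of denotation algebra the review refers to (sets of entities composed by operators such as "not"), here is a toy NumPy example. The paper's actual composition modules are learned and parameterized by ungrounded word vectors, which this sketch deliberately omits.

import numpy as np

# Toy soft denotations: each phrase denotes a membership vector over a small
# universe of 5 entities, and composition operators act elementwise.
green = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # entities that are green
cube  = np.array([1.0, 1.0, 0.0, 0.0, 1.0])   # entities that are cubes

not_green     = 1.0 - green                   # complement ("not green")
green_cube    = green * cube                  # intersection ("green cube")
green_or_cube = np.maximum(green, cube)       # union ("green or cube")
count_answer  = green_cube.sum()              # e.g. "how many green cubes?"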
iclr_2018_ryiAv2xAZ
TRAINING CONFIDENCE-CALIBRATED CLASSIFIERS FOR DETECTING OUT-OF-DISTRIBUTION SAMPLES
The problem of detecting whether a test sample is from in-distribution (i.e., training distribution by a classifier) or out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, the state-of-art deep neural networks are known to be highly overconfident in their predictions, i.e., do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces samples from out-of-distribution less confident by the classifier and the second one is for (implicitly) generating most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
This paper proposes a new method for detecting in- vs. out-of-distribution samples. Most existing approaches for this deal with detecting out-of-distribution samples at *test time* by augmenting the input data and/or temperature-scaling the softmax, and applying a simple classification rule based on the output. This paper proposes a different approach (which could be combined with these methods) based on a new training procedure. The authors propose to train a generator network in combination with the classifier and an adversarial discriminator. The generator is trained to produce images that (1) fool a standard GAN discriminator and (2) have high entropy (as enforced with the pull-away term from the EBGAN). The classifier is trained not only to maximize classification accuracy on the real training data but also to output a uniform distribution for the generated samples. The model is evaluated on CIFAR-10 and SVHN, where several out-of-distribution datasets are used in each case. Performance gains are clear with respect to the baseline methods.

This paper is clearly written, proposes a simple model and seems to outperform current methods. One thing missing is a discussion of how this approach is related to semi-supervised learning approaches using GANs, where a generative model produces extra data points for the classifier/discriminator. I have some clarifying questions below:
- Figure 4 is unclear: does "Confidence loss with original GAN" refer to the method where the classifier is pretrained and then "Joint confidence loss" is with joint training? What does "Confidence loss (KL on SVHN/CIFAR-10)" refer to?
- Why does the joint training improve the ability of the model to generalize to out-of-distribution datasets not seen during training?
- Why is the pull-away term necessary, and how does the model perform without it? Most GAN models are able to train stably without such explicit terms as the pull-away term or minibatch discrimination. Is the proposed model unstable without the pull-away term?
- How does this compare with a method whereby, instead of pushing the fake samples' softmax distribution to be uniform, the model is simply trained to classify them as an additional "out of distribution" class? This exact approach has been used to do semi-supervised learning with GANs [1][2]. More generally, could the authors comment on how this approach is related to these semi-supervised approaches?
- Did you try combining the classifier and discriminator into one model as in [1][2]?

[1] Semi-Supervised Learning with Generative Adversarial Networks (https://arxiv.org/abs/1606.01583)
[2] Good Semi-supervised Learning that Requires a Bad GAN (https://arxiv.org/abs/1705.09783)
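To make the classifier-side objective concrete, here is a minimal PyTorch sketch of a cross-entropy loss plus a KL-to-uniform term on generated samples, as described in the review. The weighting beta and the reduction are my own assumptions, and the generator/pull-away side of the joint objective is omitted.

import math
import torch
import torch.nn.functional as F

def confidence_loss(logits_real, labels_real, logits_generated, beta=1.0):
    """Cross-entropy on in-distribution data plus a term pushing the
    classifier's predictive distribution on generated samples towards uniform.

    For C classes, KL(Uniform || p) = -log C - (1/C) * sum_c log p_c,
    which is what the second term computes per generated sample.
    """
    num_classes = logits_real.size(1)
    ce = F.cross_entropy(logits_real, labels_real)
    log_probs_gen = F.log_softmax(logits_generated, dim=1)
    kl_to_uniform = -log_probs_gen.mean(dim=1) - math.log(num_classes)
    return ce + beta * kl_to_uniform.mean()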
iclr_2018_S1sRrN-CW
We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks: neural KB embedding and tensor decomposition. In this work, we theoretically analyze the neural embedding framework and subsequently connect it with tensor based embedding. Specifically, we show that in neural KB embedding the two commonly adopted optimization solutions, margin-based and negative sampling losses, are closely related to each other. We also reach the closed-form tensor that is implicitly approximated by popular neural KB approaches, revealing the underlying connection between neural and tensor based KB embedding models. Grounded in the theoretical results, we further present a tensor decomposition based framework KBTD to directly approximate the derived closed form tensor. Under this framework, the neural KB embedding models, such as NTN (Socher et al., 2013), TransE (Bordes et al., 2013), Bilinear (Jenatton et al., 2012), and DISTMULT (Yang et al., 2015), are unified into a general tensor optimization architecture. Finally, we conduct experiments on the link prediction task in WordNet and Freebase, empirically demonstrating the effectiveness of the KBTD framework.
The paper proposes a new method to train knowledge base embeddings using a least-squares loss. For this purpose, the paper introduces a reweighting scheme for the entries in the original adjacency tensor. The reweighting is derived from an analysis of the cross-entropy loss. In addition, the paper discusses the connections between the margin and cross-entropy losses and evaluates the proposed method on WN18 and FB15k.

The paper tackles an interesting problem, as learning from knowledge bases via embedding methods has become increasingly important for tasks such as question answering. Providing additional insight into current methods can be an important contribution to advance the state-of-the-art. However, I'm concerned about several aspects of the current form of the paper. For instance, the derivation in Section 4 is unclear to me, as eq. 4 suddenly introduces a weighted sum over expectations using the degrees of nodes. The derivation also seems to rely on a very specific negative sampling assumption (uniform sampling without checking whether the corrupted triple is a true negative). This sampling method isn't used consistently across models and also brings its own problems; e.g., see the LCWA discussion in [4].

In addition, the semantics introduced by the weighting scheme are not clear to me either. Using the proposed method, the probability of edges between high-degree nodes is down-weighted, since the ground-truth labels are divided by the node degrees. Since these weighted labels are then fitted using a least-squares loss, this implies that links between high-degree nodes should be less likely, which seems the opposite of what the scores should look like.

With regard to the significance of the contributions: using a least-squares loss in combination with tensor methods is attractive because it enables ALS algorithms with closed-form updates that can be computed very fast. However, the proposed method still relies on SGD optimization. In this context, it is not clear to me why a tensor framework/least-squares loss would be preferable.

Further comments:
- The paper seems to equate "tensor method" with using a least-squares loss. However, this doesn't have to be the case; for instance, see [1,2], which propose logistic and Poisson tensor factorizations, respectively.
- The distinction between tensor factorization and neural methods is unclear. Tensor factorization can be interpreted just as a particular scoring function. For instance, see [5] for a detailed discussion.
- The margin-based ranking loss has been proposed earlier than (Collobert et al, 2011); for instance, see [3].
- p1: corrupted triples are not described entirely correctly; typically only one of s or o is corrupted.
- Closed-form tensor in Table 1: should this be the least-squares loss of f(s,p,o) and log(...)?
- p6: Adding the constant to the tensor as proposed in (Levy & Goldberg, 2014) can be done while gathering the minibatch and is therefore equivalent to the proposed approach.

[1] Nickel et al: Logistic Tensor Factorization for Multi-Relational Data, 2013.
[2] Chi et al: On tensors, sparsity, and nonnegative factorizations, 2012.
[3] Collobert et al: A unified architecture for natural language processing, 2008.
[4] Dong et al: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion, 2014.
[5] Nickel et al: A Review of Relational Machine Learning for Knowledge Graphs, 2016.
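To illustrate the reviewer's point that tensor factorization models can be read simply as particular triple scoring functions, here are two textbook scorers in NumPy; these are the standard forms, not anything specific to the submission's KBTD framework.

import numpy as np

def distmult_score(e_s, r, e_o):
    """DistMult: trilinear product <e_s, r, e_o>; equivalently, each relation
    is a diagonal matrix, i.e., a particular factorization of that relation's
    slice of the adjacency tensor."""
    return float(np.sum(e_s * r * e_o))

def transe_score(e_s, r, e_o):
    """TransE: higher score means smaller translation error ||e_s + r - e_o||."""
    return -float(np.linalg.norm(e_s + r - e_o))

# Both take the subject embedding e_s, relation embedding r, and object
# embedding e_o as same-dimensional vectors and return a plausibility score.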
iclr_2018_S1Euwz-Rb
COMPOSITIONAL ATTENTION NETWORKS FOR MACHINE REASONING We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic blackbox architectures towards a design that provides a strong prior for iterative reasoning, allowing it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient and data-efficient, requiring an order of magnitude less time and/or data to achieve good results.
This paper describes a new model architecture for machine reasoning. In contrast to previous approaches that explicitly predict a question-specific module network layout, the current paper introduces a monolithic feedforward network with iterated rounds of attention and memory. On a few variants of the CLEVR dataset, it outperforms discrete modular approaches, existing iterated attention models, and the conditional-normalization-based FiLM model. So many models are close to perfect accuracy on the standard CLEVR dataset that I'm not sure how interesting these results are. In this respect, I think the current paper's results on CLEVR-Humans and smaller fractions of synthetic CLEVR are much more exciting. On the whole, I think this is a strong paper. I have two main concerns. The largest is that this paper offers very little in the way of analysis. The model is structurally quite similar to a stacked attention network or a particular fixed arrangement of attentive N2NMN modules, and it's not at all clear based on the limited set of experimental results where the improvements are actually coming from. It's also possible that many of the proposed changes are complementary to NMN- or CBN-type models, and it would be nice to know if this is the case. Secondarily, the paper asserts that "our architecture can handle datasets more diverse than CLEVR", but runs no experiments to validate this. It seems like once all the pieces are in place it should be very easy to get numbers on VQA or even a more interesting synthetic dataset like NLVR. Based on a sibling comment, it seems that there may also be some problems with the comparison to FiLM, and I would like to see this addressed. On the whole, the results are probably strong enough on their own to justify admitting this paper. But I will become much more enthusiastic about it if the authors can provide results on other datasets (even if they're not state-of-the-art!) as well as evidence for the following: 1. Does the control mechanism attend to reasonable parts of the sentence? Here it's probably enough to generate a bunch of examples showing sentence attentions evolving over time. 2. Do these induce reasonable attentions over regions of the image? Again, examples are fine. 3. Do the self-attention and gating mechanisms recover the right structure? In addition to examples, here I think there are some useful qualitative measures. It should be possible to extract reasonable discretized "reasoning maps" by running MST or just thresholding on the "edge weights" induced by attention and gating. Having extracted these from a bunch of examples, you can compare them to the structural properties of the ground-truth CLEVR network layouts by plotting a comparison of sizes, branching factors, etc. 4. More on the left side of the dataset size / accuracy curve. What happens if you only give the model 7000 examples? 700? 70? Fussy typographical notes: - This paper makes use of a lot of multi-letter names in math mode. These are currently written like $KB$, which looks bad, and should instead be $\mathit{KB}$. - Variables with both superscripts and subscripts have the superscripts pushed off to the right; I think you're writing these like $b_5 ^d$ but they should just be $b_5^d$ (no space). - Number equations and then don't bother carrying subscripts like $W_3$, $W_4$ around across different parts of the model; this isn't helpful.
- The superscripts indicating the dimensions of parameter matrices and vectors are quite helpful, but don't seem to be explained anywhere in the text. I think the notation $W^{(d \times d)}$ is more standard than $W^{d, d}$. - Put the cell diagrams right next to the body text that describes them (maybe even inline, rather than in figures). It's annoying to flip back and forth.
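As a concrete reference for point 1 in the review above (sentence attention), the following is a generic soft-attention sketch in PyTorch; it is an assumed illustration of attending over question-word representations, not the authors' actual MAC control unit.

```python
import torch
import torch.nn.functional as F

def attend_over_words(query, word_states):
    # query:       (d,)   current control / query vector
    # word_states: (L, d) contextual representations of the L question words
    logits = word_states @ query        # (L,) unnormalized attention scores
    attn = F.softmax(logits, dim=0)     # attention distribution over the words
    summary = attn @ word_states        # (d,) attended summary of the question
    return attn, summary

query, words = torch.randn(64), torch.randn(12, 64)
attn, summary = attend_over_words(query, words)
print(attn.shape, summary.shape)  # torch.Size([12]) torch.Size([64])
```

Visualizing the returned attention weights across reasoning steps is the kind of qualitative evidence the review asks for.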
iclr_2018_HJjePwx0-
In this paper, we develop a trust region method for training deep neural networks. At each iteration, the trust region method computes the search direction by solving a non-convex subproblem. Solving this subproblem is non-trivial: existing methods have only a sub-linear convergence rate. In the first part, we show that a simple modification of the gradient descent algorithm can converge to a global minimizer of the subproblem with an asymptotic linear convergence rate. Moreover, our method only requires Hessian-vector products, which can be computed efficiently by back-propagation in neural networks. In the second part, we apply our algorithm to train large-scale convolutional neural networks, such as VGG and MobileNets. Although the trust region method is about 3 times slower than SGD in terms of running time, we observe it finds a model that has lower generalization (test) error than SGD, and this difference is even more significant in large-batch training. We conduct several interesting experiments to support our conjecture that the trust region method can avoid sharp local minima.
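The abstract's claim that only Hessian-vector products are needed can be illustrated with the standard double back-propagation trick; the sketch below uses a toy loss in place of a network's training loss, so it is a hedged illustration rather than the paper's implementation.

```python
import torch

w = torch.randn(5, requires_grad=True)
v = torch.randn(5)                       # direction to multiply the Hessian with

loss = 0.5 * (w ** 4).sum()              # placeholder for a network's training loss
grad, = torch.autograd.grad(loss, w, create_graph=True)
hvp, = torch.autograd.grad(grad @ v, w)  # d/dw (g . v) = H v, no explicit Hessian formed

print(hvp)                               # equals 6 * w**2 * v for this particular loss
```

This product is the building block that an inner solver for the trust-region subproblem would call repeatedly.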
**I am happy to see some good responses from the authors to my questions. I am raising my score a bit higher.** Summary: A new stochastic method based on trust region (TR) is proposed. Experiments show improved generalization over mini-batch SGD, which is the main positive aspect of this paper. The main algorithm has not been properly developed; there is too much focus on the convergence aspects of the inner iterations, for which there are many good algorithms already in the optimization literature. There are no good explanations for why the method yields better generalization. Overall, TR seems like an interesting idea, but it has neither been carefully expanded nor investigated. Let me state the main interesting results before going into criticisms: 1. The TR method seems to generalize better than mini-batch SGD. 2. TR seems to lose generalization more gracefully than SGD when the batch size is increased. [But note here that mini-batch SGD is not a closed chapter. With better ways of adjusting the noise level via step-size control (larger step sizes mean more noise), the loss of generalization associated with large mini-batch sizes can be brought down. See, for example: https://arxiv.org/pdf/1711.00489.pdf.] 3. The hybrid method is even better. This only means that more understanding is needed as to how TR can be combined with SGD. Trust region methods are generally batch methods. Algorithm 1 is also stated from that thinking, and it is a well-known optimization algorithm. The authors never mention mini-batch when Algorithm 1 is introduced, but they clearly have only the stochastic mini-batch implementation of the algorithm in mind. One has to wait until the experiments section to read something like "Lastly, although in theory, we need full gradient and full Hessian to guarantee convergence, calculating them in each iteration is not practical, so we calculate both Hessian and gradient on subsampled data to replace the whole dataset" for readers to realize that the authors are talking about a stochastic mini-batch method. This is a bad way of introducing the main method. This stochastic version obviously requires a step size, so it would have been proper to state the stochastic version of the algorithm instead of the batch algorithm in Algorithm 1. Instead of saying that in passing, why not explicitly state it in key places, including the abstract and title? I suggest TR be replaced by "Stochastic TR" everywhere. Also, what does "step size" mean in the TR method? I suggest that all these are fully clarified as part of Algorithm 1 itself. The trust region subproblem (TRS) has been analyzed and developed so much in the optimization literature. For example, the conjugate gradient-based method leading to the Steihaug-Toint point is widely used. [Note: Here, the gradient refers to the gradient of the quadratic model, and it uses only Hessian-vector products.] http://www.ii.uib.no/~trond/publications/papers/trust.pdf. The authors spend so much effort developing their own algorithm! Also, in the actual implementation, they only use a crude version of the inner algorithm for reasons of efficiency. The paper does not say anything about the convergence of the full algorithm. How good are the trust region updates based on q_t given the huge variability associated with the mini-batch operation? The authors should look at several existing papers on stochastic trust region and stochastic quasi-Newton methods, e.g., papers from Katya Scheinberg's (Lehigh) and Richard Byrd's (Colorado) groups.
The claimed best variant of the method, the "hybrid" method, is also mentioned only in passing, and in a sketchy fashion (see the end of subsection 4.3): "To enjoy the best of both worlds, we also introduce a "hybrid" method in the Figure 3, that is, first run TR method for several epochs to get coarse solution and then run SGD for a while until fully converge. Our rule of thumb is, when the training accuracy raises slowly, run SGD for 10 epochs (because it's already close to minimum). We find this "hybrid" method is both fast and accurate, for both small batch and large batch." Explanations of the better generalization properties of TR over SGD are important. I feel this part is badly done in the paper. For example, there is this statement: "We observe that our method (TR) converges to solutions with much better test error but worse training error when batch size is larger than 128. We postulate this is because SGD is easy to overfit training data and "stick" to a solution that has a high loss in testing data, especially with the large batch case as the inherent noise cannot push the iterate out of loss valley while our TR method can." Frankly, I am unable to decipher what is being said here. There is an explanation indicating that switching from SGD to TR causes an uphill movement (which, I presume, is due to the trust region radius r being large); but statements such as "this will lead to climbing over to a wide minimum" are too strong; no evidence is given for this. There is a statement, "even if the exact local minima is reached, the subsampled Hessian may still have negative curvature", for which, again, no evidence is given. Overall, the paper only has a few interesting observations, but there is no good and detailed experimental analysis that helps explain these observations. The writing of the paper needs a lot of improvement.
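For reference, the Steihaug-Toint truncated-CG solver mentioned in the review can be sketched in a few lines using only Hessian-vector products. The code below is a generic textbook-style sketch (with a toy explicit Hessian standing in for the Hessian-vector product), not the paper's inner solver.

```python
import numpy as np

def steihaug_cg(g, hvp, radius, tol=1e-6, max_iter=100):
    # approximately minimize g^T p + 0.5 p^T B p subject to ||p|| <= radius,
    # accessing B only through the Hessian-vector product hvp(v) = B v
    p = np.zeros_like(g)
    r = g.copy()            # residual = gradient of the quadratic model at p
    d = -r
    if np.linalg.norm(r) < tol:
        return p

    def to_boundary(p, d):
        # positive root of ||p + tau d|| = radius
        a, b, c = d @ d, 2 * (p @ d), p @ p - radius ** 2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return p + tau * d

    for _ in range(max_iter):
        Bd = hvp(d)
        dBd = d @ Bd
        if dBd <= 0:                          # negative curvature: step to the boundary
            return to_boundary(p, d)
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= radius:  # step would leave the region: stop on the boundary
            return to_boundary(p, d)
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        p, r, d = p_next, r_next, -r_next + beta * d
    return p

# toy usage with an explicit (indefinite) Hessian standing in for Hessian-vector products
B = np.diag([2.0, -1.0, 0.5])
g = np.array([1.0, 1.0, 1.0])
print(steihaug_cg(g, lambda v: B @ v, radius=1.0))
```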
iclr_2018_rJVruWZRW
We propose the dense RNN, which has full connections from each hidden state directly to multiple preceding hidden states of all layers. As the density of the connections increases, the number of paths through which the gradient flows is increased. The magnitude of the gradients is also increased, which helps to prevent the vanishing gradient problem in time. Larger gradients, however, may cause the exploding gradient problem. To handle the trade-off between the two problems, we propose an attention gate, which controls the amount of gradient flow. We describe the relation between the attention gate and the gradient flows by approximation. Experiments on language modeling using the Penn Treebank corpus show that dense connections with the attention gate improve the performance of the RNN.
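One possible reading of the dense, gated connections described above is sketched below; the exact connectivity, gating form, and parameterization are not specified here and are assumptions for illustration, not the authors' model.

```python
import torch

def dense_step(x_t, prev_states, W_in, W_rec, gate_params):
    # x_t:         (d_in,)        current input
    # prev_states: list of (d_h,) preceding hidden states (earlier layers / time steps kept)
    # gate_params: one (d_h,) parameter vector per preceding state, producing a
    #              sigmoid "attention gate" that scales that state's contribution
    recur = torch.zeros(W_rec.shape[0])
    for h_prev, g_w in zip(prev_states, gate_params):
        gate = torch.sigmoid(g_w * h_prev)       # elementwise gate in (0, 1)
        recur = recur + W_rec @ (gate * h_prev)  # gated dense connection
    return torch.tanh(W_in @ x_t + recur)

d_in, d_h = 8, 16
W_in, W_rec = torch.randn(d_h, d_in), torch.randn(d_h, d_h)
prev = [torch.randn(d_h) for _ in range(3)]
gates = [torch.randn(d_h) for _ in range(3)]
h_new = dense_step(torch.randn(d_in), prev, W_in, W_rec, gates)
print(h_new.shape)  # torch.Size([16])
```

A real implementation would also need to bound how many preceding states are kept, which bears on the BPTT memory question raised in the review that follows.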
Summary: This paper proposes a fully connected dense RNN architecture that has connections to every layer and to the preceding hidden states of each layer. The connections are also gated by using a simple gating mechanism. The authors very briefly discuss the effect of these on the dynamics of learning. They report results on the PTB character-level language modelling task. Questions: What is the computational complexity of this approach compared to a vanilla RNN architecture? What are the implications of these skip connections in terms of memory consumption during BPTT? Did you use gradient clipping, and have you used any specific type of initialization for the parameters? How would this approach compare against Clockwork RNNs, which have block-diagonal weight matrices [1]? How would dense RNNs compare against MANNs [2]? How would you implement this model efficiently? Pros: Interesting idea. Cons: Lack of experiments and empirical results supporting the arguments. Hand-wavy theory. Lack of references to the relevant literature. General Comments: In general the paper is relatively well written despite having some minor typos. The idea is interesting; however, the experiments in this paper are seriously lacking. The only results presented in this paper are on PTB. The results are quite behind the SOTA, and PTB is a really tiny, toyish language modeling task. The theory is very hand-wavy, and the connections to previous attempts to derive related properties of recurrent models should be cited. Figure 2 is closely related to the Gersgorin circle theorem in [3]. The discussion about the skip connections is closely related to the results in [2]. Overall, I think this paper is rushed and not ready for publication. [1] Koutnik, J., Greff, K., Gomez, F., & Schmidhuber, J. (2014). A Clockwork RNN. In International Conference on Machine Learning (pp. 1863-1871). [2] Gulcehre, C., Chandar, S., & Bengio, Y. (2017). Memory Augmented Neural Networks with Wormhole Connections. arXiv preprint arXiv:1701.08718. [3] Zilly, J. G., Srivastava, R. K., Koutník, J., & Schmidhuber, J. (2016). Recurrent Highway Networks. arXiv preprint arXiv:1607.03474.
iclr_2018_rkhxwltab
This paper describes a simple architecture named AANN: Absolute Artificial Neural Network, which can be used to create highly interpretable representations of the input data. These representations are generated by penalizing the learning of the network in such a way that those learned representations correspond to the respective labels present in the labelled dataset used for supervised training, thereby simultaneously giving the network the ability to classify the input data. The network can be used in the reverse direction to generate data that closely resembles the input by feeding in representation vectors as required. The paper also explores the use of the absolute value (abs) function as the activation function, which constitutes the core of this neural network architecture. Finally, the results obtained on the MNIST dataset using this technique are presented and briefly discussed.
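A minimal sketch of the forward (classification) direction is given below, assuming an MLP with the absolute value as the activation and a normalized output representation; the layer sizes and the normalization detail are assumptions for illustration, not the released AANN code.

```python
import torch

def abs_mlp_forward(x, W1, W2):
    h = torch.abs(W1 @ x)             # absolute value used as the activation
    out = W2 @ h
    return out / (out.norm() + 1e-8)  # normalized representation vector

d_in, d_h, n_classes = 784, 128, 10
W1 = torch.randn(d_h, d_in) * 0.01
W2 = torch.randn(n_classes, d_h) * 0.01
x = torch.rand(d_in)                  # e.g. a flattened MNIST image
rep = abs_mlp_forward(x, W1, W2)
print(rep.norm())                     # ~1: only the direction of the output carries the label
```

In the reverse (generation) direction one would feed a representation vector and invert the mapping, which this sketch does not attempt.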
This paper introduces a reversible network with the absolute value used as the activation function. The network is run in the forward direction to classify and in the reverse direction to generate. The key points of the network are the use of the absolute value activation function and the use of (free) normalization to match the target output. This allows the network to perfectly map inputs to any point on a vector that goes through the one-hot encoding, allowing for deterministic generation from different vectors (of different lengths) with the same normalized output. I think there are a lot of novel and interesting ideas in this paper, though they have not been fully explored. The use of the absolute value transfer function is new to me, though I was able to find a couple of old references to its use. In a paper by Gad et al. (2000), it is stated: "For example, the algorithm presented in Lin and Unbehauen (1995) [I think they mean Lin and Unbehauen (1990)] is used to train networks with a single hidden layer employing the absolute value as the activation function of the hidden neuron. This algorithm was further generalized to multilayer networks with cascaded structures in Batruni (1991)." The properties of the abs activation function seem worth exploring. More details on the training are needed for full clarity in the paper. (Though it is recognized that some of these could be determined from links when made active, they should be included in the paper.) How did you select the training parameters given at the bottom of page 5? How many layers and units per layer did you use? And how were these selected? (The use of the links for providing code and visualizations (when active) is a nice feature of this paper.) Also, did you compare to using the leaky ReLU activation function? That would be interesting, as it also doesn't have any regions of zero slope. Did you compare the generated digits to those obtained using GANs? I am also curious: how does accuracy on digit classification differ when trained only to optimize the forward error? The MNIST site referenced lists 60,000 training examples and 10,000 test examples. How/why did you select 42,000 and then split them into 39,900 in the train set and 2,100 in the dev set? Also, the goal of the paper is presented as creating highly interpretable representations of the input data. My interpretation of interpretable is that the hidden units are "interpretable" and that it is clear how the combined hidden unit representations allow for accurate classification. Toward that end, it would be nice to see some of the interpretations of the hidden unit representations. In the abstract it states "...These representations are generated by penalizing the learning of the network in such a way that those learned representations correspond to the respective labels present in the labelled dataset used for supervised training". Does this statement refer only to the encoding of the representation vector or also to the hidden layers? If the former, isn't that true for all supervised algorithms? If the latter, you should show this. Batruni, R. (1991). A multilayer neural network with piecewise-linear structure and backpropagation learning. IEEE Transactions on Neural Networks, 2, 395-403. Lin, J.-N., & Unbehauen, R. (1995). Canonical piecewise-linear neural networks. IEEE Transactions on Neural Networks, 6, 43-50. Lin, J.-N., & Unbehauen, R. (1990). Adaptive Nonlinear Digital Filter with Canonical Piecewise-Linear Structure. IEEE Transactions on Circuits and Systems, 37(3), 347-353. Gad, E. F., et al. (2000). A new algorithm for learning in piecewise-linear neural networks. Neural Networks, 13, 485-505.