Dataset schema: id (string, 11-20 characters), paper_text (string, 29-163k characters), review (string, 666-24.3k characters).
iclr_2018_BJNRFNlRW
Published as a conference paper at ICLR 2018 TRAINING GENERATIVE ADVERSARIAL NETWORKS VIA PRIMAL-DUAL SUBGRADIENT METHODS: A LAGRANGIAN PERSPECTIVE We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization. The inherent connection not only provides a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training. The modified objective function forces the distribution of generator outputs to be updated along the direction prescribed by the primal-dual subgradient methods. A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN. Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples.
This paper formulates the GAN objective as the Lagrangian of a primal convex constrained optimization problem. The authors then suggest modifying the updates used in standard GAN training to resemble the primal-dual updates typically used by primal-dual subgradient methods. Technically, the paper is sound. It mostly leverages the existing literature on primal-dual subgradient methods to modify the GAN training procedure. I think this is a nice contribution that does yield some interesting insights. However, I do have some concerns about the way the paper is currently written, and I find some claims misleading.

Prior convergence proofs: I think the way the paper is currently written is misleading. The authors quote the paper from Ian Goodfellow: "For GANs, there is no theoretical prediction as to whether simultaneous gradient descent should converge or not." However, the f-GAN paper gave a proof of convergence; see Theorem 2 here: https://arxiv.org/pdf/1606.00709.pdf. A recent NIPS paper by Nagarajan and Kolter (2017) also studies the convergence properties of simultaneous gradient descent. Another problem is of course the assumptions required for the proof, which typically don't hold in practice (see comment below).

Convex-concave assumption: In practice the GAN objective is optimized over the parameters of the neural network rather than the generative distribution. This typically yields a non-convex non-concave optimization problem. This should be mentioned in the paper, and I would like to see a discussion of the gap between the theory and the practical algorithm.

Relation to existing regularization techniques: Combining Equations 11 and 13, the second term acts as a regularizer that minimizes [\alpha f_1(D(x_i))]^2. This looks rather similar to some of the recent regularization techniques such as Improved Training of Wasserstein GANs (https://arxiv.org/pdf/1704.00028.pdf) and Stabilizing Training of Generative Adversarial Networks through Regularization (https://arxiv.org/pdf/1705.09367.pdf). Can the authors comment on this? I think this would also shed some light on why this approach alleviates the problem of mode collapse.

Curse of dimensionality: Nonparametric density estimators such as the KDE technique used in this paper suffer from the well-known curse of dimensionality. For the synthetic data, the empirical evidence seems to indicate that the technique proposed by the authors does work, but I'm not sure the empirical evidence provided for the MNIST and CIFAR-10 datasets is sufficient to judge whether or not the method helps with mode collapse. The inception score fails to capture this property. Could the authors explore other quantitative measures? Have you considered trying your approach on the augmented version of the MNIST dataset used in Metz et al. (2016) and Che et al. (2016)?

Experiments typo: should say "The data distribution is p_d(x) = 1{x=1}".
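To make the "Relation to existing regularization techniques" point concrete, here is a minimal PyTorch sketch contrasting the squared-discriminator-output penalty described above with the WGAN-GP gradient penalty. The first function is only a schematic reading of the reviewer's summary of Equations 11 and 13 (with f_1 taken as the identity, an assumption); the second is the standard Gulrajani et al. (2017) penalty. The toy discriminator and data sizes are placeholders.

```python
import torch

def squared_output_penalty(d_out, alpha=1.0):
    # Schematic version of the penalty the reviewer reads out of Eqs. 11+13:
    # minimize [alpha * f_1(D(x_i))]^2, with f_1 assumed here to be the identity.
    return (alpha * d_out).pow(2).mean()

def wgan_gp_penalty(discriminator, real, fake, lambda_gp=10.0):
    # Standard WGAN-GP gradient penalty (Gulrajani et al., 2017).
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(discriminator(x_hat).sum(), x_hat, create_graph=True)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

D = torch.nn.Linear(2, 1)                          # toy discriminator on 2-D data
real, fake = torch.randn(8, 2), torch.randn(8, 2)
print(squared_output_penalty(D(real)).item(), wgan_gp_penalty(D, real, fake).item())
```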
iclr_2018_HkMvEOlAb
Published as a conference paper at ICLR 2018 LEARNING LATENT REPRESENTATIONS IN NEURAL NETWORKS FOR CLUSTERING THROUGH PSEUDO SUPERVISION AND GRAPH-BASED ACTIVITY REGULARIZATION In this paper, we propose a novel unsupervised clustering approach exploiting the hidden information that is indirectly introduced through a pseudo classification objective. Specifically, we randomly assign a pseudo parent-class label to each observation which is then modified by applying the domain specific transformation associated with the assigned label. Generated pseudo observation-label pairs are subsequently used to train a neural network with Auto-clustering Output Layer (ACOL) that introduces multiple softmax nodes for each pseudo parent-class. Due to the unsupervised objective based on Graph-based Activity Regularization (GAR) terms, softmax duplicates of each parent-class are specialized as the hidden information captured through the help of domain specific transformations is propagated during training. Ultimately we obtain a k-means friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets, with the highest accuracies reported to date in the literature.
This paper presents a method for clustering based on latent representations learned by classifying transformed data, where each example is pseudo-labelled according to the transformation applied to it.

Pipeline:
- Data are augmented with domain-specific transformations. For instance, in the case of MNIST, rotations with different degrees are applied. All data are then labelled as "original" or "transformed by ... (specific transformation)".
- A classification task is performed with a neural network on the augmented dataset according to the pseudo-labels.
- In parallel with the classification, the neural network also learns the latent representation in an unsupervised fashion.
- k-means clustering is performed on the representation space observed in the hidden layer preceding the augmented softmax layer.

Detailed Comments:
(*) Pros
- The method outperforms the state of the art among unsupervised methods for handwritten digit clustering on MNIST.
- The use of ACOL and GAR is interesting, as is the idea of making "labelled" data from unlabelled data by using data augmentation.
(*) Cons
- minor: in the title, I find the expression "unsupervised clustering" uselessly redundant, since clustering is by definition unsupervised.
- Choice of datasets: we already obtain very good accuracy for the classification or clustering of handwritten digits. This is not a very challenging task, and just because something works on MNIST does not mean it works in general. What are the performances on more challenging datasets of color images (CIFAR-10, LabelMe, ImageNet, etc.)?
- It is not clear what is novel here, since ACOL and GAR already exist. The novelty seems to be in adapting GAR from the semi-supervised to the unsupervised setting, with labels indicating whether data have been transformed or not.

My main problem was the lack of novelty. The authors clarified this point, and it turned out that ACOL and GAR have never been published elsewhere except on arXiv. The other issue concerned the validation of the approach on databases other than MNIST. The authors also addressed this point, and I changed my scores accordingly.
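For concreteness, here is a minimal, hypothetical sketch of the pipeline described above (transformation-based pseudo-labels, a classifier trained on them, then k-means on the penultimate representation). It uses synthetic stand-in data, a plain MLP, and arbitrary sizes, and it omits the ACOL output layer and GAR regularizers that the actual method relies on.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import rotate
from sklearn.cluster import KMeans

# Synthetic stand-in for MNIST-like images (the real pipeline uses MNIST/SVHN/USPS).
rng = np.random.default_rng(0)
images = rng.random((256, 16, 16)).astype(np.float32)

# Step 1: augment with domain-specific transformations and pseudo-label by transformation.
angles = [0, 90, 180, 270]                                   # assumed transformation set
x = np.stack([rotate(im, a, reshape=False) for im in images for a in angles]).astype(np.float32)
y = np.tile(np.arange(len(angles)), len(images))             # pseudo parent-class labels

# Step 2: train a classifier on the pseudo-labels.
net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 64), nn.ReLU(), nn.Linear(64, len(angles)))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt, yt = torch.from_numpy(x), torch.from_numpy(y).long()
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(xt), yt)
    loss.backward()
    opt.step()

# Step 3: k-means on the hidden representation preceding the output layer.
with torch.no_grad():
    latent = net[:-1](xt).numpy()
clusters = KMeans(n_clusters=10, n_init=10).fit_predict(latent)
print(clusters[:10])
```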
iclr_2018_S1nQvfgA-
Published as a conference paper at ICLR 2018 SEMANTICALLY DECOMPOSING THE LATENT SPACES OF GENERATIVE ADVERSARIAL NETWORKS We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs.
[Overview] In this paper, the authors propose a model called SD-GAN to decompose semantic components of the input of a GAN. Specifically, the authors propose a novel architecture to decompose the latent code into an identity part and a non-identity part. In this new architecture, the generator is unchanged, while the discriminator takes paired data as input and outputs a decision on whether the two images are from the same identity or not. By training the whole model with a conventional GAN training regime, SD-GAN learns to treat one part of the input Z as the identity information and the other part as the non-identity (or attribute) information. In the experiments, the authors demonstrate that the proposed SD-GAN can generate images preserving the same identity with diverse attributes, such as pose, age, expression, etc. Compared with AC-GAN, the proposed SD-GAN achieved better performance on both an automatic evaluation metric (FaceNet) and a human study. In the appendix, the authors further present ablated qualitative results in various settings.

[Strengths]
1. This paper proposes a simple but effective generative adversarial network, called SD-GAN, to decompose the input latent code of a GAN into separate semantic parts. Specifically, it is mainly instantiated on face images, to decompose the identity part and the non-identity part of the latent code. Unlike previous works such as AC-GAN, SD-GAN exploits a Siamese network to replace the conventional discriminator used in GANs. In this way, SD-GAN can generate images of novel identities, rather than being constrained to the identities used during training. I think this is a very good property. Due to this, SD-GAN consumes much less memory than AC-GAN when training on a large number of identities.
2. In the experiment section, the authors quantitatively evaluate the generated images based on two methods: one uses a pre-trained FaceNet model to measure the verification accuracy, and one is a human study. When evaluated with FaceNet, the proposed SD-GAN achieved higher accuracy and obtained more diverse face images compared with AC-GAN. In the human study, SD-GAN achieved comparable verification accuracy and higher diversity than AC-GAN. The authors further present ablated experiments in the Appendix.

[Comments] This paper presents a novel model to decompose the latent code in a semantic manner. However, I have several questions about the model:
1. Why would SD-GAN not generate images covering merely a small number of identities, or just a few identities? In Algorithm 1, the authors train the model by sampling one identity vector, which is then concatenated to two observation vectors. In this case, the generator always takes the same identity vectors, and the discriminator is used to distinguish these fake same-identity pairs from the real same-identity pairs in the training data. As such, even if the generator generates the same identity, say a mean identity, given different identity vectors, the generated images can still obtain a low discrimination loss. Without any explicit constraint to enforce the generator to generate different identities for different identity vectors, I am wondering what makes SD-GAN able to generate diverse identities?
2. Still about identity diversity. Though the authors show identity-matched diversity in the experiments, the diversity across identities in the generated images is not evaluated. The authors should also evaluate this kind of diversity. Generally, AC-GAN can generate as many identities as the number of identities in the training data. I am curious whether SD-GAN can generate identities as diverse as AC-GAN's. One simple way is to evaluate the whole generated image set using the Inception Score based on a pre-trained face identification network; another way is to directly use the generated images to train a verification or identification model and evaluate it on real images. Though compared with AC-GAN, SD-GAN achieved better identity verification performance and sample diversity, I suspect the identity diversity is discounted, even though SD-GAN has the property of generating novel identities. Furthermore, the authors should also compare the general quality of generated samples with DC-GAN and BEGAN (at least qualitatively), apart from the comparison to AC-GAN on identity-matched generation.
3. When making the comparison with related work, the authors mention that Info-GAN is not able to determine which factors are assigned to each dimension. I think this is not precise. The lack of this property is because there are no data annotations. Given data annotations, Info-GAN can easily be augmented with such a property by sending the real images into the discriminator for classification. Also, there is a typo in the caption of Fig. 10. It looks like each column shares the same identity vector instead of each row.

[Summary] This paper proposes a new model called SD-GAN to decompose the input latent code of a GAN into two separate semantic parts, one for identities and one for observations. Unlike AC-GAN, SD-GAN exploits a Siamese architecture in the discriminator. In this way, SD-GAN can not only generate more identity-matched face image pairs but also more diverse samples with the same identity, compared with AC-GAN. I think this is a good idea for decomposing the semantic parts of the latent code, in the sense that it can imagine new face identities and consumes less memory during training. Overall, I think this is a good paper. However, as I mentioned above, I am still not clear why SD-GAN can generate diverse identities without any constraints to make the model do that. Also, the authors should further evaluate the diversity of identities and compare it with AC-GAN.
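To illustrate the pairwise sampling scheme discussed in point 1 (one identity code shared by two observation codes, judged by a Siamese discriminator), here is a minimal, hypothetical PyTorch sketch. The latent split sizes, MLP architectures, and image dimensionality are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

d_i, d_o = 50, 50                      # assumed split of the latent code into identity/observation

class PairGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_i + d_o, 128), nn.ReLU(), nn.Linear(128, 64 * 64))
    def forward(self, z_id, z_obs):
        return self.net(torch.cat([z_id, z_obs], dim=1))

class SiameseDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU())
        self.judge = nn.Linear(2 * 128, 1)     # scores whether a pair is real and identity-matched
    def forward(self, x1, x2):
        return self.judge(torch.cat([self.encode(x1), self.encode(x2)], dim=1))

G, D = PairGenerator(), SiameseDiscriminator()
batch = 16
z_id = torch.randn(batch, d_i)                                       # one identity code ...
z_obs1, z_obs2 = torch.randn(batch, d_o), torch.randn(batch, d_o)    # ... two observation codes
fake_pair_score = D(G(z_id, z_obs1), G(z_id, z_obs2))
print(fake_pair_score.shape)                                         # torch.Size([16, 1])
```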
iclr_2018_HJsk5-Z0W
In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g., Factorization Machines). On the other hand, neural methods allow large feature sets, but are often designed for a specific application. We propose novel deep factorization methods that allow efficient and flexible feature representation. For example, we enable describing items with natural language with complexity linear in the vocabulary size; this enables prediction for unseen items and avoids the cold-start problem. We show that our architecture can generalize some previously published single-purpose neural architectures. Our experiments suggest improved training times and accuracy compared to shallow methods.
This paper presents a method for matrix factorization using DNNs. The suggestion is to make the factorization machine (eqn 1) deep, by grouping the features meaningfully (eqn 5), extracting nonlinear features from the original inputs (deep-in, eqn 8), and adding additional nonlinearity after computing pairwise interactions (deep-out, eqn 7). From the methodology point of view, such extensions are relatively straightforward. As an example, from the experimental results it seems the grouping of features is done mostly with domain knowledge (e.g., months of the year) and not learned automatically.

The authors claim the proposed method can circumvent the cold-start problem, and present some experimental results on recommendation systems with text features. While the application problems look quite interesting, in my opinion the paper needs to make its context and contribution clearer. In particular, there is a huge literature on collaborative filtering, and I believe there is by now sufficient work on collaborative filtering with input features (and possibly dealing with the cold-start problem). I think this paper does not connect very well with that literature. When reading it, at times I felt the main purpose of this paper is to solve the application problems presented in the experimental results, instead of proposing a general framework. I suggest the authors demonstrate their method on some well-known datasets (e.g., MovieLens, Netflix), to give the readers an idea of whether the proposed method is indeed advantageous over more classical methods, or whether the success of this paper is mostly due to clever processing of text features using DNNs.

Some detailed comments:
1. eqn 4 does not indicate any rank-r factors.
2. some statements do not seem straightforward/justified to me:
-- the paper uses the word "inference" several times without definition
-- "if we were interested in interpreting the parameters, we could constrain w to be non-negative ...". Is this easy to do, and can the authors demonstrate this in their experiments and show interpretable examples?
-- "Note that if the dot product is replaced with a neural function, fast inference for cold-start ...".
3. the experimental setup seems quite unusual to me: "since we only observe positive labels, for such tasks in the test set we sample a labels according to the label frequency". This seems very problematic if most of the entries are not observed. Why not use the typical evaluation procedure for collaborative filtering, where you hide some known entries during model training and evaluate on these entries at test time?
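As background for the factorization-machine model equation (eqn 1) discussed above, here is a short NumPy sketch of the standard second-order FM interaction term (Rendle, 2010), shown both as the naive pairwise sum and in its algebraically equivalent O(nk) form. This is generic background, not the paper's deep-in/deep-out extension; the feature count and rank below are arbitrary.

```python
import numpy as np

def fm_pairwise_naive(x, V):
    """Second-order FM term: sum_{i<j} <v_i, v_j> x_i x_j (quadratic in the number of features)."""
    n = len(x)
    return sum(V[i] @ V[j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))

def fm_pairwise_fast(x, V):
    """Equivalent O(n k) form: 0.5 * (||V^T x||^2 - sum_i x_i^2 ||v_i||^2)."""
    s = V.T @ x
    return 0.5 * (s @ s - np.sum((x[:, None] * V) ** 2))

rng = np.random.default_rng(0)
x, V = rng.random(10), rng.random((10, 4))   # 10 features, rank-4 factors (illustrative sizes)
assert np.isclose(fm_pairwise_naive(x, V), fm_pairwise_fast(x, V))
print(fm_pairwise_fast(x, V))
```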
iclr_2018_HkCvZXbC-
We present 3C-GAN: a novel multiple-generator structure that contains one conditional generator, which generates the semantic part of an image conditioned on its input label, and one context generator, which generates the rest of the image. Compared to the original GAN model, this model has multiple generators and gives control over what its generators should generate. Unlike previous multi-generator models, which use a sequential generation process in which one layer is generated given the previous layer, our model generates the different parts of the image together. This way the model contains fewer parameters and the generation speed is faster. Specifically, the model leverages the label information to separate the object from the image correctly. Since conditioning on the label does not by itself restrict the model from generating other parts of the image, we propose a cost function that encourages the model to generate only the part of the image that matters for label discrimination. We also find that an exclusive prior on the masks of the model helps separate the object. The experiments on the MNIST, SVHN, and CelebA datasets show that 3C-GAN can generate different objects with different generators simultaneously, according to the labels given to each generator.
[Overview] This paper proposes a new generative adversarial network, called 3C-GAN, for generating images in a composite manner. In 3C-GAN, the authors exploit two generators: one (G1) generates context images, and the other (G2) generates semantic content. To generate the semantic content, the authors introduce a conditional GAN scheme to force the generated images to match the annotations. After generating both parts in parallel, they are combined using alpha blending to compose the final image. This generated image is then sent to the discriminator. The experiments are conducted on three datasets: MNIST, SVHN, and CelebA. The authors show qualitative results on all three datasets, demonstrating that 3C-GAN can disentangle the context part from the semantic part of an image and generate them separately.

[Strengths] This paper introduces layer-wise image generation, decomposing the image into two separate parts, a context part and a semantic part, with one generator for each. To ensure this, the authors introduce three strategies:
1. Adding semantic labels: the authors use image semantic labels as input and then exploit a conditional GAN to enforce that one of the generators produces the semantic parts of images. As usual, the label information is added to the input of the generator and of the discriminator as well.
2. Adding a label difference cost: the intuition behind this loss is that changing the label condition should merely affect the output of G2. Based on this, the outputs of Gc should not change much when flipping the input labels.
3. Adding an exclusive prior: the prior is that the mask of the context part (m1) and the mask of the semantic part (m2) should be exclusive to each other. Therefore, the authors add another loss to reduce the sum of the component-wise multiplication between m1 and m2.

Decomposing the semantic part from the context part of an image with a generative model is an interesting problem. However, in my opinion, completing it without any supervision is challenging and meaningless. In this paper, the authors propose a conditional way to generate images compositionally. It is an interesting extension of previous works, such as Kwak & Zhang (2016) and Yang (2017).

[Weaknesses] This paper proposes an interesting and intuitive image generation model. However, there are several weaknesses:
1. There are no quantitative evaluations or comparisons. From the limited qualitative results shown in Fig. 2-10, we can hardly get a comprehensive sense of the model's performance. The authors should present some quantitative evaluations in the paper, which are more persuasive than a number of examples. To do that, I suggest the authors use evaluation metrics such as the Inception Score to evaluate the overall generation performance. Also, in Yang (2017) the authors proposed the adversarial divergence, which is suitable for evaluating conditional generation. Hence, I suggest the authors use a similar approach to evaluate the classification performance of a classification model trained on the generated images. This should be a good indicator of whether the proposed 3C-GAN can generate more realistic images that facilitate the training of a classifier.
2. The authors should try more complicated datasets, like CIFAR-10. Recently, CIFAR-10 has become a popular testbed for evaluating various GANs. It is easy to train on because of its low resolution, but it is also meaningful since it contains relatively complicated scenes. I would suggest the authors also run the experiments on CIFAR-10.
3. The authors did not perform any ablation study. Apart from several generation results based on 3C-GAN, I could not find any generation results from ablated models. As such, I can hardly get a sense of the effects of the different losses or of the relative performance within the whole GAN spectrum. I strongly suggest the authors add some ablation studies. The authors should at least compare with a one-layer conditional GAN.
4. The proposed model merely shows two-layer generation results. There might be two reasons: one is that, as far as I know, it is hard to extend it to generation with more layers, and the other is the inflexible formulation for composing an image in 2.2.1 and formula (6). The authors should try datasets like MNIST-TWO in Yang (2017) for demonstration.
5. Please show f1, m1, f2, and m2 separately, instead of showing the blending results in Fig. 3, Fig. 4, Fig. 6, Fig. 7, Fig. 9, and Fig. 10. I would like to see what kind of context image and foreground image 3C-GAN has generated, so that I can compare it with previous works like Kwak & Zhang (2016) and Yang (2017).
6. I did not understand the label difference loss in (5) very well. Reducing the difference between G_c(z_u, z_v, z_l) and G_c(z_u, z_v, z_l^f) does not seem able to force G1 and G2 to generate different parts of an image. G2 can take on all the work and still obtain a low L_ld. From my point of view, the loss should be added to G1 to make G1 less sensitive to variation in the label information.
7. Minor typos and textual errors: in Fig. 1, should the right generator be G2 rather than G1? In 2.1.3 and 2.2.1, please add numbers to the equations.

[Summary] This paper proposes an interesting way of generating images, called 3C-GAN. It generates images in a layer-wise manner. To separate the context and semantic parts of an image, the authors introduce several new techniques to make the generators in the model take on different duties. In the experiments, the authors show qualitative results on three datasets: MNIST, SVHN, and CelebA. However, as I pointed out above, the paper is missing quantitative evaluation, comparisons, and ablation studies. Taking all of this into account, I think this paper still needs more work to make it solid and comprehensive before being accepted.
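To make the composition step and the exclusive prior (Strengths, point 3) concrete, here is a minimal, hypothetical PyTorch sketch of alpha-blending a context layer and a semantic layer and penalizing mask overlap. The sigmoid squashing and mask normalization are assumptions for illustration, not the paper's exact formulation (6).

```python
import torch

def compose(f1, m1, f2, m2):
    # Alpha-blend the context layer (f1, m1) and the semantic layer (f2, m2).
    # Masks are squashed to [0, 1] and normalized so that m1 + m2 = 1 (assumption).
    m1, m2 = torch.sigmoid(m1), torch.sigmoid(m2)
    norm = m1 + m2 + 1e-8
    m1, m2 = m1 / norm, m2 / norm
    image = m1 * f1 + m2 * f2
    exclusive_prior = (m1 * m2).sum()     # penalize overlapping masks, as described in the review
    return image, exclusive_prior

f1, m1 = torch.randn(4, 3, 32, 32), torch.randn(4, 1, 32, 32)
f2, m2 = torch.randn(4, 3, 32, 32), torch.randn(4, 1, 32, 32)
image, prior_loss = compose(f1, m1, f2, m2)
print(image.shape, prior_loss.item())
```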
iclr_2018_BJGWO9k0Z
CRITICAL PERCOLATION AS A FRAMEWORK TO ANALYZE THE TRAINING OF DEEP NETWORKS In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to 128 layers.
This paper thoroughly analyzes an algorithmic task (determining whether two points in a maze are connected, which requires BFS to solve) by constructing an explicit ConvNet solution and analytically deriving properties of the loss surface around this analytical solution. The authors show that their analytical solution implements a form of BFS, characterize the probability of introducing "bugs" in the algorithm as the weights move away from the optimal solution, and show how this influences the error surface for different depths. This analysis is conducted by drawing on results from the field of critical percolation in physics.

Overall, I think this is a good paper and its core contribution is definitely valuable: it provides a novel analysis of an algorithmic task which sheds light on how and when the network fails to learn the algorithm, and in particular the role which initialization plays. The analysis is very thorough and the methods described may find use in analyzing other tasks. In particular, this could be a first step towards better understanding the optimization landscape of memory-augmented neural networks (Memory Networks, Neural Turing Machines, etc.) which try to learn reasoning tasks or algorithms. It is well known that these are sensitive to initialization and often require running the optimizer with multiple random seeds and picking the best one. This work actually explains the role of initialization for learning BFS and how certain types of initialization lead to poor solutions. I am curious whether a similar analysis could be applied to methods evaluated on the bAbI question-answering tasks (which can be represented as graphs, like the maze task) and possibly yield better initialization or optimization schemes that would remove the need for multiple random seeds.

With that being said, there is some work that needs to be done to make the paper clearer. In particular, many parts are quite technical and may not be accessible to a broader machine learning audience. It would be good if the authors spent more time developing intuition (through visualization for example) and moved some of the more technical proofs to the appendix. Specifically:
- I think Figure 3 in the appendix should be moved to the main text, to help understand the behavior of the analytical solution.
- Top of page 5, when you describe the checkerboard BFS: please include a visualization somewhere; it could be in the Appendix.
- Section 6: there is lots of math here, but the main results don't obviously stand out. I would suggest highlighting equations 2 and 4 in some way (for example, proposition/lemma + proof), so that the casual reader can quickly see what the main results are. Interested readers can then work through the math if they want to. Also, some plots/visualizations of the loss surface given in Equations 4 and 5 would be very helpful.

Also, although I found the work to be interesting after finishing the paper, I was initially confused by how the authors frame their work and where the paper was heading. They claim their contribution is in the analysis of loss surfaces (true) and neural nets applied to graph-structured inputs. This second part was confusing: although the maze can be viewed as a graph, many other works apply ConvNets to maze environments [1, 2, 3], and their work has little relation to other work on graph CNNs. Here the assumptions of locality and stationarity underlying CNNs are sensible, and I don't think the first paragraph in Section 3 justifying the use of the CNN on the maze environment is necessary. However, I think it would make much more sense to mention how their work relates to other neural network architectures which learn algorithms (such as the Neural Turing Machine and variants) or reasoning tasks more generally (for example, memory-augmented networks applied to the bAbI tasks).

There are lots of small typos, please fix them. Here are a few:
- "For L=16, batch size of 20, ...": not a complete sentence.
- Right before 6.1.1: "when the these such" -> "when such"
- Top of page 8: "it also have a" -> "it also has a", "when encountering larger dataset" -> "...datasets"
- First sentence of 6.2: "we turn to the discuss a second" -> "we turn to the discussion of a second"
- etc.

Quality: high
Clarity: medium-low
Originality: high
Significance: medium-high

References:
[1] https://arxiv.org/pdf/1602.02867.pdf
[2] https://arxiv.org/pdf/1612.08810.pdf
[3] https://arxiv.org/pdf/1707.03497.pdf
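To make the "ConvNet implementing BFS" idea concrete, here is a small NumPy/SciPy sketch of breadth-first flooding of a maze by repeated masked dilation; each step is the kind of local, translation-invariant update a convolutional layer can express. This is an illustration of the general idea only, not the paper's actual architecture or analytical weights, and the toy maze is made up.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def reachable(maze_open, start):
    """Breadth-first flooding of a maze by repeated masked dilation.

    maze_open: boolean array, True where the cell is passable.
    start: (row, col) of the starting cell.
    """
    frontier = np.zeros_like(maze_open, dtype=bool)
    frontier[start] = True
    cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)  # 4-connected neighborhood
    for _ in range(maze_open.size):                  # at most H*W steps are needed
        grown = binary_dilation(frontier, structure=cross) & maze_open
        if (grown == frontier).all():
            break
        frontier = grown
    return frontier

maze = np.array([[1, 1, 0, 1],
                 [0, 1, 0, 1],
                 [0, 1, 1, 1],
                 [1, 0, 0, 1]], dtype=bool)
print(reachable(maze, (0, 0))[3, 3])   # True: the bottom-right cell is connected to (0, 0)
```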
iclr_2018_r154_g-Rb
The tasks that an agent will need to solve often aren't known during training. However, if the agent knows which properties of the environment are important, then after learning how its actions affect those properties the agent may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a method that learns a policy for transitioning between "nearby" sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in grid-world games and 3D block stacking that our model is able to generalize to longer, more complex tasks at test time even when it only sees short, simple tasks at train time.
Summary: This paper proposes a method for planning which involves learning to detect high-level subgoals (called "attributes"), learning a transition model between subgoals, and then learning a policy for the low-level transitions between subgoals. The high-level task plan is not learned, but is computed using Dijkstra's algorithm. The benefit of this method (called the "Attribute Planner", or AP) is that it is able to generalize to tasks requiring multi-step plans after only training on tasks requiring single-step plans. The AP is compared against standard A3C baselines across a series of experiments in three different domains, showing impressive performance and demonstrating its generalization capability.

Pros:
- Impressive generalization results on multi-step planning problems.
- Nice combination of model-based planning for the high-level task plan with model-free RL for the low-level actions.

Cons:
- Attributes are handcrafted and pre-specified rather than being learned.
- Rather than learning an actual parameterized high-level transition model, a graph is built up from experience, which requires high sample complexity.
- No comparison to other hierarchical RL approaches.

Quality and Clarity: This is a great paper. It is extremely well written and clear, includes a very thorough literature review (though it should probably also discuss [1]), takes a sensible approach to combining high- and low-level planning, and demonstrates significant improvements over A3C baselines when generalizing to more complex task plans. The experiments and domains seem reasonable (though the block-stacking domain would be more interesting if the action and state spaces weren't discrete) and the analysis is thorough. While the paper in general is written very clearly, it would be helpful to the reader to include an algorithm for the AP.

Originality and Significance: I am not an expert in hierarchical RL, but my understanding is that hierarchical RL approaches typically use high-level goals to make the task easier to learn in the first place, such as in tasks with long planning horizons (e.g., Montezuma's Revenge). The present work differs from this in that, as the authors state, "the goal of the model is to be able to generalize to testing on complex tasks from training on simpler tasks" (pg. 5). Most work I have seen does not explicitly test for this generalization capability, but this paper points out that it is important and worthwhile to test for. It is difficult to say how much of an improvement this paper is over other related hierarchical RL works, as there are no comparisons made. I think it would be worthwhile to include a comparison to other hierarchical RL architectures (such as [1] or [2]), as I expect they would perform better than the A3C baselines. I suspect that the AP would still have better generalization capabilities, but it is hard to know without seeing the results. That said, I still think that the contribution of the present paper stands on its own.

[1] Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., & Kavukcuoglu, K. (2017). FeUdal Networks for Hierarchical Reinforcement Learning. http://arxiv.org/abs/1703.01161
[2] Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., & Tenenbaum, J. B. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. Advances in Neural Information Processing Systems.
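To illustrate the high-level planning step (Dijkstra's algorithm over a graph of attribute-set transitions), here is a tiny, hypothetical sketch using networkx. The attribute names, edges, and unit costs are invented for illustration; in the paper the transition graph is populated from the agent's own experience.

```python
import networkx as nx

# Toy attribute graph: nodes are attribute sets (frozensets), edges are transitions
# the low-level policy is assumed to be able to realize.
G = nx.DiGraph()
transitions = [
    (frozenset(), frozenset({"has_key"})),
    (frozenset({"has_key"}), frozenset({"has_key", "door_open"})),
    (frozenset({"has_key", "door_open"}), frozenset({"door_open", "at_goal"})),
]
for src, dst in transitions:
    G.add_edge(src, dst, cost=1.0)

start = frozenset()
goal = frozenset({"door_open", "at_goal"})
plan = nx.dijkstra_path(G, start, goal, weight="cost")   # high-level plan over attribute sets
print([sorted(s) for s in plan])
```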
iclr_2018_SkaPsfZ0W
Graph Convolutional Networks (GCNs) are a recently proposed architecture which has had success in semi-supervised learning on graph-structured data. At the same time, unsupervised learning of graph embeddings has benefited from the information contained in random walks. In this paper we propose a model, Network of GCNs (N-GCN), which marries these two lines of work. At its core, N-GCN trains multiple instances of GCNs over node pairs discovered at different distances in random walks, and learns a combination of the instance outputs which optimizes the classification objective. Our experiments show that our proposed N-GCN model achieves state-of-the-art performance on all of the challenging node classification tasks we consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method has other desirable properties, including generalization to recently proposed semi-supervised learning methods such as GraphSAGE, allowing us to propose N-SAGE, and resilience to adversarial input perturbations.
In this work a new network of GCNs is proposed. Different GCNs utilize different powers of the transition matrix to capture neighborhoods of varying size in a graph. As an aggregation mechanism for the GCN modules, two approaches are considered: a fully connected layer on top of the stacked features, and an attention mechanism that uses a scalar weight per GCN. The latter allows for better interpretability of the effects of varying degrees of neighborhood in a graph.

The proposed approach, as the authors noted themselves, is quite similar to DCNN (Atwood and Towsley, 2016) and becomes equivalent if the combined GCNs have one layer each. While the comparison to the vanilla GCN is quite extensive, there is no comparison to DCNN at all. I would be curious to see at least a portion of the experiments of the DCNN paper with the proposed approach, where the importance of the number of GCN layers is addressed. DCNN did well on Cora and Pubmed when more training samples were used. It was also tested on graph classification datasets, but the results were not as good for some of the datasets. I think that a comparison to DCNN is important to justify the importance of using multilayer GCN modules.

Some questions and concerns:
- I could not quite figure out how many layers each GCN had in the experiments and how impactful this parameter is.
- Why is it necessary to replicate GCNs for each of the transition matrix powers? In Section 4.3 it is mentioned that replication factors r = 1 and r = 4 were used, but it is not clear from Table 2 what the results are for the respective r.
- The early stopping implementation seems a bit too intense. "We invoke many runs over all datasets" - how many? Mean and standard deviation are reported for the top 3 performers, which is not enough to get a sense of the mean and standard deviation. Kipf and Welling (2017) report results over 100 runs without selecting top performers, if I understood their setup correctly. Could you please report the mean and standard deviation of all the runs? Given the relatively small performance improvement (compared to GCN), more than 3 (selected) runs are needed for comparison.
- I liked the attention idea and its interpretation in Fig. 2. Could you please add error bars for the attention weights? It is interesting to see them shifting towards higher powers of the transition matrix, but it is also important to know whether this phenomenon is statistically significant.
- Following up on the previous item: did you try not including self-connections when computing transition matrix powers? This way the effect of different degrees of neighborhood in a graph could be understood better. When self-connections are present, each subsequent transition matrix power contains neighborhoods of lower degrees and the interpretation becomes less apparent.

Minor comments:
- Understanding of this paper relies quite heavily on the reader knowing the Kipf and Welling (2017) paper. In particular, the comment about approximations derived by Kipf and Welling (2017) in Section 3.3 and how the directed graph was converted to undirected (Section 4.1) require a bit more detail.
- I am not quite sure why Section 2.3 is needed. The connection to graph embeddings is not given much attention later in the paper (except for the t-SNE picture).
- Typo in the Fig. 1 caption - right and left are mixed up.
- Typo in the footnote on page 3.
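For concreteness, here is a schematic NumPy sketch of the core idea discussed above: run a GCN-style module on successive powers of a transition matrix and combine the outputs with scalar (softmax) attention weights. The one-layer modules, random-walk normalization, and included self-connections are simplifying assumptions; the actual N-GCN uses multi-layer GCN modules and the Kipf & Welling normalization.

```python
import numpy as np

def normalized_transition(adj):
    # Row-normalized transition matrix with self-connections (assumption for this sketch).
    a = adj + np.eye(adj.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def ngcn_like_forward(adj, X, weights, attention_logits):
    """Combine one-layer ReLU(T^k X W_k) outputs over powers k with scalar attention."""
    T = normalized_transition(adj)
    att = np.exp(attention_logits) / np.exp(attention_logits).sum()
    out, Tk = 0.0, np.eye(adj.shape[0])
    for k, W in enumerate(weights):
        Tk = Tk @ T if k > 0 else Tk                         # T^0 = I, then T^1, T^2, ...
        out = out + att[k] * np.maximum(Tk @ X @ W, 0.0)
    return out

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
X = rng.random((6, 5))
weights = [rng.random((5, 3)) for _ in range(3)]             # one W per power k = 0, 1, 2
print(ngcn_like_forward(adj, X, weights, np.zeros(3)).shape)  # (6, 3)
```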
iclr_2018_BJQRKzbA-
HIERARCHICAL REPRESENTATIONS FOR EFFICIENT ARCHITECTURE SEARCH We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.
The fundamental contribution of the article is the explicit use of compositionality in the definition of the search space. Instead of merely defining an architecture as a Directed Acyclic Graph (DAG), with nodes corresponding to feature maps and edges to primitive operations, the approach in this paper introduces a hierarchy of architectures of this form. Each level of the hierarchy utilises the existing architectures in the preceding level as candidate operations to be applied in the edges of the DAG. As a result, this would allow the evolutionary search algorithm to design modules which might be then reused in different edges of the DAG corresponding to the final architecture, which is located at the top level in the hierarchy.

Manually designing novel neural architectures is a laborious, time-consuming process. Therefore, exploring new approaches to automatise this task is a problem of great relevance for the field. Overall, the paper is well-written, clear in its exposition and technically sound. While some hyperparameter and design choices could perhaps have been justified in greater detail, the paper is mostly self-contained and provides enough information to be reproducible.

The fundamental contribution of this article, when put into the context of the many recent publications on the topic of automatic neural architecture search, is the introduction of a hierarchy of architectures as a way to build the search space. Compared to existing work, this approach should emphasise modularity, making it easier for the evolutionary search algorithm to discover architectures that extensively reuse simpler blocks as part of the model. Exploiting compositionality in model design is not novel per se (e.g. [1,2]), but it is to the best of my knowledge the first explicit application of this idea in neural architecture search.

Nevertheless, while the idea behind the proposed approach is definitely interesting, I believe that the experimental results do not provide sufficiently compelling evidence that the resulting method substantially outperforms the non-hierarchical, flat representation of architectures used in other publications. In particular, the results highlighted in Figure 3 and Table 1 seem to indicate that the difference in performance between both paradigms is rather small. Moreover, the performance gap between the flat and hierarchical representations of the search space, as reported in Table 1, remains smaller than the performance gap between the best performing of the approaches proposed in this article and NASNet-A (Zoph et al., 2017), as reported in Tables 2 and 3.

Another concern I have is regarding the definition of the mutation operators in Section 3.1. While not explicitly stated, I assume that all sampling steps are performed uniformly at random (otherwise please clarify it). If that was indeed the case, there is a systematic asymmetry between the probability to add and remove an edge, making the former considerably more likely. This could bias the architectures towards fully-connected DAGs, as indeed seems to occur based on the motifs reported in Appendix A.

Finally, while the main motivation behind neural architecture search is to automatise the design of new models, the approach here presented introduces a non-negligible number of hyperparameters that could potentially have a considerable impact and need to be selected somehow. 
This includes, for instance, the number of levels in the hierarchy (L), the number of motifs at each level in the hierarchy (M_l), the number of nodes in each graph at each level in the hierarchy (| G^{(l)} |), as well as the set of primitive operations. I believe the paper would be substantially strengthened if the authors explored how robust the resulting approach is with respect to perturbations of these hyperparameters, and/or provided users with a principled approach to select reasonable values. References: [1] Grosse, Roger, et al. "Exploiting compositionality to explore a large space of model structures." UAI (2012). [2] Duvenaud, David, et al. "Structure discovery in nonparametric regression through compositional kernel search." ICML (2013).
iclr_2018_ryHM_fbA-
This paper proposes a new model for document embedding. Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize. We take a different route and use recent advances in language modeling to develop a convolutional neural network embedding model. This allows us to train deeper architectures that are fully parallelizable. Stacking layers together increases the receptive field, allowing each successive layer to model increasingly longer-range semantic dependencies within the document. Empirically we demonstrate superior results on two publicly available benchmarks. Full code will be released with the final version of this paper.
This paper proposes a new model for the general task of inducing document representations (embeddings). The approach uses a CNN architecture, distinguishing it from the majority of prior efforts on this problem, which have tended to use RNNs. This affords obvious computational advantages, as training may be parallelized. Overall, the model presented is relatively simple (a good thing, in my view) and it indeed seems fast. I can thus see potential practical uses of this CNN-based approach to document embedding in future work on language tasks. The training strategy, which entails selecting documents and then indexes within them stochastically, is also neat. Furthermore, the work is presented relatively clearly.

That said, my main concerns regarding this paper are that: (1) there's not much new here, and (2) the experimental setup may be flawed, in that it would seem model hyperparameters were tuned for the proposed approach but not for the baselines; I elaborate on these concerns below.

Specific comments:
- It's hard to tease out exactly what's new here: the various elements used are all well known. But perhaps there is merit in putting the specific pieces together. Essentially, the novelty is using a CNN rather than an RNN to induce document embeddings.
- In Section 4.1, the authors write that they report results for their model after running "parameter sweeps ..." -- I presume that these were performed on a validation set, but the authors should say so. In any case, there is a potential weakness here: were analogous parameter sweeps for this dataset performed for the baseline models? It would seem not, as the authors write "the IMDB training data using the default hyper-parameters" for skip-thought. Surely it is an unfair comparison if one model has been tuned to a given dataset while others use only the default hyper-parameters?
- Many important questions were left unaddressed in the experiments. For example, does one really need to use the gating mechanism borrowed from the Dauphin et al. paper? What happens if not? How big of an effect does the stochastic sampling of document indices have on the learned embeddings? Does the specific underlying CNN architecture affect results, and how much? None of these questions are explored.
- I was left a bit confused regarding how the v_{1:i-1} embedding is actually estimated; I think the details here are insufficient in the current presentation. The authors write that this is a "function of all words up to w_{i-1}". This would seem to imply that at test time prediction is not in fact parallelizable, no? Yet this seems to be one of the main arguments the authors make in favor of the model (in contrast to RNN-based methods). In fact, I think the authors are proposing to use the (aggregated) filter activation vectors (h^l(x)) in eq. 5, but for some reason this is not made explicit.

Minor comments:
- In Eq. 4, should the product be element-wise to realize the desired gating (as per the Dauphin paper)? This should be made explicit in the notation.
- At the bottom of page 3, the authors claim "Expanding the prediction to multiple words makes the problem more difficult since the only way to achieve that is by 'understanding' the preceding sequence." This claim should either be made more precise or removed. It is not clear exactly what is meant here, nor what evidence supports it.
- Commas are missing in a few places. For example on page 2, you probably want a comma after "in parallel" (before "significantly"); also after "parallelize" above "Approach".
- Page 4: "In contrast, our model addresses only requires" --> drop the "addresses".
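As background for the Eq. 4 question about the gating mechanism, here is a minimal PyTorch sketch of a gated convolution block in the style of Dauphin et al. (2017), with the product taken element-wise: h(X) = (X * W + b) ⊗ sigmoid(X * V + c). This is generic background rather than the paper's exact layer, and the causal left-padding is an assumption.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size - 1                       # pad, then trim the right side to stay causal
        self.conv_a = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv_b = nn.Conv1d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):                           # x: (batch, channels, length)
        length = x.size(-1)
        a = self.conv_a(x)[..., :length]            # keep only outputs that depend on positions <= t
        b = self.conv_b(x)[..., :length]
        return a * torch.sigmoid(b)                 # element-wise gating

x = torch.randn(2, 8, 20)
print(GatedConvBlock(8)(x).shape)                   # torch.Size([2, 8, 20])
```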
iclr_2018_B1bgpzZAZ
The task of Reading Comprehension with Multiple Choice Questions requires a human (or machine) to read a given {passage, question} pair and select one of the n given options. The current state-of-the-art model for this task first computes a query-aware representation for the passage and then selects the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of elimination and selection. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose ElimiNet, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {document, question} pair, and if so it tries to make the document representation orthogonal to this eliminated option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large-scale RACE dataset and show that it outperforms the current state-of-the-art model on 7 out of the 13 question types in this dataset. Further, we show that taking an ensemble of our elimination-selection based method with a selection based method gives us an improvement of 7% (relative) over the best reported performance on this dataset.
In this paper, a model is built for reading comprehension with multiple choices. The model consists of three modules: an encoder, an interaction module, and an elimination module. The major contributions are twofold: first, proposing the interesting option-elimination problem for multi-step reading comprehension; and second, proposing the elimination module, where an eliminate gate is used to select different orthogonal factors from the document representations. Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored.

The elimination module is interesting, but the usefulness of "elimination" is not well justified, for two reasons. First, the improvement of the proposed model over the previous state of the art is limited. Second, the model is built upon GAR up to the elimination module, and according to Table 1 this seems to indicate that the elimination module does not help significantly (0.4% improvement). In order to show the usefulness of the elimination module, the model should be built exactly on GAR with an additional elimination module (i.e., after removing the elimination module, the performance should be similar to GAR, not something significantly worse with 42.58% accuracy). Then we can explicitly compare the performance of GAR and GAR with the elimination module to tell how much the new module helps.

Other issues:
1) Is there any difference if one directly uses $x$ and $h^z$ instead of $x^e$ and $x^r$ to compute $\tilde{x}_i$? Even though the authors compute the orthogonal vectors, they are summed back together through gates very soon. It would be better to show how much "elimination" and "subtraction" affect the final performance, besides the effect of the subtraction gate.
2) A figure showing the model architecture and the corresponding QA process would better help the readers understand the proposed model.
3) $c_i$ on page 5 is not defined. What is the performance of only using $s_i$ for answer selection, or of replacing $x^L$ with $s_i$ in the score function?
4) It would be better to have experiments trained with different $n$ to show how multi-hop affects the final performance, besides the case study in Figure 3.

Minor issues:
1) In Eqn. (4), it would be better to use a vector as the input of the softmax.
2) It would be easier for discussion if the authors could assign numbers to every equation.
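To make the intuition of "ignoring the factor along the option vector" concrete, here is a small NumPy sketch of a gate-controlled orthogonal subtraction. This is only a schematic reading of the elimination step, not the paper's exact equations, and the scalar gate is an assumption (the model uses learned gates).

```python
import numpy as np

def eliminate(doc_repr, option_repr, gate):
    """Remove (a gated fraction of) the component of the document representation
    that lies along an eliminated option's vector."""
    proj = (doc_repr @ option_repr) / (option_repr @ option_repr) * option_repr
    orthogonal = doc_repr - proj                     # ignore the option's direction
    return gate * orthogonal + (1.0 - gate) * doc_repr

rng = np.random.default_rng(0)
doc, opt = rng.random(8), rng.random(8)
refined = eliminate(doc, opt, gate=0.9)
print(np.dot(eliminate(doc, opt, gate=1.0), opt))    # ~0: fully eliminated option direction
```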
iclr_2018_rJL6pz-CZ
Within-class variation in a high-dimensional dataset can be modeled as being on a low-dimensional manifold due to the constraints of the physical processes producing that variation (e.g., translation, illumination, etc.). We desire a method for learning a representation of the manifolds induced by identity-preserving transformations that can be used to increase robustness, reduce the training burden, and encourage interpretability in machine learning tasks. In particular, what is needed is a representation of the transformation manifold that can robustly capture the shape of the manifold from the input data, generate new points on the manifold, and extend transformations outside of the training domain without significantly increasing the error. Previous work has proposed algorithms to efficiently learn analytic operators (called transport operators) that define the process of transporting one data point on a manifold to another. The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data. The first method uses this representation in a novel randomized approach to transfer learning that employs the learned generative model to map out unseen regions of the data space. These results are shown through demonstrations of transfer learning in a data augmentation task for few-shot image classification. The second method uses transport operators for injecting specific transformations into new data examples, which allows for realistic image animation and informed data augmentation. These results are shown on stylized constructions using the classic swiss roll data structure and in demonstrations of transfer learning in a data augmentation task for few-shot image classification.
Overview: The paper aims to model non-linear, intrinsically low-dimensional structure in data by estimating "transport operators" that predict how points move along the manifold. This is an old idea, and the stated contribution of the paper is: "The main contribution of this paper is to show that the manifold representation learned in the transport operators is valuable both as a probabilistic model to improve general machine learning tasks as well as for performing transfer learning in classification tasks." The paper provides nice illustrative experiments arguing why transport operators may be a useful modeling tool, but does not go beyond illustrative experiments. While I follow the intuitions behind transport operators, I am doubtful whether they will generalize beyond very simple manifold structures (see detailed comments below).

Quality: The paper is well written and fairly easy to follow. In particular, I appreciate that the authors make no attempt to overclaim contributions. From a methodology point of view, the paper has limited novelty (transport operators, and learning thereof, have been studied elsewhere), but there are some technical insights (likelihood model, use in data augmentation). Since the provided experiments are mostly illustrations, I would argue that the significance of the paper is limited. I'd say that to really convince a broader audience that this old idea is worth revisiting, the work must go beyond illustrations and apply to a real data problem.

Detailed Comments and Questions:
*) Equation 1 of the paper describes the key dynamics of the applied transport operators. Basically, the paper assumes that the underlying data manifold is locally governed by a linear differential equation. This is a very suitable assumption, e.g., for the swiss roll data set, but it is unclear to this reader why it is a suitable assumption beyond such toy data. I would very much appreciate a detailed discussion of when this is a suitable modeling choice and when it is not. My intuition is that this is mostly a suitable model when the data manifold appears due to simple transformations (e.g., rotations) of data. This is also exactly the type of data considered in the paper.
*) In Eq. 3, should it be "expm" instead of "exp"?
*) The first two paragraphs of Sec. 2 are background material, whereas paragraph 3 and beyond describe material that is key to the paper. I would recommend introducing a \subsection (or something like it) to make this more clear.
*) The idea of working with transformations of data rather than the actual data is the cornerstone of Ulf Grenander's renowned "Pattern Theory". A citation to this seminal work would be appropriate.
*) In the first paragraph of the introduction, links are drawn to the neuroscience literature; it would be appropriate to cite a suitable publication.

Pros(+) & Cons(-):
+ Well written.
+ Good illustrative experiments.
- Real-life experiments are lacking.
- Limited methodology contribution.
- The assumed dynamics might be too simplistic (at least a discussion of this is missing).

For the AC: The submitted paper acknowledges several grants (including grant numbers), which can directly be tied to the authors' identity. This may be a violation of the double-blind review policy. I did not use this information to determine the authors' identity, though, so this review is still double blind.

Post-rebuttal comments: The paper has improved with the incorporated revisions, but my main concerns remain. I find the Swiss Roll / rotated-USPS examples to be too contrived, as the dynamics are exactly tailored to the linear ODE assumption. These are examples where the model assumptions are perfect. What is unclear is how the model behaves when the linear ODE assumption is not-quite-correct-but-also-not-totally-incorrect, i.e., how the model behaves in real life. I didn't get that from the newly added experiment. So I'll keep my rating as is.
iclr_2018_S1Q79heRW
Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. Using simple entailment-based models of the semantics of words in text (distributional semantics), we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.
This work proposes to learn word vectors that are intended to specifically model the lexical entailment relationship. This is achieved in an unsupervised manner from unstructured data, through an approach heavily influenced by recent work by Henderson and Popa, which "reinterprets word2vec" by modeling distributions over discrete latent "pseudo-phrase" vectors. That is, instead of using two vectors per word, as in word2vec, a latent representation is introduced that models the joint properties of the target and context words. While Henderson and Popa represent the latent vector as the evidence for the target and context, or the likelihood, this work suggests to represent it based on the posterior distribution instead. The resultant representations are evaluated on Weeds et al.'s (2014) version of BLESS, as well as the full BLESS dataset, where they do better than the original. The paper is confusingly written, fails to mention a lot of related work, has a weak evaluation where it doesn't compare to related systems, and I feel that it would benefit from "toning down". Hence, I do not recommend it for acceptance. In more detail: 1. The idea behind Henderson and Popa's model, as well as the suggested modification, should be easy to explain, but I really had to struggle to make sense of it. This work relies very heavily on that paper, and would be better off if it was more standalone. I think part of the confusion stems from using y for the latent representation but not specifying whether it is a word or latent representation in Equation 1 - that only becomes obvious later. The exposition clearly needs more work, and more precise technical writing. 2. There is a lot of related work around word embeddings that is not mentioned, both on word2vec-style representation learning (e.g. it would be useful to relate this more to word2vec and what it learns, as in Omer Levy's work on "interpreting" word2vec, rather than reinterpreting) and word embeddings on hypernymy detection and lexical entailment (see e.g. Stephen Roller's thesis for references). 3. There has been a lot of work on the Weeds BLESS dataset that is not mentioned, or compared against, including unsupervised approaches (e.g. Levy's work, Santus's work, Kiela's work, Roller's work, etc.), that perform better than the numbers in Table 1. There are many other datasets that measure lexical entailment, none of which are evaluated on (apart from the original BLESS set, which is mentioned in passing). It would make sense to show that the method works on more than one dataset, and to do a thorough comparison against other work; especially given that: 4. The tone of the work appears to imply that word2vec was wrong and needs to be reinterpreted: the work leads to "unprecedented results" (not true), claims to be a completely novel method for inducing word representations (together with LSA, BOW and Word2Vec, third paragraph; not true), and suggests it has found "the best way to extract information about the semantics of a word from this model" (7th paragraph; not true). This, together with the "reinterpretation of word2vec" and the proposed "new distributional semantic models" almost makes it hard for me to take the work seriously.
iclr_2018_H1a37GWCZ
We present a new unsupervised method for learning general-purpose sentence embeddings. Unlike existing methods which rely on local contexts, such as words inside the sentence or immediately neighboring sentences, our method selects, for each target sentence, influential sentences in the entire document based on a document structure. We identify a dependency structure of sentences using metadata or text styles. Furthermore, we propose a novel out-of-vocabulary word handling technique to model many domain-specific terms, which were mostly discarded by existing sentence embedding methods. We validate our model on several tasks showing 30% precision improvement in coreference resolution in a technical domain, and 7.5% accuracy increase in paraphrase detection compared to baselines.
This paper extends the idea of forming an unsupervised representation of sentences used in the SkipThought approach by using a broader set of evidence for forming the representation of a sentence. Rather than simply encoding the preceding sentence and then generating the next sentence, the model suggests that a whole bunch of related "sentences" could be encoded, including document title, section title, footnotes, hyperlinked sentences. This is a valid good idea and indeed improves results. The other main new and potentially useful idea is a scheme for handling OOVs in this context where they are represented by positional placeholder variables. This also seems helpful. The paper is able to show markedly better results on paraphrase detection than SkipThought and some interesting and perhaps good results on domain-specific coreference resolution. On the negative side, the model of the paper isn't very excitingly different. It's a fairly straightforward extension of the earlier SkipThought model to a situation where you have multiple generators of related text. There isn't a clear evaluation that shows the utility of the added OOV Handler, since the results with and without that handling aren't comparable. The OOV Handler is also related to positional encoding ideas that have been used in NMT but aren't referenced. And the coreference experiment isn't that clearly described nor necessarily that meaningful. Finally, the finding of dependencies between sentences for the multiple generators is done in a rule-based fashion, which is okay and works, but not super neural and exciting. Other comments: - p.3. Another related sentence you could possibly use is the first sentence of the paragraph, related to all other sentences? (Works if people write paragraphs with a "topic sentence" at the beginning.) - p.5. Notation seemed a bit non-standard. I thought most people use \sigma for a sigmoid (makes sense, right?), whereas you use it for a softmax and use calligraphic S for a sigmoid.... - p.5. Section 5 suggests the standard way to do OOVs is to average all word vectors. That's one well-known way, but hardly the only way. A trained UNK encoding and use of things like character-level encoders is also quite common. - p.6. The basic idea of the OOV encoder seems a good one. In domain specific contexts, you want to be able to refer to and re-use words that appear in related sentences, since they are likely to appear again and you want to be able to generate them. A weakness of this section however is that it makes no reference to related work whatsoever. It seems like there's quite a bit of related work. The idea of using a positional encoding so that you can generate rare words by position has previously been used in NMT, e.g. Luong et al. (Google brain) (ACL 2015). More generally, a now quite common way to handle this problem is to use "pointing" or "copying", which appears in a number of papers (e.g., Vinyals et al. 2015) and might also have been used here and might be expected to work too. - p.7. Why such an old Wikipedia dump? Most people use a more recent one! - p.7. The paraphrase results seem good and prove the idea works. It's a shame they don't let you see the usefulness of the OOV model. - p.8. For various reasons, the coreference results seem less useful than they could have been, but they do show some value for the technique in the area of domain-specific coreference.
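The positional-placeholder OOV handling discussed above can be illustrated with a small sketch; the handler below is only a guess at the general mechanism (distinct OOV words mapped to shared positional tokens so they can be generated by position), not the paper's exact procedure.

```python
def replace_oov_with_placeholders(tokens, vocab, max_placeholders=10):
    """Map each distinct OOV word to a positional placeholder token (<oov_0>, <oov_1>, ...)
    so a decoder can still generate/copy it by position rather than dropping it."""
    mapping = {}
    out = []
    for tok in tokens:
        if tok in vocab:
            out.append(tok)
        else:
            if tok not in mapping and len(mapping) < max_placeholders:
                mapping[tok] = f"<oov_{len(mapping)}>"
            out.append(mapping.get(tok, "<unk>"))
    return out, mapping

vocab = {"the", "server", "restarted", "after", "update"}
sent = ["the", "nginx", "server", "restarted", "after", "kb4023057", "update"]
print(replace_oov_with_placeholders(sent, vocab))
# (['the', '<oov_0>', 'server', 'restarted', 'after', '<oov_1>', 'update'],
#  {'nginx': '<oov_0>', 'kb4023057': '<oov_1>'})
```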
iclr_2018_HylgYB3pZ
LINEARLY CONSTRAINED WEIGHTS: RESOLVING THE VANISHING GRADIENT PROBLEM BY REDUCING ANGLE BIAS In this paper, we first identify angle bias, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron (MLP) with sigmoid activation functions. We then propose linearly constrained weights (LCW) to reduce the angle bias in a neural network, so as to train the network under the constraints that the sum of the elements of each weight vector is zero. A reparameterization technique is presented to efficiently train a model with LCW by embedding the constraints on weight vectors into the structure of the network. Interestingly, batch normalization (Ioffe & Szegedy, 2015) can be viewed as a mechanism to correct angle bias. Preliminary experiments show that LCW helps train a 100-layered MLP more efficiently than does batch normalization.
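For illustration, here is a minimal sketch of a layer whose weight vectors satisfy the zero-sum constraint by construction. The paper presents its own reparameterization for embedding the constraint into the network structure; the mean-subtraction trick below is an assumption used only to show the constraint itself, not the authors' method.

```python
import torch
import torch.nn as nn

class LCWLinear(nn.Module):
    """Linear layer whose weight vectors are constrained to sum to zero
    (sketch: subtract each row's mean from an unconstrained parameter,
    so the zero-sum constraint holds by construction)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.v - self.v.mean(dim=1, keepdim=True)  # each weight vector now sums to zero
        return x @ w.t() + self.bias

layer = LCWLinear(4, 3)
print(layer(torch.randn(2, 4)).shape)                                   # torch.Size([2, 3])
print((layer.v - layer.v.mean(dim=1, keepdim=True)).sum(dim=1))         # ~0 for every row
```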
This paper studies the impact of angle bias on learning deep neural networks, where angle bias is defined to be the expected value of the inner product of a random vector (e.g., an activation vector) and a given vector (e.g., a weight vector). The angle bias is non-zero as long as the random vector is non-zero in expectation and the given vector is non-zero. This suggests that some of the units in a deep neural network have large values (either positive or negative) regardless of the input, which in turn suggests vanishing gradient. The proposed solution to angle bias is to place a linear constraint such that the sum of the weights becomes zero. Although this does not rule out angle bias in general, it does so for the very special case where the expected value of the random vector is a vector consisting of a common value. Nevertheless, numerical experiments suggest that the proposed approach can effectively reduce angle bias and improve the accuracy for training data in the CIFAR-10 task. Test accuracy is not improved, however. Overall, this paper introduces an interesting phenomenon that is worth studying to gain insights into how to train deep neural networks, but the results are rather preliminary both on theory and experiments. On the theoretical side, the linearly constrained weights are only shown to work for a very special case. There can be many other approaches to mitigate the impact of angle bias. For example, how about scaling each variable in a way that the mean becomes zero, instead of scaling it into [-1,+1] as is done in the experiments? When the mean of the input is zero, there is no angle bias in the first layer. Also, what if we include the bias term so that b + w a is the preactivation value? On the experimental side, it has been shown that linearly constrained weights can mitigate the impact of angle bias on vanishing gradient and can reduce the training error, but the test error is unfortunately increased for the particular task with the particular dataset in the experiments. It would be desirable to identify specific tasks and datasets for which the proposed approach outperforms baselines. It is intuitively expected that the proposed approach has some merit in some domains, but it is unclear exactly when and where it is. Minor comments: In Section 2.2, is Layer 1 the input layer or the next?
iclr_2018_rkYTTf-AZ
Published as a conference paper at ICLR 2018 UNSUPERVISED MACHINE TRANSLATION USING MONOLINGUAL CORPORA ONLY Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.
The authors present an approach for unsupervised MT which uses a weighted loss function containing 3 components: (i) self-reconstruction, (ii) cross reconstruction, and (iii) adversarial loss. The results are interesting (but perhaps less interesting than what is hinted in the abstract). 1) In the abstract the authors mention that they achieve a BLEU score of 32.8 but omit the fact that this is only on the Multi30K dataset and not on the more standard WMT datasets. At first glance, most people from the field would assume that this is on the WMT dataset. I request the authors to explicitly mention this in the abstract itself (there is clearly space and I don't see why this should be omitted). 2) In section 2.3, the authors talk about the Noise Model which is inspired by the standard Denoising Autoencoder setup. While I understand the robustness argument in the case of AEs, I am not convinced that the same applies to languages. Such random permutations can often completely alter the meaning of the sentence. The ablation test seems to suggest that this process helps. I read another paper which suggests that this noise does not help (which intuitively makes sense). I would like the authors to comment on this (of course, I am not asking you to compare with the other paper but I am just saying that I have read contradicting observations - one which seems intuitive and the other does not). 3) How were the 3 lambdas in Equation 3 selected? What ranges did you consider? The three loss terms seem to have very different ranges. How did you account for that? 4) Clarification: In section 2.5 what exactly do you mean by "as long as the two monolingual corpora exhibit strong structure in feature space." How do you quantify this? 5) In section 4.1, can you please mention the exact number of sentences that you sampled from WMT'14. You mention that you selected sentences from 15M random pairs, but how many did you select? The caption of one of the figures mentions that there were 10M sentences. Just want to confirm this. 6) The improvements are much better on the Multi30k dataset. I guess this is because this dataset has smaller sentences with a smaller vocabulary. Can you provide a table comparing the average number of sentences and vocabulary size of the two datasets (Multi30k and WMT)? 7) The ablation results are provided only for the Multi30k dataset. Can you provide similar results for the WMT dataset? Perhaps this would help in answering my query in point (2) above. 8) Can you also check the performance of a PBSMT system trained on 100K parallel sentences? Although NMT outperforms PBSMT when the data size is large, PBSMT might still be better suited for low-resource settings. 9) There are some missing citations (already pointed out by others in the forum). Please add those. +++++++++++++++++++++++ I have noted the clarifications posted by the authors. I still have concerns about a couple of things. For example, I am still not convinced about the justification given for word order. I understand that empirically it works better but I don't get the intuition. Similarly, I don't get the argument about "strong structure in feature space". This is just a conjecture and it is very hard to measure it. I would request the authors not to emphasize it or to give a different, more grounded intuition. I do acknowledge the efforts put in by the authors to address some of my comments and for that I would like to change my rating a bit.
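For reference, the three-term objective discussed in the opening sentence can be sketched as follows; the weights, function names, and example values are placeholders rather than the authors' settings for Equation 3.

```python
# Illustrative composition of the three loss terms named above; lambdas are assumed, not the paper's.
def total_loss(l_auto, l_cross, l_adv, lambda_auto=1.0, lambda_cross=1.0, lambda_adv=1.0):
    """l_auto : denoising self-reconstruction loss (lang A -> lang A, lang B -> lang B)
       l_cross: cross-domain (back-translation style) reconstruction loss
       l_adv  : adversarial loss pushing the two latent distributions together"""
    return lambda_auto * l_auto + lambda_cross * l_cross + lambda_adv * l_adv

print(total_loss(0.8, 1.2, 0.3))  # -> 2.3 with the default (assumed) weights
```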
iclr_2018_rJiaRbk0-
Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to a certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface.
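To illustrate the gate-pushing idea, here is a sketch of a binary Gumbel-Softmax (Concrete) gate; the temperature, the straight-through variant, and how the gate would be wired into the LSTM are assumptions, not the authors' exact formulation.

```python
import torch

def gumbel_sigmoid_gate(logits, temperature=0.5, hard=False):
    """Sample a gate value pushed toward 0/1 using the binary Gumbel-Softmax
    (binary Concrete) relaxation; a smaller temperature pushes values closer to 0/1."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)            # Logistic(0, 1) noise
    y_soft = torch.sigmoid((logits + noise) / temperature)
    if hard:                                           # straight-through: discrete forward, soft backward
        y_hard = (y_soft > 0.5).float()
        return y_hard + (y_soft - y_soft.detach())
    return y_soft

torch.manual_seed(0)
print(gumbel_sigmoid_gate(torch.tensor([2.0, -1.0, 0.1])))
```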
This paper proposes a new "gate" function for LSTM that pushes the values of the gates towards 0 or 1. The motivation is that a flat region of the loss surface is likely to generalize well. The experimental results show that it is comparable to or better than a vanilla LSTM and much more robust to low-precision approximation and low-rank approximation. In section 3.2, the paper claimed that using a smaller temperature cannot guarantee the outputs to be close to the boundary. Is there any experimental evidence to show it's not working? It also claimed that pushing the output gate to 0/1 will drop the performance. This is actually quite interesting because there are a bunch of papers claiming the output gate is not important for language modeling, e.g. https://openreview.net/pdf?id=HJOQ7MgAW . In the sensitivity analysis, what happens if rounding / low-rank approximation is applied to all the parameters? How does this approach compare to binarynet https://arxiv.org/abs/1602.02830, applying the same idea but only to the forget gate / input gate? Also, can we apply this idea to the binarynet? Overall, I think it's an interesting paper but I feel it should compare with some simple baselines that binarize the gate function. Updates: Thanks a lot for all the clarifications. They do improve the paper quality, but I still think it's higher than "6" but lower than "7". To me, improving ppl from "52.8" to "52.1" isn't very significant. For WMT, it improves on DE->EN but not on EN->DE (although it improves both over the authors' own baseline). So I'm not fully convinced this approach could improve the generalization. But I feel this work can have many other applications such as "binarynet".
iclr_2018_HyKZyYlRZ
Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.
The paper presents a multi-task, multi-domain model based on deep neural networks. The proposed model is able to take inputs from various domains (image, text, speech) and solves multiple tasks, such as image captioning, machine translation or speech recognition. The proposed model is composed of several features learning blocks (one for each input type) and of an encoder and an auto-regressive decoder, which are domain-agnostic. The model is evaluated on 8 different tasks and is compared with a model trained separately on each task, showing improvements on each task. The paper is well written and easy to follow. The contributions of the paper are novel and significant. The approach of having one model able to perform well on completely different tasks and type of input is very interesting and inspiring. The experiments clearly show the viability of the approach and give interesting insights. This is surely an important step towards more general deep learning models. Comments: * In the introduction where the 8 databases are presented, the tasks should also be explained clearly, as several domains are involved and the reader might not be familiar with the task linked to each database. Moreover, some databases could be used for different tasks, such as WSJ or ImageNet. * The training procedure of the model is not explained in the paper. What is the cost function and what is the strategy to train on multiple tasks ? The paper should at least outline the strategy. * The experiments are sufficient to demonstrate the viability of the approach, but the experimental setup is not clear. Specifically, there is an issue about the speech recognition part of the experiment. It is not clear what the task exactly is: continuous speech recognition, isolated word recognition ? The metrics used in Table 1 are also not clear, they should be explained in the text. Also, if the task is continuous speech recognition, the WER (word error rate) metric should be used. Information about the detailed setup is also lacking, specifically which test and development sets are used (the WSJ corpus has several sets). * Using raw waveforms as audio modality is very interesting, but this approach is not standard for speech recognition, some references should be provided, such as: P. Golik, Z. Tuske, R. Schluter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26–30. D. Palaz, M. Magimai Doss and R. Collobert, (2015, April). Convolutional neural networks-based continuous speech recognition using raw speech signal. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4295-4299). IEEE. T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015. Revised Review: The main idea of the paper is very interesting and the work presented is impressive. However, I tend to agree with Reviewer2, as a more comprehensive analysis should be presented to show that the network is not simply multiplexing tasks. The experiments are interesting, except for the WSJ speech task, which is almost meaningless. Indeed, it is not clear what the network has learned given the metrics presented, as the WER on WSJ should be around 5% for speech recognition. 
I thus suggest either dropping the speech experiment or modifying the network to do continuous speech recognition. A simpler speech task such as keyword spotting could also be investigated.
iclr_2018_ry831QWAb
In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization. The technique essentially contains two consecutive steps in each iteration: 1) computing and normalizing each block (layer) of the mini-batch stochastic gradient; 2) selecting an appropriate step size to update the decision variable (parameter) towards the negative of the block-normalized gradient. We conduct extensive empirical studies on various non-convex neural network optimization problems, including multilayer perceptrons, convolutional neural networks and recurrent neural networks. The results indicate the block-normalized gradient can help accelerate the training of neural networks. In particular, we observe that the normalized gradient methods having constant step size with occasional decay, such as SGD with momentum, have better performance in the deep convolutional neural networks, while those with adaptive step sizes, such as Adam, perform better in recurrent neural networks. Besides, we also observe this line of methods can lead to solutions with better generalization properties, which is confirmed by the performance improvement over strong baselines.
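A minimal sketch of the block (per-layer) normalization step is given below; it shows only the plain-SGD variant, whereas the paper pairs the normalized direction with several step-size rules (momentum, Adam-style), which are not reproduced here.

```python
import torch

def block_normalized_sgd_step(params, lr=0.1, eps=1e-8):
    """One step of block (per-layer) normalized gradient descent:
    each parameter tensor's gradient is normalized to unit l2 norm before the update."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            g = p.grad
            p -= lr * g / (g.norm() + eps)

# Usage sketch (model and loss assumed to exist):
#   loss.backward()
#   block_normalized_sgd_step(model.parameters(), lr=0.1)
```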
This paper proposes a family of first-order stochastic optimization schemes based on (1) normalizing (batches of) stochastic gradients and (2) choosing from a step size updating scheme. The authors argue that iterative first-order optimization algorithms can be interpreted as a choice of an update direction and a step size, so they suggest that one should always normalize the gradient when computing the direction and then choose a step size using the normalized gradient. The presentation in the paper is clear, and the exposition is easy to follow. The authors also do a good job of presenting related work and putting their ideas in the proper context. The authors also test their proposed method on many datasets, which is appreciated. However, I didn't find the main idea of the paper to be particularly compelling. The proposed technique is reasonable on its own, but the empirical results do not come with any measure of statistical significance. The authors also do not analyze the sensitivity of the different optimization algorithms to hyperparameter choice, opting to only use the default. Moreover, some algorithms were used as benchmarks on some datasets but not others. For a primarily empirical paper, every state-of-the-art algorithm should be used as a point of comparison on every dataset considered. These factors altogether render the experiments uninformative in comparing the proposed suite of algorithms to state-of-the-art methods. The theoretical result in the convex setting is also not data-dependent, despite the fact that it is the normalized gradient version of AdaGrad, which does come with a data-dependent convergence guarantee. Given the suite of optimization algorithms in the literature and in use today, any new optimization framework should either demonstrate improved (or at least matching) guarantees in some common (e.g. convex) settings or definitively outperform state-of-the-art methods on problems that are of widespread interest. Unfortunately, this paper does neither. Because of these points, I do not feel the quality, originality, and significance of the work to be high enough to merit acceptance. Some specific comments: p. 2: "adaptive feature-dependent step size has attracted lots of attention". When you apply feature-dependent step sizes, you are effectively changing the direction of the gradient, so your meta learning formulation, as posed, doesn't make as much sense. p. 2: "we hope the resulting methods can benefit from both techniques". What reason do you have to hope for this? Why should they be complementary? Existing optimization techniques are based on careful design and coupling of gradients or surrogate gradients, with specific learning rate schedules. Arbitrarily mixing the two doesn't seem to be theoretically well-motivated. p. 2: "numerical results shows that normalized gradient always helps to improve the performance of the original methods when the network structure is deep". It would be great to provide some intuition for this. p. 2: "we also provide a convergence proof under this framework when the problem is convex and the stepsize is adaptive". The result that you prove guarantees a \Theta(\sqrt{T}) convergence rate. On the other hand, the AdaGrad algorithm guarantees a data-dependent bound that is O(\sqrt{T}) but can also be much smaller. This suggests that there is no theoretical motivation to use NGD with an adaptive step size over AdaGrad. p. 2-3: "NGD can find a \eps-optimal solution....when the objective function is quasi-convex.
....extended NGD for upper semi-continuous quasi-convex objective functions...". This seems like a typo. How are results that go from quasi-convex to upper semi-continuous quasi-convex an extension? p. 3: There should be a reference for RMSProp. p. 3: "where each block of parameters x^i can be viewed as parameters associated to the ith layer in the network". Why is layer parametrization (and later on normalization) a good way idea? There should be either a reference or an explanation. p. 4: "x=(x_1, x_2, \ldots, x_B)". Should these subscripts be superscripts? p. 4: "For all the algorithms, we use their default settings." This seems insufficient for an empirical paper, since most problems often involve some amount of hyperparameter tuning. How sensitive is each method to the choice of hyperparameters? What about the impact of initialization? p. 4-8: None of the experimental results have error bars or any measure of statistical significance. p. 5: "NG... is a variant of the NG_{UNIT} method". This method is never motivated. p. 5-6: Why are SGD and Adam used for MNIST but not on CIFAR? p. 5: "we chose the best heyper-paerameter from the 56 layer residual network." Apart from the typos, are these parameters chosen from the training set or the test set? p. 6: Why isn't Adam tested on ImageNet? POST AUTHOR RESPONSE: After reading the author response and taking into account the fact that the authors have spent the time to add more experiments and clarify their theoretical result, I have decided to upgrade my score from a 3 to a 4. However, I still do not feel that the paper is up to the standards of the conference.
iclr_2018_HkwZSG-CZ
Published as a conference paper at ICLR 2018 BREAKING THE SOFTMAX BOTTLENECK: A HIGH-RANK RNN LANGUAGE MODEL We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity.
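For readers unfamiliar with the construction, here is a sketch of a Mixture-of-Softmaxes output layer in the spirit of the abstract; the dimensions, the tanh nonlinearity, and the weight names are illustrative rather than the authors' exact parameterization.

```python
import torch
import torch.nn.functional as F

def mixture_of_softmaxes(h, W_prior, W_latent, W_vocab, n_mix):
    """Mixture-of-Softmaxes output layer (a sketch of the high-rank idea):
    h        : (batch, d) context vectors from the RNN
    W_prior  : (d, n_mix)      -> mixture weights pi_k(context)
    W_latent : (d, n_mix * d)  -> one latent context vector per mixture component
    W_vocab  : (d, vocab)      -> shared output word embeddings
    Returns (batch, vocab) word probabilities, mixed in probability space."""
    batch, d = h.shape
    pi = F.softmax(h @ W_prior, dim=-1)                      # (batch, n_mix)
    hk = torch.tanh(h @ W_latent).view(batch, n_mix, d)      # (batch, n_mix, d)
    probs_k = F.softmax(hk @ W_vocab, dim=-1)                # (batch, n_mix, vocab)
    return (pi.unsqueeze(-1) * probs_k).sum(dim=1)

d, V, K = 8, 20, 3
h = torch.randn(4, d)
out = mixture_of_softmaxes(h, torch.randn(d, K), torch.randn(d, K * d), torch.randn(d, V), K)
print(out.shape, out.sum(dim=-1))   # (4, 20), each row sums to 1
```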
The authors argue in this paper that due to the limited rank of the context-to-vocabulary logit matrix in the currently used version of the softmax output layer, it is not able to capture the full complexity of language. As a result, they propose to use a mixture of softmax output layers instead where the mixing probabilities are context-dependent, which allows to obtain a full rank logit matrix in complexity linear in the number of mixture components (here 15). This leads to improvements in the word-level perplexities of the PTB and wikitext2 data sets, and Switchboard BLEU scores. The question of the expressiveness of the softmax layer, as well as its suitability for word-level prediction, is indeed an important one which has received too little attention. This makes a lot of the questions asked in this paper extremely relevant to the field. However, it is unclear that the rank of the logit matrix is the right quantity to consider. For example, it is easy to describe a rank D NxM matrix where up to 2^D lines have max values at different indices. Further, the first two "observations" in Section 2.2 would be more accurately described as "intuitions" of the authors. As they write themselves "there is no evidence showing that semantic meanings are fully linearly correlated." Why then try to link "meanings" to basis vectors for the rows of A? To be clear, the proposed model is undoubtedly more expressive than a regular softmax, and although it does come at a substantial computational cost (a back-of-the envelope calculation tells us that computing 15 components of 280d MoS takes the same number of operations as one with dimension 1084 = sqrt (280*280*15)), it apparently manages not to drastically increase overfitting, which is significant. Unfortunately, this is only tested on relatively small data sets, up to 2M tokens and a vocabulary of size 30K for language modeling. They do constitute a good starting place to test a model, but given the importance of regularization on those specific tasks, it is difficult to predict how the MoS would behave if more training data were available, and if one could e.g. simply try a 1084 dimension embedding for the softmax without having to worry about overfitting. Another important missing experiment would consist in varying the number of mixture components (this could very well be done on WikiText2). This could help validate the hypothesis: how does the estimated rank vary with the number of components? How about the performance and pairwise KL divergence? This paper offers a promising direction for language modeling research, but would require more justification, or at least a more developed experimental section. Pros: - Important starting question - Thought-provoking approach - Experimental gains on small data sets Cons: - The link between the intuition and reality of the gains is not obvious - Experiments limited to small data sets, some obvious questions remain
iclr_2018_Bya8fGWAZ
We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained in a reinforcement learning fashion to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We evaluate on configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes. Furthermore, we show that the module enables to learn to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems.
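As background for the review that follows, here is a sketch of the VIN-style value-iteration recurrence that VProp builds on; the specific propagation rules of VProp/MVProp are not reproduced, and all shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def value_iteration_module(reward, trans_kernel, k_iters=20):
    """VIN-style differentiable value iteration (the building block VProp extends):
    reward       : (batch, 1, H, W) predicted reward map
    trans_kernel : (n_actions, 2, 3, 3) conv weights mixing reward and value
    Each iteration: q = conv([reward; v]), v = max_a q."""
    v = torch.zeros_like(reward)
    for _ in range(k_iters):
        q = F.conv2d(torch.cat([reward, v], dim=1), trans_kernel, padding=1)  # (batch, A, H, W)
        v = q.max(dim=1, keepdim=True).values
    return v

r = torch.randn(1, 1, 8, 8)
w = torch.randn(4, 2, 3, 3) * 0.1
print(value_iteration_module(r, w).shape)   # torch.Size([1, 1, 8, 8])
```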
ORIGINALITY & SIGNIFICANCE The authors build upon value iteration networks: the idea that the value function can be computed efficiently from rewards and transitions using a dedicated convolutional network. The authors point out that the original "value iteration network" (Tamar 2016) did not handle non-stationary dynamics models or variable size problems well and propose a new formulation to extend the model to this case which they call a value propagation network. It seems useful and practical to compute value iteration explicitly as this will propagate values for us without having to learn the propagated form through extensive gradient update steps. Extending to the scenario of non-stationary dynamics is important to make the idea applicable to common problems. The work is therefore original and significant. The algorithm is evaluated on the original obstacle grids from Tamar 2016 and larger grids generated to test scalability. The authors' VProp and MVProp are able to solve the grids with much higher reliability at the end of training and converge much faster. The M in MVProp in particular seems to be very useful in scaling up to the large grids. The authors also show that the algorithm handles non-stationary dynamics in an avalanche task where obstacles can fall over time. QUALITY The symbol d_{rew} is never defined — what does "rew" stand for? It appears to be the number of latent convolutional filters or channels generated by the state embedding network. Section 2.2 Sentence 2: The final layer representing the encoding is given as R^{d_rew x d_x x d_y}. Based on the description in the first paragraph of section 2, it sounds like d_rew might be the number of channels or filters in the last convolutional layer. In equation 1, it wasn't obvious to me that the expression max_a q_{ij}^{k-1} q^{k} corresponds to an actual operation? The h( \Phi( x ), v^{k-1} ) sort of makes sense … value is calculated with respect to only the observation of the maze obstacles but the policy \pi is calculated with respect to the joint observation and agent state. The expression h_{aid} ( \phi(o), v ) = < Wa, [ \phi(o) ; v ] > + b makes sense and reminds me of the Value Iteration network work where we take the previous value function, combine it with the reward function and use convolution to compute the expectation (the weights Wa encode the effect of transitions). I gather the tensor Wa \in R^{|A| x (d_{rew} x d_x x d_y)} both converts the feature embedding \phi(o) to rewards and represents the transition / propagation of reward across states due to transitions and discounts at the same time? I didn't understand the r^in, r^out representation in section 4.1. These are given by the domain? I did get the overall idea of efficiently creating a local value function in the neighborhood of the current state and passing this to the policy so that it can make a local decision. A bit more detail defining terms, explaining their intuitive role and how the output of one module feeds into the next would be helpful. POST REVISION COMMENTS: - I didn't reread the whole thing - just used the diff tool. - It looks like the typos in the equations got fixed - The new phrase "enables to learn to plan" seems pretty awkward
iclr_2018_SJ1nzBeA-
MULTI-TASK LEARNING FOR DOCUMENT RANKING AND QUERY SUGGESTION We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search. It consists of two major components, a document ranker and a query recommender. Document ranker combines current query and session information and compares the combined representation with document representation to rank the documents. Query recommender tracks users' query reformulation sequence considering all previous in-session queries using a sequence to sequence approach. As both tasks are driven by the users' underlying search intent, we perform joint learning of these two components through session recurrence, which encodes search context and intent. Extensive comparisons against state-of-the-art document ranking and query suggestion algorithms are performed on the public AOL search log, and the promising results endorse the effectiveness of the joint learning framework.
The work is interesting and novel. The novelty lies not in the methods used (existing methods are used), but in the way these methods are combined to solve two problems (that so far have been treated separately in IR) simultaneously. The fitness of the proposed architecture and methodological choices to the task at hand is sufficiently argued. The experimental evaluation is not the strongest, in terms of datasets and evaluation measures. While I understand why the AOL dataset was used, the document ranking experiments should also include runs on any of the conventional TREC datasets of documents, queries and actual (not simulated) relevance assessments. Simulating document relevance from clicks is a good enough approximation, but why not also use datasets with real human relevance assessments, especially since so many of them exist and are so easy to access? When evaluating ranking, MAP and NDCG are indeed two popular measures. But the choice of NDCG@1,3,10 seems a bit ad hoc. Why not NDCG@5? Furthermore, as the aim seems to be to assess early precision, why not also report MRR? The paper reports that the M-NSRF query suggestion method outperforms all baselines. This is not true. Table 2 shows that M-NSRF is best for BLEU-1/2, but not for BLEU-3/4. Three final points: - Out of the contributions enumerated at the end of Section 1, only the novel model and the code & data release are contributions. The rigorous comparison to the state of the art and its detailed analysis are the necessary evaluation parts of any empirical paper. - The conclusion states that this work provides useful intuitions about the advantages of multi-task learning involving deep neural networks for IR tasks. What are these? Where were they discussed? They should be outlined here, or referred to somehow. - Although the writing is coherent, there are a couple of recurrent English language mistakes (e.g. missing articles). The paper should be proofread and corrected.
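Since the review asks about NDCG cutoffs and MRR, a small reference sketch of both metrics is included below; it uses the linear-gain DCG variant (some implementations use 2^rel - 1 instead), which is an assumption.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """relevances: graded relevance labels of the ranked documents, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = (rel * discounts).sum()
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = (ideal * discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0

def mrr(list_of_rankings):
    """Each ranking is a list of 0/1 relevance labels in ranked order; MRR averages
    the reciprocal rank of the first relevant result across queries."""
    rr = []
    for ranking in list_of_rankings:
        hit = next((i for i, r in enumerate(ranking, start=1) if r > 0), None)
        rr.append(1.0 / hit if hit else 0.0)
    return float(np.mean(rr))

print(ndcg_at_k([3, 2, 0, 1], k=3), mrr([[0, 1, 0], [1, 0, 0]]))
```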
iclr_2018_HkwBEMWCZ
Published as a conference paper at ICLR 2018 SKIP CONNECTIONS ELIMINATE SINGULARITIES Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.
This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function. The discussion is largely based on a sequence of experiments, some of which are interesting and insightful. The discussion here can be useful for other researchers. My main concern is that the result here is purely empirical, with no concrete theoretical justification. What the experiments reveal is an empirical correlation between the Eigval index and training accuracy, which can be caused by lots of reasons (and confounders), and does not necessarily establish a causal relation. Therefore, I found much of the discussion to be questionable. I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper. Do you have a sense of how accurate the estimation of the tail probabilities of the eigenvalues is? Because the whole paper is based on the approximation of the eigval indexes, it is critical to examine whether the estimation is accurate enough to draw the conclusions in the paper. All the conclusions are based on one or two datasets. Could you consider testing the result on additional datasets to verify whether the results are generalizable?
iclr_2018_r1pW0WZAW
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearlyadditive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.
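A rough sketch of a NARX-style cell with exponentially spaced delays follows, to make the "direct connections from the very distant past" concrete; the gating/pooling that distinguishes MIST RNNs from plain NARX RNNs is omitted, and the delay set and sizes are illustrative.

```python
import torch
import torch.nn as nn

class NarxRNNCell(nn.Module):
    """Sketch of a NARX-style recurrence: the new state depends on several past
    hidden states at exponentially spaced delays (1, 2, 4, ...), giving short
    gradient paths to the distant past."""
    def __init__(self, input_size, hidden_size, delays=(1, 2, 4, 8)):
        super().__init__()
        self.delays = delays
        self.in_proj = nn.Linear(input_size, hidden_size)
        self.rec_proj = nn.Linear(hidden_size * len(delays), hidden_size)

    def forward(self, x_t, history):
        """history: list of past hidden states, most recent last."""
        past = [history[-d] if len(history) >= d else torch.zeros_like(history[-1])
                for d in self.delays]
        return torch.tanh(self.in_proj(x_t) + self.rec_proj(torch.cat(past, dim=-1)))

cell = NarxRNNCell(3, 5)
h = [torch.zeros(1, 5)]
for t in range(10):
    h.append(cell(torch.randn(1, 3), h))
print(h[-1].shape)   # torch.Size([1, 5])
```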
The presented MIST architecture certainly has got its merits, but in my opinion is not very novel, given the fact that NARX RNNs have been described 20 years ago, and Clockwork RNNs (which, as the authors point out in section 2, have a similar structure) have also been in use for several years. Still, the presented results are good, with standard LSTMs being substantially outperformed in three out of five standard RNN/LSTM benchmark tasks. The analysis in section 3 is decent (see however the minor comments below), but does not offer revolutionary new insights - it's perhaps more like a corollary of previous work (Pascanu et al., 2013). Regarding the concrete results, I would have wished for a more detailed analysis of the more surprising results, in particular, for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it "difficult to learn long-term behavior that must be detected at high frequency" [section 2]? How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)? In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units? In summary, for me this paper is solid, and although the architecture is not that new, it is worth bringing it again into the focus of attention. Minor comments: - In several places, the formulas are rather strange and/or occasionally incorrect. In particular, * on the right-hand sind of the inline formula in section 3.1, the symbol v is missing completely, which cannot be right; * in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined; * the \theta_l in the beginning of section 3.3 (formula 13) is completely superfluous. - The position of the tables and figures is rather weird, making the paper less readable than necessary. The authors should consider moving floating parts around (one could also move figure three to the bottom of a suitable page, for example). - It is a matter of taste, but since all experimental results except the ones on the copy task are tabulated, one could think of adding a table with the results now contained in figure 3. Relation to prior work: the authors are aware of most relevant work. On p2 they write: "Many other approaches have also been proposed to capture long-term dependencies." There is one that seems close to what the authors do: J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992 It is related to clockwork RNNs, about which the authors write: "A recent architecture that is similar in spirit to our work is that of Clockwork RNNs (Koutnik et al., 2014), which split weights and hidden units into partitions, each with a distinct period. When it’s not a partition’s time to tick, its hidden units are passed through unchanged, thus in some ways mimicking the behavior of NARX RNNs. However Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. 
NARX RNNs suffer from neither of these drawbacks." The neural history compressor, however, adapts to the frequency of unexpected events, by ticking only when there is an unpredictable event, thus overcoming some of the issues above. Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks? General recommendation: Accept, provided the comments are taken into account.
iclr_2018_Syhr6pxCW
Published as a conference paper at ICLR 2018 PIXELNN: EXAMPLE-BASED IMAGE SYNTHESIS We present a simple nearest-neighbor (NN) approach that synthesizes highfrequency photorealistic images from an "incomplete" signal such as a lowresolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to a (overly-smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags.
This paper presents a pixel-matching based approach to synthesizing RGB images from input edge or normal maps. The approach is compared to Isola et al.'s conditional adversarial networks, and unlike the conditional GAN, is able to produce a diverse set of outputs. Overall, the paper describes a computer vision system based on synthesizing images, and not necessarily a new theoretical framework to compete with GANs. With the current focus of the paper being the proposed system, it is interesting to the computer vision community. However, if one views the paper in a different light, namely showing some "blind-spots" of current conditional GAN approaches like lack of diversity, then it can be of much more interest to the broader ICLR community. Pros: 1.) Overall the paper is well-written. 2.) Makes a strong case that random noise injection inside conditional GANs does not produce enough diversity. 3.) Shows a number of qualitative and quantitative results. Concerns about the paper: 1.) It is not clear how well the proposed approach works with CNN architectures other than PixelNet 2.) Since the paper used "the pre-trained PixelNet to extract surface normal and edge maps" for ground-truth generation, it is not clear whether the approach will work as well when the input is a ground-truth semantic segmentation map. 3.) Since the paper describes a computer-vision image synthesis system and not a new theoretical result, I believe reporting the actual run-time of the system will make the paper stronger. Can PixelNN run in real-time? How does the timing compare to Isola et al.'s Conditional GAN? Minor comments: 1.) The paper mentions making predictions from "incomplete" input several times, but in all experiments, the input is an edge map, normal map, or low-resolution image. When reading the manuscript the first time, I was expecting experiments on images that have regions that are visible and regions that are masked out. However, I am not sure if the confusion is solely mine, or shared with other readers. 2.) Equation 1 contains the norm operator twice, and the first norm has no subscript, while the second one has an l_2 subscript. I would expect the notation style to be consistent within a single equation (i.e., use ||w||_2^2, ||w||^2, or ||w||_{l_2}^2) 3.) Table 1 has two sub-tables: left and right. The sub-tables have the AP column in different places. 4.) "Dense pixel-level correspondences" are discussed but not evaluated.
iclr_2018_SJu63o10b
In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms. Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples. The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD). Driven by CPD, data points can get to a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering. Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.
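A very rough sketch of the alternating scheme (drift the points with a CPD-style displacement, then re-cluster) is given below; the drift update shown is a naive stand-in for the paper's actual subproblem solver, and all hyperparameters and names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cpd_uml_sketch(X, n_clusters, sigma=1.0, lam=1e-2, n_outer=5):
    """Alternate between (1) drifting the points X -> X + G W through a Gaussian
    kernel G (CPD-style displacement) and (2) re-clustering the drifted points.
    The W update is a simple move-toward-your-centroid ridge solve, not the authors' solver."""
    N = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))                      # Gaussian kernel used by CPD
    W = np.zeros_like(X)
    labels = None
    for _ in range(n_outer):
        Z = X + G @ W                                       # drifted points
        labels = KMeans(n_clusters, n_init=10).fit_predict(Z)
        centers = np.stack([Z[labels == k].mean(0) for k in range(n_clusters)])
        target = centers[labels] - X                        # desired total displacement
        W = np.linalg.solve(G + lam * np.eye(N), target)    # ridge-regularized drift update
    return X + G @ W, labels

Xd, labels = cpd_uml_sketch(np.random.rand(60, 2), n_clusters=3)
print(Xd.shape, labels.shape)
```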
This paper presents a scheme for unsupervised metric learning using coherent point drifting (CPD) -- the core idea is to learn a parametric model of CPD that shifts the input points such that the shifted points lead to better clustering in a K-Means setup. Following the work of Myronenko & Song, 2010, this paper uses a linear parametric model for the drift (in CPD) after mapping the input points to a kernel feature space using an RBF kernel. The CPD model is directly used within the KMeans objective -- the drift parameter matrix and the KMeans cluster assignment matrix are jointly learned using block-coordinate descent (BCD). The paper uses some interesting properties of the CPD model to derive an efficient optimization solver for the BCD subproblems. Experiments are provided on UCI datasets and demonstrate some promise. Pros: 1) The idea of using CPD for unsupervised metric learning is quite interesting. 2) The exploration into the convexity of the CPD parameter learning -- although straightforward -- is also perhaps interesting. 3) The experiments show some promise. Cons: 1) Lacking motivation/Intuition The main motivation for the approach, as far as I understand, is to learn cluster boundaries for non-linear data -- where K-Means fails. However, it is unclear to me why one would need to use K-Means for non-linear data; why not use kernelized kmeans? The proposed CPD model also is essentially learning a linear transformation of the kernelized feature space. So in contrast to kernelized kmeans, what is the advantage of the proposed framework? I see there is an improvement in performance compared to kernelized kmeans; however, intuitively I do not see where that improvement comes from. Perhaps providing some specific examples/scenarios or graphic illustrations will help appreciate the method. 2) Novelty/Significance I think the novelty of this paper is perhaps marginal. The main idea is to directly use CPD from a prior work in a KMeans setup. There are a few parameters to be estimated in the joint learning objective, for which a block-coordinate descent strategy is proposed. The derivations are perhaps straightforward. As noted above, it is not clear what the significance of this combination is or how it improves performance. As far as CPD goes, it seems to me that the performance depends heavily on the choice of the Gaussian RBF bandwidth parameter, and it is not clear to me how such a parameter can be selected in an unsupervised setting, when class labels are not available for cross-validation. The paper does not provide any intuitions on this front. 3) Technical details. There are a few important details that I do not quite follow in the paper. a) The CPD is originally designed for the point matching problem, and its parametric form (\Psi) is derived using a different Tikhonov-regularized regression model as described just above (1). The current paper directly uses this parametric form in a KMeans setup and solves the resultant problem jointly for the CPD parameter and the clustering assignment. However, it is not clear to me how the paper could use the optimal parametric form for Tikhonov regression as the optimum for the clustering problem. Ideally, I would think that when formulating the joint optimization for the clustering problem, the optimal functional v(x) should also be learned/derived for the clustering problem, or some proof should be provided showing the functionals are the same.
Without this, I am not convinced that the proposed formulation indeed learns the optimum drifts and the clusters jointly. b) The subproblem on Y (the assignment matrix) looks like a standard SVD objective. It is not clear why would it be necessary to resort to Ky Fan's theorem for its optimal solution. c) The paper talks about manifold embedding in the abstract and in Sec. 2.2. However, it appears to be a straightforward dimensionality reduction (PCA) of data. If not, what is the precise manifold that is described here? d) Eq. 9, the definition of Y_c is incorrect and unclear. p is defined as a vector of ones, earlier. e) Although the assignment matrix Y has orthogonal columns, it is a binary matrix. If it is approximated by an orthonormal frame, how do you reduce it to a binary matrix? Does taking the largest values in each column suffice -- it does not look like so. However, in the paper, Y is relaxed to an orthonormal frame, which is estimated using PCA, the data points are then projected onto this low-dimensional subspace, and then k-means applied to get the Y matrix. The provided math does not support any of these steps. Thus, the technical exposition is imprecise and the solutions appear rather heuristic. f) The kernelized variant of the proposed scheme, described in Sec. 2.4 is missing important details. How precisely is the kernelization done? How is CPD extended to that setup and what would be the Gaussian kernel G in that case, and what does \Psi signify? g) Figure 2, it seems that kernel kmeans and the proposed CPD-UML show similar cluster boundaries for low-kernel widths. Why are the high kernel widths beneficial? 4) Experiments There is some improvement of the proposed method -- however overall, the improvements are marginal. The discussion is missing any analysis of the results. Why it works at times, how well it improves on kernelized kmeans, and why? What is the advantage over other competitive schemes, etc. In summary, while there is a minor novelty in connecting two separate ideas (CPD and UML) into a joint UML setup, the paper lacks sufficient motivations for proposing this setup (in contrast to say kernelized kmeans), the technical details are unconvincing, and the experiments lack sufficient details or analysis. Thus, I do not think this paper is ready to be accepted in its current form.
iclr_2018_H1DkN7ZCZ
Workshop track - ICLR 2018 DEEP LEARNING MUTATION PREDICTION ENABLES EARLY STAGE LUNG CANCER DETECTION IN LIQUID BIOPSY Somatic cancer mutation detection at ultra-low variant allele frequencies (VAFs) is an unmet challenge that is intractable with current state-of-the-art mutation calling methods. Specifically, the limit of VAF detection is closely related to the depth of coverage due to the requirement of multiple supporting reads in extant methods, precluding the detection of mutations at VAFs that are orders of magnitude lower than the depth of coverage. Nevertheless, the ability to detect cancer-associated mutations at ultra-low VAFs is a fundamental requirement for low-tumor-burden cancer diagnostics applications such as early detection, monitoring, and therapy nomination using liquid biopsy methods (cell-free DNA). Here we defined a spatial representation of sequencing information adapted for a convolutional architecture that enables variant detection in a manner independent of the depth of sequencing. This method enables the detection of cancer mutations even at VAFs as low as 10^-4, more than two orders of magnitude below the current state-of-the-art. We validated our method on both simulated plasma and clinical cfDNA plasma samples from cancer patients and non-cancer controls. This method introduces a new domain within bioinformatics and personalized medicine: somatic whole-genome mutation calling for liquid biopsy.
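To make the "spatial representation of sequencing information adapted for convolutional architecture" more concrete, here is a small, hypothetical numpy encoding of a single read as a multi-channel, image-like tensor (one-hot reference, one-hot read base, scaled base quality, scaled mapping quality). The channel layout, the window width, and the quality scaling constants are illustrative assumptions, not the authors' exact format.

import numpy as np

BASES = "ACGT"

def encode_read(ref_seq, read_seq, base_quals, map_qual, width=25):
    # Channels: 4 one-hot planes for the reference, 4 for the read,
    # 1 for per-base quality, 1 for (constant) mapping quality.
    x = np.zeros((10, width), dtype=np.float32)
    for i in range(min(width, len(read_seq))):
        if ref_seq[i] in BASES:
            x[BASES.index(ref_seq[i]), i] = 1.0       # reference channels
        if read_seq[i] in BASES:
            x[4 + BASES.index(read_seq[i]), i] = 1.0  # read channels
        x[8, i] = base_quals[i] / 40.0                # scaled base quality
    x[9, :] = map_qual / 60.0                         # scaled mapping quality
    return x  # shape (channels, width), ready for a small CNN classifier

Because each read is classified on its own, a model trained on such tensors can in principle call a variant supported by a single read, which is the depth-independence the abstract emphasizes.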
Summary: In this paper the authors offer a new algorithm to detect cancer mutations from sequencing cell free DNA (cfDNA). The idea is that in the sample being sequenced there would also be circulating tumor DNA (ctDNA) so such mutations could be captured in the sequencing reads. The issue is that the ctDNA are expected to be found with low abundance in such samples, and therefore are likely to be hit by few or even single reads. This makes the task of differentiating between sequencing errors and true variants due to ctDNA hard. The authors suggest to overcome this problem by training an algorithm that will identify the sequence context that characterize sequencing errors from true mutations. To this, they add channels based on low base quality, low mapping quality. The algorithm for learning the context of sequencing reads compared to true mutations is based on a multi layered CNN, with 2/3bp long filters to capture di and trinucleotide frequencies, and a fully connected layer to a softmax function at the top. The data is based on mutations in 4 patients with lung cancer for which they have a sample both directly from the tumor and from a healthy region. One more sample is used for testing and an additional cancer control which is not lung cancer is also used to evaluate performance. Pros: The paper tackles what seems to be both an important and challenging problem. We also liked the thoughtful construction of the network and way the reference, the read, the CIGAR and the base quality were all combined as multi channels to make the network learn the discriminative features of from the context. Using matched samples of tumor and normal from the patients is also a nice idea to mimic cfDNA data. Cons: While we liked both the challenge posed and the idea to solve it we found several major issues with the work. First, the writing is far from clear. There are typos and errors all over at an unacceptable level. Many terms are not defined or defined after being introduced (e.g. CIGAR, MF, BQMQ). A more reasonable CS style of organization is to first introduce the methods/model and then the results, but somehow the authors flipped it and started with results first, lacking many definitions and experimental setup to make sense of those. Yet Sec. 2 “Results” p. 3 is not really results but part of the methods. The “pipeline” is never well defined, only implicitly in p.7 top, and then it is hard to relate the various figures/tables to bottom line results (having the labels wrong does not help that). The filters by themselves seem trivial and as such do not offer much novelty. Moreover, the authors filter the “normal” samples using those (p.7 top), which makes the entire exercise a possible circular argument. If the entire point is to classify mutations versus errors it would make sense to combine their read based calls from multiple reads per mutations (if more than a single read for that mutation is available) - but the authors do not discuss/try that. The entire dataset is based on 4 patients. It is not clear what is the source of the other cancer control case. The authors claim the reduced performance show they are learning lung cancer-specific context. What evidence do they have for that? Can they show a context they learned and make sense of it? How does this relate to the original papers they cite to motivate this direction (Alexandrov 2013)? 
Since we know nothing about all these samples, it may very well be that they are learning technical artifacts related to their specific batch of 4 patients. As such, this may have very little relevance for the actual problem of cfDNA. Finally, performance itself did not seem to improve significantly compared to previous methods/simple filters, and the novelty in terms of ML and insights about learning representations seemed limited. Despite the above caveats, we reiterate that the paper offers a nice construction for an important problem. We believe the method and paper could potentially be improved and make a good fit for a future bioinformatics-focused meeting such as ISMB/RECOMB.
iclr_2018_S16FPMgRZ
Convolutional neural networks typically consist of many convolutional layers followed by several fully-connected layers. While convolutional layers map between high-order activation tensors, the fully-connected layers operate on flattened activation vectors. Despite its success, this approach has notable drawbacks. Flattening discards the multi-dimensional structure of the activations, and the fully-connected layers require a large number of parameters. We present two new techniques to address these problems. First, we introduce tensor contraction layers which can replace the ordinary fully-connected layers in a neural network. Second, we introduce tensor regression layers, which express the output of a neural network as a low-rank multi-linear mapping from a high-order activation tensor to the softmax layer. Both the contraction and regression weights are learned end-to-end by backpropagation. By imposing low rank on both, we use significantly fewer parameters. Experiments on the ImageNet dataset show that applied to the popular VGG and ResNet architectures, our methods significantly reduce the number of parameters in the fully-connected layers (about 65% space savings) while negligibly impacting accuracy.
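As a concrete illustration, below is a minimal PyTorch sketch of a Tucker-style tensor regression layer that maps an activation tensor of shape (batch, C, H, W) directly to class scores without flattening. The particular ranks, the initialization scale, and the omission of a separate tensor contraction layer are simplifying assumptions for the sketch rather than the paper's exact implementation.

import torch
import torch.nn as nn

class TensorRegressionLayer(nn.Module):
    # Maps an activation tensor (batch, C, H, W) to class scores through a
    # low-rank weight tensor in Tucker form (core contracted with one factor
    # matrix per mode), avoiding the flatten + fully-connected head.
    def __init__(self, C, H, W, n_classes, ranks=(16, 4, 4, 16)):
        super().__init__()
        r1, r2, r3, r4 = ranks
        self.Uc = nn.Parameter(torch.randn(C, r1) * 0.01)
        self.Uh = nn.Parameter(torch.randn(H, r2) * 0.01)
        self.Uw = nn.Parameter(torch.randn(W, r3) * 0.01)
        self.Uo = nn.Parameter(torch.randn(n_classes, r4) * 0.01)
        self.core = nn.Parameter(torch.randn(r1, r2, r3, r4) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_classes))

    def forward(self, x):                       # x: (batch, C, H, W)
        # Contract the activation tensor with the factor matrices, then the core.
        t = torch.einsum('bchw,cp,hq,wr->bpqr', x, self.Uc, self.Uh, self.Uw)
        out = torch.einsum('bpqr,pqrs,os->bo', t, self.core, self.Uo)
        return out + self.bias

The parameter count is the size of the core plus C*r1 + H*r2 + W*r3 + n_classes*r4, compared with C*H*W*n_classes for flattening followed by a fully-connected layer, which is where the reported space savings would come from.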
This paper incorporates tensor decomposition and tensor regression into CNN by replacing its flattening operations and fully-connected layers with a new tensor regression layer. Pros: The low-rank representation of tensors is able to reduce the model complexity in the original CNN without sacrificing much prediction accuracy. This is promising as it enables the implementation of complex deep learning algorithms on mobile devices due to its huge space saving performance. Overall, this paper is easy to follow. Cons: Q1: Can the authors discuss the computational time of the proposed tensor regression layers and compare it to that of the baseline CNN? The tensor regression layer is computationally more expensive than the flattening operations in original CNN. Usually, it also involves expensive model selection procedure to choose the tuning parameters (N+1 ranks and a L2 norm sparsity parameter). In the experiments, the authors simply tried a few ranks without serious tuning. Q2: The authors reported the space saving in Table 1 but not in Table 2. Since spacing saving is a major contribution of the proposed method, can authors add the space saving percentage in Table 2? Q3: There are a few typos in the current paper. I would suggest the authors to take a careful proofreading. For example, (1) In the “Related work“ paragraph on page 2, “Lebedev et al. (2014) proposes…” should be “Lebedev et al. (2014) propose…”. Many other references have the same issue. (2) In Figure 1, the letter $X$ should be $\tilde{\cal X}$. (3) In expression (5) on page 3, the core tensor is denoted by $\tilde{\cal G}$. Is this the same as $\tilde{\cal X}^{‘}$ in Figure 1? (4) In expression (5) on page 3, the core tensor $\tilde{\cal G}$ is of dimension $(D_0, R_1, \ldots, R_N)$. However, in expression (8) on page 5, $\tilde{\cal G}$ is of dimension $(R_0, R_1, \ldots, R_N, R_{N+1})$. (5) Use \cite{} and \citep{} correctly. For example, in the “Related work“ paragraph on page 2, “Several prior papers address the power of tensor regression to preserve natural multi-modal structure and learn compact predictive models Guo et al. (2012); Rabusseau & Kadri (2016); Zhou et al. (2013); Yu & Liu (2016).” should be “Several prior papers address the power of tensor regression to preserve natural multi-modal structure and learn compact predictive models (Guo et al., 2012; Rabusseau & Kadri, 2016; Zhou et al., 2013; Yu & Liu, 2016).”
iclr_2018_rJaE2alRW
We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. The proposed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.
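A compact PyTorch sketch of the significance/offset idea described above: data-dependent weights, normalized with a softmax over the lookback window, gate a sum of offset-adjusted autoregressive terms. The layer sizes, the use of 1-D convolutions for both sub-networks, and the convention that the first input channel holds the predicted series' own past values are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SignificanceOffset(nn.Module):
    # Prediction = sum over lags of softmax(significance) * (x + offset),
    # i.e. an AR-style weighted average with data-dependent weights.
    def __init__(self, n_series, window, hidden=32):
        super().__init__()
        self.offset = nn.Conv1d(n_series, 1, kernel_size=1)      # per-time-step offset
        self.significance = nn.Sequential(
            nn.Conv1d(n_series, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):                       # x: (batch, n_series, window)
        adjusted = x[:, :1, :] + self.offset(x)            # adjusted regressors
        weights = F.softmax(self.significance(x), dim=-1)  # normalized over lags
        return (weights * adjusted).sum(dim=-1)            # (batch, 1) prediction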
The author proposed:
1. A data augmentation technique for asynchronous time series data.
2. A convolutional 'Significance' weighting neural network that assigns normalised weights to the outputs of a fully-connected autoregressive 'Offset' neural network, such that the output is a weighted average of the 'Offset' network's outputs.
3. An 'auxiliary' loss function.
The experiments showed that:
1. The proposed method beat VAR/CNN/ResNet/LSTM on 2 synthetic asynchronous data sets, 1 real electricity meter data set and 1 real financial bid/ask data set. It's not immediately clear how hyper-parameters for the benchmark models were chosen.
2. The author observed from the experiments that the depth of the offset network has negligible effect, and concluded that the 'Significance' network has crucial impact. (I don't see how this conclusion can be made.)
3. The proposed auxiliary loss is not useful.
4. The proposed architecture is more robust to noise in the synthetic data set compared to the benchmarks, and together with LSTM, is least prone to overfitting.
Pros
- Proposed a useful way of augmenting asynchronous multivariate time series for fitting autoregressive models
- The convolutional Significance/weighting network appears to reduce test errors (not entirely clear)
Cons
- The novelties aren't very well-justified. The 'Significance' network was described as critical to the performance, but there is no experimental result to show the sensitivity of the model's performance with respect to the architecture of the 'Significance' network. At the very least, I'd like to see what happens if the weighting were forced to be uniform while keeping the 'Offset' network and loss unchanged.
- It's entirely unclear how the train and test data were split. This may be quite important in the case of the financial data set.
- It's also unclear if model training was done on a rolling basis, which is common for time series forecasting.
- The auxiliary loss function does not appear to be very helpful, but was described as a key component in the paper.
Quality: The quality of the paper was okay. More details of the experiments should be included in the main text to help interpret the significance of the experimental results. The experiments also did not really probe the significance of the 'Significance' network even though it's claimed to be important.
Clarity: Above average.
Originality: Mediocre. Nothing really shines. Weighted-average-type architectures have been proposed many times in neural networks (e.g., attention mechanisms).
Significance: Low. It's unclear how useful the architecture really is.
iclr_2018_BJluxbWC-
This paper concerns open-world classification, where the classifier not only needs to classify test examples into seen classes that have appeared in training but also reject examples from unseen or novel classes that have not appeared in training. Specifically, this paper focuses on discovering the hidden unseen classes of the rejected examples. Clearly, without prior knowledge this is difficult. However, we do have the data from the seen training classes, which can tell us what kind of similarity/difference is expected for examples from the same class or from different classes. It is reasonable to assume that this knowledge can be transferred to the rejected examples and used to discover the hidden unseen classes in them. This paper aims to solve this problem. It first proposes a joint open classification model with a sub-model for classifying whether a pair of examples belongs to the same or different classes. This sub-model can serve as a distance function for clustering to discover the hidden classes of the rejected examples. Experimental results show that the proposed model is highly promising.
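Since the key mechanism is using the pairwise same/different-class sub-model as a distance function for clustering the rejected examples, here is a small sketch of that step using scipy's hierarchical clustering. The function pcn_same_prob is a placeholder for the trained pairwise network, and the distance threshold used to cut the dendrogram is an assumption.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_rejected(rejected_examples, pcn_same_prob, threshold=0.5):
    # pcn_same_prob(a, b) is assumed to return the learned probability that two
    # examples share a class; 1 - prob serves as a pairwise distance for
    # hierarchical clustering of the rejected (unseen-class) examples.
    n = len(rejected_examples)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            p = pcn_same_prob(rejected_examples[i], rejected_examples[j])
            D[i, j] = D[j, i] = 1.0 - p
    Z = linkage(squareform(D), method='average')
    # Cut the dendrogram where merged groups stop looking like "same class".
    labels = fcluster(Z, t=threshold, criterion='distance')
    return labels  # discovered hidden-class assignments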
The main goal of this paper is to cluster images from classes unseen during training. This is an interesting extension of the open-world paradigm, where at test time, the classifier has to identify images belonging to the C classes seen during training, but also identify (reject) images which were previously unseen. These rejected images could be clustered to identify the number of unseen classes, either to reveal the underlying structure of the unseen classes or to reduce annotation costs. In order to do so, an extensive framework is proposed, consisting of 3 ConvNet architectures, followed by a hierarchical clustering approach. The 3 ConvNets all have a different goal:
1. an Open Classification Network (per-class sigmoid, trained 1-vs-rest, with thresholds for rejection)
2. a Pairwise Classification Network (binary sigmoid, trained on pairs of images from the same/different classes)
3. an Auto-Encoder network
These networks are jointly trained, and the joint loss is simply the addition of a cross-entropy loss (from OCN), the binary cross-entropy loss (from PCN) and a pixel-wise loss (from AE).
Remarks:
- It is unclear if the ConvNet weights of the first layers are shared.
- It is unclear how joint training might help, given that the objectives do not influence each other.
- Eq 1: * the label "y_i" has two different semantics (in L_ocn it is the class label, while in L_pcn it is the label of an image pair being from the same class or not) * s_j is undefined * the relation between p(y_i = 1) (in PCN) and g(x_p,x_q) in Eq 2 could be made more explicit; PCN depends on two images, but according to Eq 1 it seems to be just a sum over single images.
- It is unclear why the Auto-Encoder network is added, and what its function is.
- It is unclear whether OCN requires/uses unseen-class examples during training.
- Last paragraph of 3.1, "The 1-vs-rest ... rejected": I don't see why you need 1-vs-rest classifiers for this; a multi-class (softmax) output can also be thresholded to reject a test image from the known classes and to assign it to the unknown class.
Experimental evaluation
The experimental evaluation uses 2 datasets, MNIST and EMNIST, both of which are very specific to character recognition. It is a pity that more general image classification has not also been considered (CIFAR100, ImageNet, Places365, etc); that would provide insight into the more general behaviour of the proposed ideas. My major concern is that the clustering task is not extensively explored. Just a single setting (with a single random sampling of seen/unseen classes) has been evaluated. This is -in part- due to the nature of the chosen datasets; in a 10-class dataset it is difficult to show the influence of the number of unseen classes. So, I'd really urge the authors to extend this evaluation. Will the method discover more classes when 100 unknown classes are used? What kind of clusters are discovered? Are the types of classes in the seen/unseen sets important? I'd expect at least multiple runs of the current experiments on (E)MNIST. Further, I miss some baselines and an ablation study. Questions which I'd like to see answered: how good is the OCN representation when used for clustering compared to the PCN representation? What is the benefit of joint training? How important is the AE in the loss?
Remaining remarks
- Just a very simple / non-standard ConvNet architecture is trained. Will a ResNet(32) show similar performance?
- In Eq 4, |C_i || y_j| seems a strange notation for union.
Conclusion
This paper brings in an interesting idea: is it possible to cluster the unseen classes in an open-world classification scenario? A solution using a pairwise ConvNet followed by hierarchical clustering is proposed. This is a plausible solution, yet overall I miss an exploration of the solution, both in terms of general visual classification (only MNIST is used, while it would be nice to see results on CIFAR and/or ImageNet as in Bendale & Boult 2016), and in terms of exploring different scenarios (different numbers of unseen classes, different samplings) and ablating the method (independent training, using OCN for hierarchical clustering, influence of the Auto-Encoder). Therefore, I rate this paper as a (weak) reject: it is just not (yet) good enough for acceptance.
iclr_2018_SyunbfbAb
We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images. The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts. We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection. Resolving such questions often requires reference to multiple plot elements and synthesis of information distributed spatially throughout a figure. To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives. In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements. We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as a strong baseline. Preliminary results indicate that the task poses a significant machine learning challenge. We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.
Summary: The paper introduces a new visual reasoning dataset called Figure-QA which consists of 140K figure images and 1.55M QA pairs. The images are generated synthetically by plotting perturbed sampled data using a visualization tool. The questions are also generated synthetically using 15 templates. Performance of baseline models and humans show that it is a challenging task and more advanced models are required to solve this task. Strengths: — FigureQA can help in developing models that can extract useful information from visual representations of data. — Since performance on CLEVR dataset is already close to 100%, more challenging visual reasoning datasets would encourage the community to develop more advanced reasoning models. One of such datasets can be FigureQA. — The paper is well written and easy to follow. Weaknesses: — Since the dataset is created synthetically, it is not clear if it is actually visual reasoning which is needed to solve this task, or the models can exploit biases (not necessarily language biases) to perform well on this dataset. In short, how do we know if the models trained on this dataset are actually learning something useful? One way to ensure this would be to show that models trained on this dataset can perform well on some other task. The first thing to try to show the usefulness of FigureQA is to show that the models trained on FigureQA dataset perform well on a real (figure, QA) dataset. — The only advantages mentioned in the paper of using a synthetic dataset for this task are having greater control over task’s complexity and enabling auxiliary supervision signals, but none of them are shown in this paper, so it’s not clear if they are needed or useful. — The paper should discuss what type of abilities are required in the models to perform well on this task, and how these abilities are currently not studied in the research community. Or in short, what new challenges are being introduced by FigureQA and how should researchers go about solving them on a high level? — With what goal were these 15 types of questions chosen? Are these the most useful questions analysts want to extract out of plots? I am especially concerned about finding the roughest/smoothest and low/high median. Even humans are relatively bad at these tasks. Why do we expect models to do well on them? — Why only binary questions? It is probably more difficult for analysts to ask a binary question than to ask non-binary ones such as “What is the highest in this plot?”
— Why these 5 types of plots? Can the authors justify that these 5 types of plots are the most frequent ones dealt by analysts? — Are the model accuracies in Table 3 on the same subset as humans or on the complete test set? Can the authors please report both separately? Overall: The proposed dataset seems reasonable but neither the dataset seems properly motivated (something where analysts actually struggle and models can help) nor it is clear if it will actually be useful for the research community (models performing well on this dataset will need to focus on specific abilities which have not been studied in the research community).
iclr_2018_ByuP8yZRb
Adversarial feature learning (AFL) is one of the promising ways to explicitly constrain neural networks to learn desired representations; for example, AFL could help to learn anonymized representations so as to avoid privacy issues. AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network, and therefore the success of AFL heavily relies on the choice of the adversary. This paper proposes a novel design of the adversary, multiple adversaries over random subspaces (MARS), that instantiates the concept of vulnerability. The proposed method is motivated by the assumption that deceiving an adversary fails to give meaningful information if the adversary is easily fooled, and adversaries that rely on a single classifier suffer from this issue. In contrast, the proposed method is designed to be less vulnerable by utilizing an ensemble of independent classifiers, where each classifier tries to predict sensitive variables from a different subset of the representation. Empirical validations on three user-anonymization tasks show that our proposed method achieves state-of-the-art performance on all three datasets without significantly harming the utility of the data. This is significant because it gives new implications about designing the adversary, which is important for improving the performance of AFL.
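To illustrate the "multiple adversaries over random subspaces" design, here is a short PyTorch sketch in which K linear classifiers each predict the sensitive label from a different random subset of the representation's dimensions, and the encoder is trained to deceive the whole ensemble. The number of adversaries, the subspace fraction, the purely linear heads, and the trade-off weight lam are illustrative assumptions.

import torch
import torch.nn as nn

class MARSAdversary(nn.Module):
    # K classifiers, each predicting the sensitive label S from a different
    # random subset of the representation's dimensions.
    def __init__(self, dim, n_sensitive, K=3, frac=0.5):
        super().__init__()
        k = int(dim * frac)
        self.idx = [torch.randperm(dim)[:k].tolist() for _ in range(K)]
        self.heads = nn.ModuleList([nn.Linear(k, n_sensitive) for _ in range(K)])

    def forward(self, z):
        # Returns one set of logits over the sensitive variable per adversary.
        return [head(z[:, idx]) for head, idx in zip(self.heads, self.idx)]

def afl_losses(task_logits, y, adv_logits_list, s, lam=1.0):
    ce = nn.CrossEntropyLoss()
    adv_loss = sum(ce(logits, s) for logits in adv_logits_list)  # adversaries minimize this
    # The encoder/task network minimizes the task loss while *increasing* the
    # ensemble's loss, i.e. it tries to deceive all adversaries at once.
    encoder_loss = ce(task_logits, y) - lam * adv_loss
    return encoder_loss, adv_loss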
The below review addresses the first revision of the paper. The revised version does address my concerns. The fact that the paper does not come with substantial theoretical contributions/justification still stands out. --- The authors present a variant of the adversarial feature learning (AFL) approach by Edwards & Storkey. AFL aims to find a data representation that allows to construct a predictive model for target variable Y, and at the same time prevents to build a predictor for sensitive variable S. The key idea is to solve a minimax problem where the log-likelihood of a model predicting Y is maximized, and the log-likelihood of an adversarial model predicting S is minimized. The authors suggest the use of multiple adversarial models, which can be interpreted as using an ensemble model instead of a single model. The way the log-likelihoods of the multiple adversarial models are aggregated does not yield a probability distribution as stated in Eq. 2. While there is no requirement to have a distribution here - a simple loss term is sufficient - the scale of this term differs compared to calibrated log-likelihoods coming from a single adversary. Hence, lambda in Eq. 3 may need to be chosen differently depending on the adversarial model. Without tuning lambda for each method, the empirical experiments seem unfair. This may also explain why, for example, the baseline method with one adversary effectively fails for Opp-L. A better comparison would be to plot the performance of the predictor of S against the performance of Y for varying lambdas. The area under this curve allows much better to compare the various methods. There are little theoretical contributions. Basically, instead of a single adversarial model - e.g., a single-layer NN or a multi-layer NN - the authors propose to train multiple adversarial models on different views of the data. An alternative interpretation is to use an ensemble learner where each learner is trained on a different (overlapping) feature set. Though, there is no theoretical justification why ensemble learning is expected to better trade-off model capacity and robustness against an adversary. Tuning the architecture of the single multi-layer NN adversary might be as good? In short, in the current experiments, the trade-off of the predictive performance and the effectiveness of obtaining anonymized representations effectively differs between the compared methods. This renders the comparison unfair. Given that there is also no theoretical argument why an ensemble approach is expected to perform better, I recommend to reject the paper.
iclr_2018_SJx9GQb0-
Published as a conference paper at ICLR 2018 IMPROVING THE IMPROVED TRAINING OF WASSERSTEIN GANS: A CONSISTENCY TERM AND ITS DUAL EFFECT Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train. This issue is formally analyzed by Arjovsky & Bottou (2017), who also propose an alternative direction to avoid the caveats in the minmax two-player training of GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning methods. As a result, it gives rise to not only better photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results. In particular, our approach gives rise to the inception score of more than 5.0 with only 1,000 CIFAR-10 images and is the first that exceeds the accuracy of 90% on the CIFAR-10 dataset using only 4,000 labeled images, to the best of our knowledge.
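As a minimal sketch of how a consistency term of this kind can be implemented, the snippet below evaluates a discriminator that contains dropout twice on the same real batch and penalizes the part of the discrepancy between the two outputs that exceeds a margin M'. Using only the final critic output (the paper also involves an intermediate layer) and the default margin value are simplifications.

import torch

def consistency_term(D, x_real, m_prime=0.0):
    # D is assumed to be stochastic at training time (e.g. it contains dropout),
    # so two forward passes on the same real batch give two perturbed
    # evaluations D(x') and D(x'').
    d1 = D(x_real).view(x_real.size(0), -1)
    d2 = D(x_real).view(x_real.size(0), -1)
    dist = (d1 - d2).pow(2).sum(dim=1)
    # Penalize only the part of the discrepancy exceeding the margin M'.
    return torch.clamp(dist - m_prime, min=0.0).mean()

# Critic objective (sketch): wasserstein_loss + lambda1 * gradient_penalty(...)
#                            + lambda2 * consistency_term(D, x_real)

The full critic loss would then combine the usual Wasserstein term, the gradient penalty, and this consistency term with two weighting coefficients.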
Summary: The paper proposes a new regularizer for WGANs, to be combined with the traditional gradient penalty. The theoretical motivation is bleak, and the analysis contains some important mistakes. The results are very good; as noticed in the comments, the fact that the method is also less susceptible to overfitting is an important result as well, though this might be purely due to dropout. One of the main problems is that the largest dataset used is CIFAR, which is small. Experiments on something like bedrooms or ImageNet would make the paper much stronger. If the authors fix the theoretical analysis and add evidence on a larger dataset I will raise the score.
Detailed comments:
- The motivation of 1.2 and the sentence "Arguably, it is fairly safe to limit our scope to the manifold that supports the real data distribution P_r and its surrounding regions" are incredibly wrong. First of all, it should be noted that the duality uses 1-Lipschitzness in the entire space between Pr and Pg, not in Pr alone. If the manifolds are not extremely close (such as in the beginning of training), then the discriminator can be almost exactly 1 on the real data, and 0 on the fake. Thus the discriminator would be almost exactly constant (0-Lip) near the real manifold, but will fail to be 1-Lip at the decision boundary; this is where interpolations fix the issue. See Figure 2 of the WGAN paper for example: in that simple example an almost perfect discriminator would have almost 0 penalty.
- In the 'Potential caveats' section, the implication that 1-Lip may not be enforced on non-examined samples is checkable by an easy experiment, which is to look for samples that have gradients of the critic wrt the input with norm > 1. I performed the experiment in figure 8 and saw that by taking a slightly higher lambda, one reaches gradients that are as close to 1 as with CT-GAN. Since CT-GAN uses an extra regularizer, I think the authors need some stronger evidence to support the claim that CT-GAN better battles this 'potential caveat'.
- It's important to realize that the CT regularizer with M' = 1 (1-Lip constraint) will only be positive for an almost 1-Lip function if x and x' are sampled such that x - x' has a direction very similar to the gradient at x. This is very hard in high-dimensional spaces, and when I implemented a CT regularizer the ratio in eq (4) was indeed quite a bit less than the norm of the gradient. It would be useful to plot the value of the CT regularizer (the eq (4) version) as the training iterations progress. Thus the CT regularizer works as an overall Lipschitz penalty, as opposed to penalizing having more than 1 for the Lipschitz constant. This difference is non-trivial and should be discussed.
- Line 11 of the algorithm is missing L^(i) inside the sum.
- One shouldn't use MNIST for anything other than deliberately testing an overfitting problem. Figure 4 is thus relevant, but the semi-supervised results on MNIST or the sample quality experiments give hardly any evidence to support the method.
- The overfitting result is very important, but one should disambiguate this from being due to dropout. Comparing with WGAN-GP + dropout is thus important in this experiment.
- The authors should provide experiments on at least one larger dataset like bedrooms or ImageNet (not faces, which is known to be very easy). This would strengthen the paper quite a bit.
iclr_2018_H15RufWAW
We propose GraphGAN, the first implicit generative model for graphs that enables mimicking real-world networks. We pose the problem of graph generation as learning the distribution of biased random walks over a single input graph. Our model is based on a stochastic neural network that generates discrete output samples, and is trained using the Wasserstein GAN objective. GraphGAN enables us to generate sibling graphs, which have similar properties yet are not exact replicas of the original graph. Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph's properties. We discover that sampling from certain regions of the latent space leads to varying properties of the output graphs, with smooth transitions between them. Strong generalization properties of GraphGAN are highlighted by its competitive performance in link prediction as well as promising results on node classification, even though it is not specifically trained for these tasks.
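The abstract frames generation as learning a distribution over random walks, which leaves an assembly step from generated walks back to a graph. Below is a plausible numpy sketch of that step, scoring node pairs by how often they appear consecutively in the sampled walks and keeping the top-scoring pairs up to a target edge count; the exact scoring and binarization rule used in the paper may differ, so treat this as an assumption for illustration.

import numpy as np

def assemble_graph(walks, n_nodes, n_edges):
    # Score each node pair by how often it appears as a consecutive pair in the
    # generated walks, then keep the n_edges highest-scoring pairs (undirected).
    scores = np.zeros((n_nodes, n_nodes))
    for walk in walks:
        for u, v in zip(walk[:-1], walk[1:]):
            scores[u, v] += 1
            scores[v, u] += 1
    A = np.zeros_like(scores)
    iu = np.triu_indices(n_nodes, k=1)
    order = np.argsort(scores[iu])[::-1][:n_edges]   # top-scoring candidate edges
    A[iu[0][order], iu[1][order]] = 1
    return np.maximum(A, A.T)                        # adjacency of the sibling graph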
This paper proposes a WGAN formulation for generating graphs based on random walks. The proposed generator model combines node embeddings, with an LSTM architecture for modeling the sequence of nodes visited in a random walk; the discriminator distinguishes real from fake walks. The model is learned from a single large input graph (for three real-world networks) and evaluated against one baseline generative graph model: degree-corrected stochastic block models. The primary claims of the paper are as follows: i) The proposed approach is a generative model of graphs, specifically producing "sibling" graphs ii) The learned latent representation provides an interpretation of generated graph properties iii) The model generalizes well in terms of link and node classification The proposed method is novel and the incorporated ideas are quite interesting (e.g., discriminating real from fake random walks, generating random walks from node embeddings and LSTMs). However, from a graph generation perspective, the problem formulation and evaluation do not sufficiently demonstrate the utility of proposed method. First, wrt claim (i) the problem of generating "sibling" graphs is ill-posed. Statistical graph models are typically designed to generate a probability distribution over all graphs with N nodes and, as such, are evaluated based on how well they model that distribution. The notion of a "sibling" graph used in this paper is not clearly defined, but it seems to only be useful if the sibling graphs are likely under the distribution. Unfortunately, the likelihood of the sampled graphs is not explicitly evaluated. On the other hand, since many of the edges are shared the "siblings" may be nearly isomorphic to the input graph, which is not useful from a graph modeling perspective. For claim (i), the comparison to related work is far from sufficient to demonstrate its utility as a graph generation model. There are many graph models that are superior to DC-SBM, including KPGMs, BETR, ERGMs, hierarchical random graph models and latent space models. Moreover, a very simple baseline to assess the LSTM component of the model, would be to produce a graph by sampling links repeatedly from the latent space of node embeddings. Next, the evaluation wrt to claim (ii) is novel and may help developers understand the model characteristics. However, since the properties are measured based on a set of random walks it is still difficult to interpret the impact on the generated graphs (since an arbitrary node in the final graph will have some structure determined from each of the regions). Do the various regions generate different parts of the final graph structure (i.e., focusing on only a subset of the nodes)? Lastly, the authors evaluate the learned model on link and node prediction tasks and state that the model's so-so performance supports the claim that the model can generalize. This is the weakest claim of the paper. The learned node embeddings appear to do significantly worse than node2vec, and the full model is worse than DC-SBM. Given that the proposed model is transductive (when there is significant edge overlap) it should do far better than DC-SBM which is inductive. Overall, while the paper includes a wide range of experimental evaluation, they are aimed too broadly (and the results are too weak) to support any specific claim of the work. 
If the goal is to generate transductively (with many similar edges), then it would be better to compare more extensively to alternative node embedding and matrix factorization approaches, and assess the utility of the various modeling choices (e.g., LSTM, in/out embedding). If the goal is to generate inductively, over the full distribution of graphs, then it would be better to (i) assess whether the sampled graphs are isomorphic, and (ii) compare more extensively to alternative graph models (many of which have been published since 2010).
iclr_2018_ryTp3f-0-
Published as a conference paper at ICLR 2018 REINFORCEMENT LEARNING ON WEB INTERFACES USING WORKFLOW-GUIDED EXPLORATION Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates. This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a single mistake can ruin the entire sequence of actions. A common remedy is to "warmstart" the agent by pre-training it to mimic expert demonstrations, but this is prone to overfitting. Instead, we propose to constrain exploration using demonstrations. From each demonstration, we induce high-level "workflows" which constrain the allowable actions at each time step to be similar to those in the demonstration (e.g., " Step 1: click on a textbox; Step 2: enter some text"). Our exploration policy then learns to identify successful workflows and samples actions that satisfy these workflows. Workflows prune out bad exploration directions and accelerate the agent's ability to discover rewards. We use our approach to train a novel neural policy designed to handle the semi-structured nature of websites, and evaluate on a suite of web tasks, including the recent World of Bits benchmark. We achieve new state-of-the-art results, and show that workflow-guided exploration improves sample efficiency over behavioral cloning by more than 100x.
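To make the notion of a workflow concrete, the sketch below induces high-level steps from a single demonstration and then samples exploration episodes whose actions are constrained to match those steps. The environment interface (env.valid_actions, env.step) and the action attributes (kind, element_tag) are hypothetical placeholders, not the benchmark's actual API.

import random

def induce_workflow(demonstration):
    # Abstract each demonstrated action into a high-level step, e.g.
    # ("click", "textbox") or ("type", "input"); the attribute names are
    # placeholders for whatever the DOM representation exposes.
    return [(a.kind, a.element_tag) for a in demonstration]

def sample_episode(env, workflow):
    # Explore by sampling, at each step, only actions consistent with the
    # corresponding workflow step, instead of the full action space.
    state = env.reset()
    total_reward = 0.0
    for kind, tag in workflow:
        candidates = [a for a in env.valid_actions(state)
                      if a.kind == kind and a.element_tag == tag]
        if not candidates:
            break
        action = random.choice(candidates)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

Episodes that happen to receive reward are the ones worth adding to a replay buffer for training the neural policy.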
Summary: The authors propose a method to make exploration in tasks with very sparse rewards more efficient. They propose a method called Workflow-Guided Exploration (WGE) which is learnt from demonstrations but is environment-agnostic. Episodes are generated by first turning demonstrations into a workflow lattice. This lattice encodes actions which are in some sense similar to those in the demonstration. By rolling out episodes which are randomly sampled from this set of similar actions for each encountered state, it is claimed that other methods like Behavior Cloning + RL (BC-then-RL) can be outperformed in terms of sample complexity, since high-reward episodes can be sampled with much higher probability. A novel NN architecture (DOMNet) is also presented which can embed structured documents like HTML webpages.
Comments:
- The paper is well-written and relevant literature is cited and discussed.
- My main concern is that while imitation learning and inverse reinforcement learning are mentioned and discussed in the related work section as classes of algorithms for incorporating prior information, there is no baseline experiment using either of these methods. Note that the work of Ross and Bagnell (2010, 2011), cited in the paper, establishes theoretically that Behavior Cloning does not work in such situations due to the non-iid data generation process in such sequential decision-making settings (the mistakes grow quadratically in the length of the horizon). Their proposed algorithm DAgger fixes this (the mistakes by the policy are linear in the horizon length) by using an iterative procedure where the learnt policy from the previous iteration is executed, expert demonstrations on the visited states are recorded, the new data thus generated is added to the previous data, and a new policy is retrained. DAgger and related methods like AggreVaTe provide sample-efficient ways of exploring the environment near where the initial demonstrations were given. WGE is aiming to do the same: explore near demonstration states.
- The problem with putting only episodes which yield high reward into the replay buffer is that extrapolation will inevitably lead the learnt policy towards parts of the state space where there is actually low reward, but since no support is present the policy makes such mistakes.
- Therefore it would be good to have DAgger or a similar imitation learning algorithm used as a baseline in the experiments.
- Similar concerns apply to IRL methods not being used as baselines.
Update: Review score updated after discussion with authors below.
iclr_2018_ByuI-mW0W
We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data. We propose a simple procedure for assessing generative adversarial network performance based on a principled consideration of what the actual goal of generalisation is. Our approach involves using a test set to estimate the Wasserstein distance between the generative distribution produced by our procedure, and the underlying data distribution. We use this procedure to assess the performance of several modern generative adversarial network architectures. We find that this procedure is sensitive to the choice of ground metric on the underlying data space, and suggest a choice of ground metric that substantially improves performance. We finally suggest that attending to the ground metric used in Wasserstein generative adversarial network training may be fruitful, and provide a concrete formulation for doing so.
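As a sketch of the proposed evaluation, the snippet below computes the exact p-Wasserstein distance between two equal-size empirical distributions (generated samples versus a held-out test set) by optimal matching; the embed argument is where a different ground metric, such as distances in a pretrained DenseNet feature space, would be plugged in. Equal sample sizes and a Euclidean base distance are assumptions of the sketch.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def empirical_wasserstein(gen_samples, test_samples, embed=lambda x: x, p=2):
    # Exact p-Wasserstein distance between two equal-size empirical
    # distributions: an optimal one-to-one matching under the chosen ground
    # metric. `embed` is where a learned feature space would replace raw pixels.
    A = np.asarray(embed(gen_samples)).reshape(len(gen_samples), -1)
    B = np.asarray(embed(test_samples)).reshape(len(test_samples), -1)
    C = cdist(A, B, metric='euclidean') ** p
    rows, cols = linear_sum_assignment(C)       # optimal transport plan (a matching)
    return C[rows, cols].mean() ** (1.0 / p)

Comparing this score against the same score computed with the training set in place of the generator samples gives the memorization baseline that the review below refers to as the "Dirac estimation".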
The paper aims to provide a quality measure/test for GANs. The objective is ambitious and deserves attention. As GANs minimize some f-divergence measure, the paper remarks that computing a Wasserstein distance between two distributions made of sums of Diracs is not a degenerate case and is tractable. The authors therefore propose to evaluate the current approximation of a distribution learnt by a GAN by using this distance as a baseline performance (in terms of W distance, computed on a held-out dataset). A first remark is that the paper does not clearly develop the interest of trying to reach a threshold of performance in W distance rather than just trying to minimize the desired f-divergence. More specifically, since they assess performance in terms of W distance, I would be tempted to just minimize that criterion. It would be very interesting to have arguments on why being better than the "Dirac estimation" in terms of W2 distance would lead to better performance on other tasks (such as other f-divergences or image generation). According to the authors, the core claims are: "1/ We suggest a formalisation of the goal of GAN training (/generative modelling more broadly) in terms of divergence minimisation. This leads to a natural, testable notion of generalisation." Formalization in terms of divergence minimization is not new (see O. Bousquet et al., https://arxiv.org/pdf/1701.02386.pdf) and I do not feel that this paper actually performs any "test" (in a statistical sense). In my opinion the contribution is more about exhibiting a baseline that has to be beaten by any algorithm interested in learning the distribution in terms of W2 distance. "2/ We use this test to evaluate the success of GAN algorithms empirically, with the Wasserstein distance as our divergence." Here the distance does not seem so good, because generation performance does not seem to be related only to W2 distance. Nevertheless, there are interesting observations in the paper about the sensitivity of this metric to the blurring of pictures. I would have enjoyed more digging in this direction. The authors propose to solve this issue by relying on an embedded space where the L2 distance makes more sense for pictures (DenseNet). This is of course very reasonable, but I would expect anyone working on distributions over pictures to work with such embeddings. I am not sure this paper opens a new way to improve the embedding by making use of unlabelled data. One could think about allowing the weights of the embedding to vary while the f-divergence is minimized, but this is not done in the submitted work. "3/ We find that whether our proposed test matches our intuitive sense of GAN quality depends heavily on the ground metric used for the Wasserstein distance." This claim is highly biased by who is giving the "intuitive sense"; it would be much better evaluated through a Mechanical Turk test. "4/ We discuss how to use these insights to improve the design of WGANs more generally." As our understanding of GAN dynamics is very coarse, I feel it is not a good idea to claim that "doing xxx should improve things" without actually trying it.
iclr_2018_ryk77mbRZ
Recurrent neural networks (RNNs) are powerful models for sequential data. They can approximate arbitrary computations, and have been used successfully in domains such as text and speech. However, the flexibility of RNNs makes them susceptible to overfitting and regularization is important. We develop a noise-based regularization method for RNNs. The idea is simple and easy to implement: we inject noise in the hidden units of the RNN and then maximize the original RNN's likelihood averaged over the injected noise. On a language modeling benchmark, our method achieves better performance than its deterministic RNN and variational dropout (Gal and Ghahramani, 2016).
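A minimal PyTorch sketch of the idea: wrap a deterministic recurrent cell, inject noise into its hidden state during training, and average the training loss over a few independent noise draws as a simple surrogate for the noise-averaged likelihood. Multiplicative Gaussian noise with scale gamma is just one member of the construction described above, and the two-draw average is an assumption of the sketch.

import torch
import torch.nn as nn

class NoisyRNNCell(nn.Module):
    # Wraps a deterministic cell f and injects noise into the hidden state,
    # here multiplicative Gaussian noise with scale gamma.
    def __init__(self, cell, gamma=0.1):
        super().__init__()
        self.cell, self.gamma = cell, gamma

    def forward(self, x_t, h_t):
        h_next = self.cell(x_t, h_t)
        if self.training:                                  # noise only at training time
            h_next = h_next * (1.0 + self.gamma * torch.randn_like(h_next))
        return h_next

def noisin_loss(model, x, y, criterion, n_noise=2):
    # Average the training loss over a few independent noise draws; each call
    # to model(x) samples fresh noise inside the noisy cells.
    return sum(criterion(model(x), y) for _ in range(n_noise)) / n_noise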
The RNN transition function is: h_t+1 = f(h_t,x_t) This paper proposes using a stochastic transition function instead of a deterministic one. i.e h_{t+1} \sim expfam(mean = f(h_t,x_t), gamma) where expfam denotes a distribution from the exponential family. The experimental results consider text modeling (evaluating on perplexity) on Penn Treebank and Wikitext-2. The method of regularization is compared to a reimplementation of Variational Dropout and no regularization. The work is written clearly and easy to follow. Overall, the core idea in this work is interesting but underexplored. * As of when I read this paper, all results on this work used 200 hidden units realizing results that were well off from the state of the art results on Penn Tree Bank (as pointed out by the external reader). The authors responded by stating that this was done to achieve a relative comparison. A more interesting comparison, in addition to the ones presented, would be to see how well each method performs while not controlling for hidden layer size. Then, it might be that restricting the number of hidden dimensions is required for the RNN without any regularization but for both Variational Dropout and Noisin, one obtains better results with a larger the hidden dimension. * The current experimental setup makes it difficult to assess when the proposed regularization is useful. Table 2 suggests the answer is sometimes and Table 3 suggests its marginally useful when the RNN size is restricted. * How does the proposed method's peformance compare to Zoneout https://arxiv.org/pdf/1606.01305.pdf? * Clarifying the role of variational inference: I could be missing something but I don't see a good reason why the prior (even if learned) should be close to the true posterior under the model. I fear the bound in Section (3) [please include equation numbers in the paper] could be quite loose. * What is the rationale for not comparing to the model proposed in [Chung et. al] where there is a stochastic and deterministic component to the transition function? In what situations do we expect the fully stochastic transition here to work better than a model that has both? Presumably, some aspect of the latent variable + RNN model could be expressed by having a small variance for a subset of the dimensions and large one for the others but since gamma is the same across all dimensions of the model, I'm not sure this feature can be incorporated into the current approach. Such a comparison would also empirically verify what happens when learning with the prior versus doing inference with an approximate posterior helps. * The regularization is motivated from the point of view of sampling the hidden states to be from the exponential family, but all the experiments provided seem to use a Gaussian distribution. This paper would be strengthened by a discussion and experimentation with other kinds of distributions in the exponential family.
iclr_2018_B1QgVti6Z
EMPIRICAL RISK LANDSCAPE ANALYSIS FOR UNDERSTANDING DEEP NEURAL NETWORKS This work aims to provide a comprehensive landscape analysis of the empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of d_i neurons in the i-th layer, we prove that the gradient of its empirical risk uniformly converges to the one of its population risk at a rate governed by the following quantities. Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of the weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide convergence guarantees for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep nonlinear neural networks with sigmoid activation functions. We prove similar results for the convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself. To the best of our knowledge, this work is the first one theoretically characterizing the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding of how the neural network depth l, the layer width d_i, the network size d, the sparsity in weight and the parameter magnitude r determine the neural network landscape.
Overall, this work seems like a reasonable attempt to answer the question of how the empirical loss landscape relates to the true population loss landscape. The analysis answers: 1) When empirical gradients are close to true gradients 2) When empirical isolated saddle points are close to true isolated saddle points 3) When the empirical risk is close to the true risk. The answers are all of the form that if the number of training examples exceeds a quantity that grows with the number of layers, width and the exponential of the norm of the weights with respect to depth, then empirical quantities will be close to true quantities. I have not verified the proofs in this paper (given short notice to review) but the scaling laws in the upper bounds found seem reasonably correct. Another reviewer's worry about why depth plays a role in the convergence of empirical to true values in deep linear networks is a reasonable worry, but I suspect that depth will necessarily play a role even in deep linear nets because the backpropagation of gradients in linear nets can still lead to exponential propagation of errors between empirical and true quantities due to finite training data. Moreover the loss surface of deep linear networks depends on depth even though the expressive capacity does not. An analysis of dynamics on this loss surface was presented in Saxe et. al. ICLR 2014 which could be cited to address that reviewer's concern. However, the reviewer's suggestion that the results be compared to what is known more exactly for simple linear regression is a nice one. Overall, I believe this paper is a nice contribution to the deep learning theory literature. However, it would even better to help the reader with more intuitive statements about the implications of their results for practice, and the gap between their upper bounds and practice, especially given the intense interest in the generalization error problem. Because their upper bounds look similar to those based on Rademacher complexity or VC dimension (although they claim theirs are a little tighter) - they should put numbers in to their upper bounds taken from trained neural networks, and see what the numerical evaluation of their upper bounds turn out to be in situations of practical interest where deep networks show good generalization performance despite having significantly less training data than number of parameters. I suspect their upper bounds will be loose, but still - it would be an excellent contribution to the literature to quantitatively compare theory and practice with bounds that are claimed to be slightly tigher than previous bounds. Even if they are loose - identifying the degree of looseness could inspire interesting future work.
iclr_2018_H1cKvl-Rb
We show how an ensemble of Q*-functions can be leveraged for more effective exploration in deep reinforcement learning. We build on well established algorithms from the bandit setting, and adapt them to the Q-learning setting. We propose an exploration strategy based on upper-confidence bounds (UCB). Our experiments show significant gains on the Atari benchmark.
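The action-selection rule can be written in a few lines; below is a numpy sketch in which the empirical mean and standard deviation across an ensemble of Q-heads form an upper confidence bound. The exploration coefficient lam and the use of the standard deviation as the uncertainty measure are assumptions of the sketch.

import numpy as np

def ucb_action(q_heads, state, lam=0.1):
    # q_heads: list of K functions, each mapping a state to a vector of
    # Q-values, e.g. the heads of an ensemble of DQNs trained in parallel.
    Q = np.stack([q(state) for q in q_heads])     # shape (K, n_actions)
    mu = Q.mean(axis=0)
    sigma = Q.std(axis=0)
    return int(np.argmax(mu + lam * sigma))       # optimism in the face of uncertainty

Setting lam to 0 reduces to acting greedily on the ensemble-average Q-values, which helps isolate how much of any gain comes from optimism versus from ensembling alone.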
This paper paper uses an ensemble of networks to represent the uncertainty in deep reinforcement learning. The algorithm then chooses optimistically over the distribution induced by the ensemble. This leads to improved learning / exploration, notably better than the similar approach bootstrapped DQN. There are several things to like about this paper: - It is a clear paper, with a simple message and experiments that back up the claims. - The proposed algorithm is simple and could be practical in a lot of settings and even non-DQN variants. - It is interesting that Bootstrapped DQN gets such poor performance, this suggests that it is very important in the original paper https://arxiv.org/abs/1602.04621 that "ensemble voting" is applied to the test evaluation... (why do you think this is by the way, do you think it has something to do with the data being *more* off-policy / diverse under a TS vs UCB scheme?) On the other hand: - The novelty/scope of this work is somewhat limited... this is more likely (valuable) incremental work than a game-changer. - Something feels wrong/hacky/incomplete about just doing "ensemble" for uncertainty without bootstrapping/randomization... if we had access to more powerful optimization techniques then this certainly wouldn't be sensible - I think that you should mention that you are heavily reliant on "random initialization + SGD/Adam + specific network architecture" to maintain this idea of uncertainty. For example, this wouldn't work for linear value functions! - I think the original bootstrapped DQN used "ensemble voting" at test time, so maybe you should change the labels or the way this is introduced/discussed. It's definitely very interesting that *essentially* the learning benefit is coming from ensembling (rather than "raw" bootstrapped DQN) and UCB still looks like it does better. - I'm not convinced that page 4 and the "Bayesian" derivation really add too much value to this paper... alternatively, maybe you could introduce the actual algorithm first (train K models in parallel) and then say "this is similar to particle filter" and add the mathematical derivation after, rather than as if it was some complex formula derived. If you want to reference some justification/theory for ensemble-based uncertainty approximation you might consider https://arxiv.org/pdf/1705.07347.pdf instead. - I think this paper might miss the point of the "bigger" problem of efficient exploration in RL... or even how to get "deep" exploration with deep RL. Yes this algorithm sees improvements across Atari, but it's not clear why/if this is a step change versus simply increasing the amount of replay or tuning the learning rate. (Actually I do believe this algorithm can demonstrate deep exploration... but it looks like we're not seeing the big improvements on the "sub-human" games you might hope.) Overall I do think this is a pretty good short paper/evaluation of UCB-ensembles on Atari. The scope/insight of the paper isn't groundbreaking, but I think it delivers a clear short message on the Atari benchmark. Perhaps this will encourage people to dig deeper into some of these issues... I vote accept.
iclr_2018_ByeqORgAW
Published as a conference paper at ICLR 2018 PROXIMAL BACKPROPAGATION We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit instead of explicit gradient steps to update the network parameters during neural network training. Our algorithm is motivated by the step size limitation of explicit gradient descent, which poses an impediment for optimization. ProxProp is developed from a general point of view on the backpropagation algorithm, currently the most common technique to train neural networks via stochastic gradient descent and variants thereof. Specifically, we show that backpropagation of a prediction error is equivalent to sequential gradient descent steps on a quadratic penalty energy, which comprises the network activations as variables of the optimization. We further analyze theoretical properties of ProxProp and in particular prove that the algorithm yields a descent direction in parameter space and can therefore be combined with a wide variety of convergent algorithms. Finally, we devise an efficient numerical implementation that integrates well with popular deep learning frameworks. We conclude by demonstrating promising numerical results and show that ProxProp can be effectively combined with common first order optimizers such as Adam.
The paper uses a lesser-known interpretation of the gradient step of a composite function (i.e., via reverse mode automatic differentiation or backpropagation), and then replaces one of the steps with a proximal step. The proximal step requires the solution of a positive-definite linear system, so it is approximated using a few iterations of CG. The paper provides theory to show that their proximal variant (even with the CG approximations) can lead to convergent algorithms (and since practical algorithms are not necessarily globally convergent, most of the theory shows that the proximal variant has similar guarantees to a standard gradient step).

On reading the abstract and knowing quite a bit about proximal methods, I was initially skeptical, but I think the authors have done a good job of making their case. It is a well-written, very clear paper, it has a good understanding of the literature, and it does not overstate the results. The experiments are serious, and done using standard state-of-the-art tools and architectures. Overall, it is an interesting idea, and due to the current focus on neural nets, it is of interest even though it is not yet providing substantial improvements.

The main drawback of this paper is that there is no theory to suggest the ProxProp algorithm has better worst-case convergence guarantees, and that the experiments do not show a consistent benefit (in terms of time) of the method. On the one hand, I somewhat agree with the authors that "while the running time is higher... we expect that it can be improved through further engineering efforts", but on the other hand, the idea of nested algorithms ("matrix-free" or "truncated Newton") always has this issue. A very similar type of idea comes up in constrained or proximal quasi-Newton methods, and I have seen many papers (or paper submissions) on this style of method (e.g., see the 2017 SIAM Review paper on FWI by Metivier et al. at https://doi.org/10.1137/16M1093239). In every case, the answer seems to be that it can work on *some problems* and for a few well-chosen parameters, so I don't yet buy that ProxProp is going to make huge savings on a wide range of problems.

In brief: quality is high, clarity is high, originality is high, and significance is medium.
Pros: interesting idea, relevant theory provided, high-quality experiments
Cons: no evidence that this is a "break-through" idea

Minor comments:
- Theorems seemed reasonable and I have no reason to doubt their accuracy.
- No typos at all, which I find very unusual. Nice job!
- In Algo 1, it would help to be more explicit about the updates (a), (b), (c), e.g., for (a), give a reference to eq (8), and for (b), reference equations (9,10). It's nice to have it very clear, since "gradient step" doesn't make it clear what the stepsize is, or whether this is done in a "Jacobi-like" or "Gauss-Seidel-like" fashion. (c) has no reference equation, does it?
- Similarly, for Algo 2, add references. In particular, tie in the stepsizes tau and tau_theta here.
- Motivation in section 4.1 was a bit iffy. A larger stepsize is not always better, and smaller is not worse. Minimizing a quadratic f(x) = .5||x||^2 will converge in one step with a step size of 1 because this is well-conditioned; on the flip side, slow convergence comes from lack of strong convexity, or, with strong convexity, ill-conditioning of the Hessian (like a stiff ODE).
- The form of equation (6) was very nice, and you could also point out the connection with backward Euler for finite-difference methods (a brief sketch of this connection is given after these comments). This was the initial setting of analysis for most of the original results that rely on the proximal operator (e.g., Lions and Mercier, 1970s).
- Eq (9): this is done component-wise, i.e., Hadamard product, right?
- About eq (12): even if softmax cross-entropy doesn't have a closed-form prox (and check the tables of Combettes and Pesquet), because it is separable (if I understand correctly), it ought to be amenable to solving with a handful of Newton iterations, which would be quite cheap. Prox tables (see also the new edition of Bauschke and Combettes' book): P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in: Fixed-Point Algorithms for Inverse Problems in Science and Engineering (2011), http://www4.ncsu.edu/~pcombet/prox.pdf
- Below prop 4, discussing why not to make step (b) proximal, this was a bit vague to me. It would be nice to expand this.
- Page 6 near the top: to apply the operator, in the fully-connected case, this is just a matrix multiply, right? And in a conv net, just a convolution? It would help the reader to be more explicit here.
- Section 5.1, 2nd paragraph: did you swap tau_theta and tau, or am I just confused? The wording here was confusing.
- Fig 2 was not that convincing, since the figure with time showed that either the usual BackProp or the exact ProxProp was best, so why care about the approximate ProxProp with a few CG iterations? The argument of better generalization is based on very limited experiments and without any explanation, so I find that a weak argument (and it just seems weird that inexact CG gives better generalization). The right figure would be nice to see with time on the x-axis as well.
- Section 5.2: this was nice and contributed to my favorable opinion of the work. However, any kind of standard convergence theory for the usual SGD requires the stepsize to change per iteration and decrease toward zero. I've heard of heuristics saying that a fixed stepsize is best and then you just make sure to stop the algorithm a bit early before it diverges or behaves wildly -- is that true here?
- Final section of 5.3, about the validation accuracy, and the accuracy on the test set after 50 epochs: I am confused why these are different numbers. Is it just because 50 epochs wasn't enough to reach convergence, while 300 seconds was? And why limit to 50 epochs then? Basically, what's the difference between the bottom two plots in Fig 3 (other than scaling the x-axis by time/epoch), and why does ProxProp achieve better accuracy only in the right figure?
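As a brief illustration of the backward-Euler / proximal connection mentioned in the comment on equation (6), here is a hedged sketch in generic notation; the quadratic penalty below is my own illustrative choice, not necessarily the paper's exact energy. The explicit update \theta^{+} = \theta - \tau \nabla f(\theta) is forward Euler on the gradient flow \dot{\theta} = -\nabla f(\theta), whereas the implicit (backward Euler) update solves \theta^{+} = \theta - \tau \nabla f(\theta^{+}), i.e.

\theta^{+} = \mathrm{prox}_{\tau f}(\theta) = \arg\min_{z} \; f(z) + \tfrac{1}{2\tau}\|z - \theta\|^{2}.

If, for instance, f(z) = \tfrac{1}{2}\|Az - b\|^{2} (a quadratic penalty of the kind the paper builds on), the prox reduces to the positive-definite linear system

(I + \tau A^{\top} A)\,\theta^{+} = \theta + \tau A^{\top} b,

which is exactly the sort of system that a few conjugate-gradient iterations can solve approximately.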
iclr_2018_BJB7fkWR-
Many deep reinforcement learning approaches use graphical state representations, which means that visually distinct games that share the same underlying structure cannot effectively share knowledge. This paper outlines a new approach for learning underlying game state embeddings irrespective of the visual rendering of the game state. We utilise approaches from multi-task learning and domain adaptation in order to place visually distinct game states on a shared embedding manifold. We present our results in the context of deep reinforcement learning agents.
In this paper, the authors propose a new approach for learning the underlying structure of visually distinct games. The proposed approach combines convolutional layers for processing input images, Asynchronous Advantage Actor-Critic for the deep reinforcement learning task, and an adversarial approach to force the embedding representation to be independent of the visual representation of the games. The network architecture is suitably described and seems reasonable for simultaneously learning similar games that are visually distinct. However, the authors do not explain how this architecture can be used to perform domain adaptation. Indeed, if some games have been learnt by the proposed algorithm, the authors do not specify which modules have to be retrained to learn a new game. This is a critical issue, because the experiments show that there is no gain in terms of performance from learning a shared embedding manifold (see DA-DRL versus baseline in figure 5). If there is a gain from learning a shared embedding manifold, which is plausible, this gain should be evaluated between a baseline that learns the games separately and an algorithm that learns the games incrementally. Moreover, in the experimental setting, the games are not similar but simply the same. My opinion is that this paper is not ready for publication. The interesting issues are deferred to future works.
iclr_2018_Skdvd2xAZ
A SCALABLE LAPLACE APPROXIMATION FOR NEURAL NETWORKS We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area - that of Laplace approximations to neural network posteriors. The Laplace method requires two stages: 1) obtaining a point estimate of the parameters, followed by 2) estimation of the curvature (a short sketch of the resulting approximation is given below). Since 1) is close to common practice, it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case. A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl, or to add regularisation to make this true. The necessary amount of regularisation can be large, as reported in section 5.4.

The paper is generally well written. In particular, the mathematical exposition attains good clarity. Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and by Botev et al. in previous works. The paper is generally well referenced. Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point. There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (it is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior). The size of model and data would not be prohibitive for either of these methods in this instance. All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method. It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature, such as that of Louizos and Welling 2016, which also used matrix normal distributions.

It is good that the paper includes a modern architecture with a more challenging dataset. It is a shame the method does not work better in this instance, but the authors should not be penalized for reporting this. I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution. This complements experiments where there is dataset shift and we wish to show robustness. I would be very interested to know how useful the implied marginal likelihoods of the approximation were, as suggested for further work in the conclusion.
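For context on the two-stage procedure described above, here is a hedged sketch of the standard Laplace approximation and of a Kronecker-factored curvature approximation of the kind the abstract describes; the exact factor definitions used in the paper may differ. Writing \theta^{*} for the point estimate (the posterior mode) and

H = -\nabla^{2}_{\theta} \log p(\theta \mid \mathcal{D}) \big|_{\theta = \theta^{*}},

a second-order expansion of the log posterior around \theta^{*} gives the Gaussian approximation

p(\theta \mid \mathcal{D}) \approx \mathcal{N}(\theta^{*}, \; H^{-1}).

For a layer with input activations a and backpropagated pre-activation derivatives g, a Kronecker-factored approximation replaces the layer's curvature block by \mathbb{E}[a a^{\top}] \otimes \mathbb{E}[g g^{\top}], i.e. two factor matrices whose sizes are the layer's input and output dimensions, which is what keeps the method scalable.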
iclr_2018_rywDjg-RW
NEURAL-GUIDED DEDUCTIVE SEARCH FOR REAL-TIME PROGRAM SYNTHESIS FROM EXAMPLES Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
The paper presents a branch-and-bound approach to learn good programs (consistent with data, expected to generalise well), where an LSTM is used to predict which branches in the search tree should lead to good programs (at the leaves of the search tree). The LSTM learns from inputs of program spec + candidate branch (given by a grammar production rule) and outputs of quality scores for programs. The issue of how greedy to be in this search is addressed. In the authors' setup we simply assume we are given a 'ranking function' h as an input (which we treat as a black box). In practice this will simply be a guess (perhaps a good educated one) as to which programs will perform correctly on future data. As the authors indicate, a more ambitious paper would consider learning h, rather than assuming it as a given.

The paper has a number of positive features. It is clearly written (without typos or grammatical problems). The empirical evaluation against PROSE is properly done and shows the presented method working as hoped. This was a competent approach to an interesting (real) problem. However, the 'deep learning' aspect of the paper is not prominent: an LSTM is used as a plug-in and that is about it. Also, although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic. The authors do not explain what "without attention" means.

I think the authors should mention the existence of (logic) program synthesis using inductive logic programming. There are also (closely related) methods developed by the LOPSTR (logic-based program synthesis and transformation) community. Many of the ideas here are reminiscent of methods existing in those communities (e.g. top-down search with heuristics). The use of a grammar to define the space of programs is similar to the "DLAB" formalism developed by researchers at KU Leuven.

ADDED AFTER REVISIONS/DISCUSSIONS
The revised paper has a number of improvements which have led me to give it a slightly higher rating.
iclr_2018_HJCXZQbAZ
HIERARCHICAL DENSITY ORDER EMBEDDINGS By representing words with probability densities rather than point vectors, probabilistic word embeddings can capture rich and interpretable semantic information and uncertainty (Vilnis & McCallum, 2014; Athiwaratkun & Wilson, 2017). The uncertainty information can be particularly meaningful in capturing entailment relationships - whereby general words such as "entity" correspond to broad distributions that encompass more specific words such as "animal" or "instrument". We introduce density order embeddings, which learn hierarchical representations through encapsulation of probability distributions. In particular, we propose simple yet effective loss functions and distance metrics, as well as graph-based schemes to select negative samples to better learn hierarchical probabilistic representations. Our approach provides state-of-the-art performance on the WordNet hypernym relationship prediction task and the challenging HyperLex lexical entailment dataset - while retaining a rich and interpretable probabilistic representation.
The paper introduces a novel method for modeling hierarchical data. The work builds on previous approaches, such as Vilnis and McCallum's Word2Gauss and Vendrov's Order Embeddings, to establish a partial order over probability densities via encapsulation, which allows it to model hierarchical information. The aim is to learn embeddings from supervised structured data, such as WordNet. The work also investigates various schemes for selecting negative samples. The evaluation consists of hypernym detection on WordNet and graded lexical entailment, in the shape of HyperLex. This is good work: it is well written, the experiments are thorough and the proposed method is original and works well. Section 3 could use some more signposting. Especially for 3.3 it would be good to explain (either at the beginning of section 3, or the beginning of section 3.3) why these measures matter and what is going to be done with them. It's good that LEAR is mentioned and compared against, even though it was very recently published. Please do note that the authors' names are misspelled: Vuli\'c not Vulic, Mrk\v{s}i\'c instead of Mrksic. If I am not mistaken, the Vendrov WordNet test set is a set of positive pairs. I would like to see more details on how the evaluation is done here: presumably, the lower I set the threshold, the higher my score? Or am I missing something? It would be useful to describe exactly the extent to which supervision is used - the method only needs positive and negative links, and does not require any additional order information (i.e., WordNet strictly contains more information than what is being used). I don't see what Socher et al. (2013) has to do with the loss in equation (7). Or did they invent the margin loss? Word2gauss also evaluates on similarity and relatedness datasets. Did you consider doing that here too? "hypothesis proposed by Santus et al. which says" is not a valid reference.
iclr_2018_rkaT3zWCZ
Workshop track - ICLR 2018 BUILDING GENERALIZABLE AGENTS WITH A REALISTIC AND RICH 3D ENVIRONMENT Teaching an agent to navigate in an unseen 3D environment is a challenging task, even in the event of simulated environments. To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g. color, texture, object changes), and also high-level variations (e.g. layout changes of the environment). To improve overall generalization, all types of variations in the environment have to be taken under consideration via different levels of data augmentation steps. To this end, we propose House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of visually realistic houses, ranging from single-room studios to multi-storied houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset. The diversity in House3D opens the door towards scene-level augmentation, while the label-rich nature of House3D enables us to inject pixel- & task-level augmentations such as domain randomization (Tobin et al., 2017) and multi-task training. Using a subset of houses in House3D, we show that reinforcement learning agents trained with an enhancement of different levels of augmentations perform much better in unseen environments than our baselines with raw RGB input by over 8% in terms of navigation success rate. House3D is publicly available at
Building rich 3D environments in which to run simulations is a very interesting area of research.

Strengths:
1. The authors propose a virtual environment of indoor scenes having a much larger scale compared to similar interactive environments, and access to multiple visual modalities. They also show how the number of available scenes greatly impacts generalization in navigation-based tasks.
2. The authors provide a thorough analysis of the contribution of different feature types (Mask, Depth, RGB) towards the success rate of the goal task. The improvements and generalization brought by the segmentation and depth masks give interesting insights towards building new navigation paradigms for real-world robotics.

Weaknesses:
1. The authors claim that the proposed environment allows for multiple applications and interactions; however, from the description in section 3, the capacities of the simulator beyond navigation are unclear. The dataset proposed, House3D, adds a number of functionalities over the SUNCG dataset. The SUNCG dataset provides a large number of 3D scanned houses. The most important contributions with respect to SUNCG are:
- An efficient renderer: an important aspect.
- Introducing physics: this is very interesting; unfortunately, the contribution here is very small. Although I am sure the authors are planning to move beyond the current state of their implementation, the only physical constraint currently implemented is an occupancy rule and collision detection. This is not technically challenging. Therefore, the added novelty with respect to SUNCG is very limited.
2. The paper presents the proposed task as navigation from a high-level task description, but given that the instructions are fixed for a given target, there are only 5 possible instructions, which are encoded as one-hot vectors. Given this setting, it is unclear why a gated attention mechanism is needed. While this limited setting allows for a clear generalization analysis, it would have been good to study a setting with more complex instructions, allowing evaluation on instructions not seen during training.
3. While the authors make a good point showing generalization towards unseen scenes, it would have been good to also show generalization towards real scenarios, demonstrating the realistic nature of House3D and the advantages of using non-RGB features.
4. It would have been good to report an analysis of the number of steps performed by the agent before reaching its goal in the success cases. It seems to me that the continuous policy would be justified in this setting.

Comments
- It is unclear to me how the reward shaping addition helps generalize to unseen houses at test time, as suggested by the authors.
- I miss a reference to (https://arxiv.org/pdf/1609.05143.pdf) beyond the AI-THOR environment, given that they also approach target-driven navigation using an actor-critic approach.

The paper proposes a new realistic indoor virtual environment, having a much larger number of scenes than similar environments. From the experiments shown, it seems that the scale increase, together with the availability of features such as Segmentation and Depth, improves generalization in navigation tasks, which makes it a promising framework for future work in this direction. However, the task proposed seems too simple considering the power of this environment, and the models used to solve the task don't seem to bring relevant novelties over previous approaches. (https://arxiv.org/pdf/1706.07230.pdf)
iclr_2018_BkUHlMZ0b
EVALUATING THE ROBUSTNESS OF NEURAL NETWORKS: AN EXTREME VALUE THEORY APPROACH The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks. Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the ℓ_2 and ℓ_∞ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed achieve better CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifier.
Summary
========
The authors present CLEVER, an algorithm which consists in evaluating the (local) Lipschitz constant of a trained network around a data point. This is used to compute a lower bound on the minimal perturbation of the data point needed to fool the network. The method proposed in the paper already exists for classical functions; they only transpose it to neural networks. Moreover, the lower bound comes from basic results in the analysis of Lipschitz-continuous functions.

Clarity
=====
The paper is clear and well-written.

Originality
=========
This idea is not new: if we search for "Lipschitz constant estimation" in Google Scholar, we get for example
Wood, G. R., and B. P. Zhang. "Estimation of the Lipschitz constant of a function." (1996)
which presents a similar algorithm (i.e., estimation of the maximum slope with reverse Weibull).

Technical quality
==============
The main theoretical result in the paper is the analysis of the lower bound on \delta, the smallest perturbation to apply to a data point to fool the network. This result is obtained almost directly by writing the bound on a Lipschitz-continuous function, | f(y)-f(x) | < L || y-x ||, where x = x_0 and y = x_0 + \delta (a short derivation along these lines is sketched after this review).

Comments:
- Lemma 3.1: why cite Paulavicius and Zilinskas for the definition of Lipschitz continuity? Moreover, a Lipschitz-continuous function does not need to be differentiable at all (e.g. |x| is Lipschitz with constant 1 but sharp at x=0). Indeed, this constant can be obtained more easily if the gradient exists, but this is not a requirement.
- (Flaw?) Theorem 3.2: This theorem works for a fixed target class, since g = f_c - f_j for fixed j. However, once g = min_j f_c - f_j, this theorem is not clear with the constant L_q. Indeed, the function g should be g(x) = min_{k \neq c} f_c(x) - f_k(x). Thus its Lipschitz constant is different, potentially equal to L_q = max_{k} \| L_q^k \|, where L_q^k is the Lipschitz constant of f_c - f_k. If the theorem remains unchanged after this modification, you should clarify the proof. Otherwise, the theorem will work with the maximum over all Lipschitz constants, but the theoretical result will be weakened.
- Theorem 4.1: I do not see the purpose of this result in this paper. This should be better motivated.

Numerical experiments
====================
Globally, the numerical experiments are in favor of the presented method. The authors should also add information about the time it takes to compute the bound, the evolution of the bound as a function of the number of samples, and the distribution of the relative gap between the lower bound and the best adversarial example. Moreover, the numerical experiments appear to be carried out in the context of targeted attacks. To show the real effectiveness of the approach, the authors should also show the effectiveness of the lower bound in the context of non-targeted attacks.

#######################################################
Post-rebuttal review
---------------------------
Given the details the authors provided in response to my review, I decided to adjust my score. The method is simple and shows itself to be extremely effective/accurate in practice.

Detailed answers:
1) Indeed, I was not aware that the paper only focuses on one-dimensional functions. However, they still work with fewer assumptions, i.e., with non-differentiable functions. I was pointing out the similarities between their approach and yours: the two algorithms (CLEVER and Slope) are basically the same, and using a limit you can go from "slope" to "gradient norm". In any case, I have read the revision, and the additional numerical experiment to compare CLEVER with their method is a good point.
2) "Overall, our analysis is simple and more intuitive, and we further facilitate numerical calculation of the bound by applying the extreme value theory in this work." This is right. I am just surprised it has not been done before, since it requires only a few lines of derivation. I searched a bit but it is not possible to find any kind of similar result. Moreover, this leads to good performance, so there is no need for something more complex.
3) "The usual Lipschitz continuity is defined in terms of L2 norm and the extension to an arbitrary Lp norm is not straightforward." Indeed, people usually state Lipschitz continuity using the L2 norm, but the original definition is wider. Briefly, if you have a differentiable scalar function from a space E -> R, then the gradient is a function from the space E to E*, the dual of the space E. Let || . || be the norm of space E. Then || . ||* is the dual norm of || . ||, and also the norm of E*. In that case, Lipschitz continuity reads f(x)-f(y) <= L || x-y ||, with L >= max_{x in E} || f'(x) ||*. In the case where || . || is an \ell_p norm, || . ||* is the \ell_q norm, with 1/p + 1/q = 1. If you are interested, there is a clear and concise explanation in the introduction of this paper: Accelerating the cubic regularization of Newton's method on convex problems, by Yurii Nesterov.
I have no additional remarks for 4) -> 9), since everything is fixed in the new version of the paper.
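For completeness, here is the short derivation alluded to in the Technical quality section, written in my own generic notation (g is the margin between the predicted class c and a fixed target class j, and L_q its local Lipschitz constant with respect to the ℓ_p norm, 1/p + 1/q = 1); this is a sketch, not a restatement of the paper's exact theorem. By Lipschitz continuity of g(x) = f_c(x) - f_j(x) around x_0,

g(x_0 + \delta) \;\ge\; g(x_0) - L_q \,\|\delta\|_p .

The targeted attack towards class j fails as long as g(x_0 + \delta) > 0, so any perturbation with \|\delta\|_p < g(x_0)/L_q cannot fool the classifier; equivalently, every successful targeted adversarial perturbation must satisfy

\|\delta\|_p \;\ge\; g(x_0) / L_q .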
iclr_2018_HyunpgbR-
Reinforcement learning in environments with large state-action spaces is challenging, as exploration can be highly inefficient. Even if the dynamics are simple, the optimal policy can be combinatorially hard to discover. In this work, we propose a hierarchical approach to structured exploration to improve the sample efficiency of on-policy exploration in large state-action spaces. The key idea is to model a stochastic policy as a hierarchical latent variable model, which can learn low-dimensional structure in the state-action space, and to define exploration by sampling from the low-dimensional latent space. This approach enables lower sample complexity, while preserving policy expressivity. In order to make learning tractable, we derive a joint learning and exploration strategy by combining hierarchical variational inference with actor-critic learning. The benefits of our learning approach are that 1) it is principled, 2) simple to implement, 3) easily scalable to settings with many actions and 4) easily composable with existing deep learning approaches. We demonstrate the effectiveness of our approach on learning a deep centralized multi-agent policy, as multi-agent environments naturally have an exponentially large state-action space. In this setting, the latent hierarchy implements a form of multi-agent coordination during exploration and execution (MACE). We demonstrate empirically that MACE can more efficiently learn optimal policies in challenging multi-agent games with a large number (∼ 20) of agents, compared to conventional baselines. Moreover, we show that our hierarchical structure leads to meaningful agent coordination.
This paper proposes an approach to improve exploration in multiagent reinforcement learning by allowing the policies of the individual agents to be conditioned on an external coordination signal \lambda. In order to find such parametrized policies, the approach combines deep RL with a variational inference approach (ELBO optimization). The paper presents an empirical evaluation, which seems encouraging, but which is also somewhat difficult to interpret given the lack of comparison to other state-of-the-art methods. Overall, the paper seems interesting, but (in addition to the not completely convincing empirical evaluation) it has two main weaknesses: lack of clarity and grounding in related literature.

=Issues with clarity=
- "This problem has two equivalent solutions". This is not so clear; depending on the movement of the preys, it might well be that the optimal solution will switch to the other prey in certain cases?
- It is not clear what is really meant by the term "structured exploration". It just seems to mean 'improved'?
- It is not clear that the improvements are due to exploration; my feeling is that it is due to improved statistical strength on a more abstract state feature (which is learned), not unlike:
Geramifard, Alborz, et al. "Online discovery of feature dependencies." Proceedings of the 28th International Conference on Machine Learning (ICML-11). 2011.
However, there is no clear indication that there is an improved exploration policy.
- The problem setting is not quite clear: The paper first introduces "multi-agent RL", which seems to correspond to a "stochastic game" (also "Markov game"), but then moves on to restrict to the "fully cooperative setting" (which would make it a "Multiagent MDP", Boutilier '96). It subsequently says it deals only with deterministic problems (which would reduce the problem further to a learning version of a multiagent classical planning problem), but the experiments do consider stochastically moving preys.
- The paper says the problem is fully observable, but fails to make explicit whether this is *individually* fully observable, or jointly. I am assuming the former, but it is not clear how the agents observe this full state in the experimental evaluation. This is actually a crucial confusion, as it completely changes the interpretation of what the approach does: in the individually observable case, the approach is adding a redundant source of information which is more abstract and thus seems to facilitate faster learning. In the latter case, where agents would have individual observations, it is actually providing the agents with more information. As such, I would really encourage the authors to better define the task they are considering, e.g., by building on the taxonomies of problems that researchers have developed in the community focusing on decentralized POMDPs, such as:
Goldman, Claudia V., and Shlomo Zilberstein. "Decentralized control of cooperative systems: Categorization and complexity analysis." (2004).
- "Compared to the single-agent RL setting, multi-agent RL poses unique difficulties. A central issue is the exploration-exploitation trade-off." That, however, happens to be a central issue in single-agent RL too.
- "Finding the true posteriors P(λ_t | s_t) ∝ P(s_t | λ_t) P(λ_t) is intractable in general" The paper did not explain how this inference task is required to solve the RL problem.
- In general, I found the technical description impossible to follow, even after carefully looking at the appendix.
For instance, (also) there the term P(λ | s) is suddenly introduced without explaining what the term exactly is. Why is the term P(a | λ) not popping up here? That also needs to be optimized, right? I suppose \phi is the parameter vector of the variational approximation, but it is never really stated. The various shorthand notations introduced for clarity do not help at all, but only make the formulas very cryptic.
- The main text is not readable, since definitions, e.g., L(Q_r, \theta, \phi), that are in the appendix are now missing.
- It is not clear to me how the second term of (10) is estimated.
- "Shared (shared actor-critic): agents share a deterministic hidden layer." What kind of layer is this exactly? How does it relate to \lambda?
- "The key difference is that this model does not sample from the shared hidden layer." Why would sampling help? Given that we are dealing with a fully observable multiagent MDP, there is no inherent need to randomize at all? (There should be an optimal deterministic joint policy?)
- "There is shared information between the agents." What information is referred to exactly? Also: it is not quite clear whether, for these domains, cloned would be better than completely independent learners (without shared weights).
- I can't seem to find anywhere what the actual shape is (or type? I am assuming a vector of reals) of the \lambda that is used.
- In figure 5, rhs, what is being shown exactly? What do the colors mean? Why does there seem to be a \lambda *per* agent now?

=Related work=
I think the paper could/should be hugely improved in this respect. The idea of casting MARL as inference has also been considered by:
Learning for Decentralized Control of Multiagent Systems in Large, Partially-Observable Stochastic Environments. M Liu, C Amato, EP Anesta, JD Griffith, JP How - AAAI, 2016
Stick-breaking policy learning in Dec-POMDPs. M Liu, C Amato, X Liao, L Carin, JP How. International Joint Conference on Artificial Intelligence (IJCAI) 2015
Wu, F.; Zilberstein, S.; and Jennings, N. R. 2013. Monte-carlo expectation maximization for decentralized POMDPs. In Proc. of the 23rd Int'l Joint Conf. on Artificial Intelligence (IJCAI-13).
I do not think that these explicitly make use of a mechanism to coordinate the policies, since they address the true Dec-POMDP setting where each agent only gets its own observations, but in the Dec-POMDP literature there is also the notion of a 'correlation device', which is an additional controller (say, corresponding to a dummy agent) whose states can be observed by the other agents and used to condition their actions on:
Bernstein DS, Hansen EA, Zilberstein S. Bounded policy iteration for decentralized POMDPs. In Proceedings of the nineteenth international joint conference on artificial intelligence (IJCAI) 2005 Jun 6 (pp. 52-57).
(And clearly this could be directly included in the aforementioned learning approaches.) This notion of a correlation device also highlights the potential relation to methods to learn/compute correlated equilibria, e.g.:
Greenwald A, Hall K, Serrano R. Correlated Q-learning. In ICML 2003 Aug 21 (Vol. 3, pp. 242-249).
A different connection between MARL and inference can be found in:
Zhang, Xinhua and Aberdeen, Douglas and Vishwanathan, S. V. N., "Conditional Random Fields for Multi-agent Reinforcement Learning", in (New York, NY, USA: ACM, 2007), pp. 1143--1150.
The idea of doing something hierarchical of course makes sense, but also here there are a number of related papers:
- Putting "hierarchical multiagent" in Google Scholar finds works by Ghavamzadeh et al., Saira & Mahadevan, etc.
- Victor Lesser has pursued coordination for better exploration with a number of students.
I suppose that Guestrin et al.'s classical paper:
Guestrin, Carlos, Michail Lagoudakis, and Ronald Parr. "Coordinated reinforcement learning." ICML. Vol. 2. 2002.
would deserve a citation, and since the MARL field is moving ahead fast, an explanation of the differences with COMA:
Counterfactual Multi-Agent Policy Gradients. J Foerster, G Farquhar, T Afouras, N Nardelli, S Whiteson. AAAI 2018
is probably also warranted.
iclr_2018_rkmoiMbCb
Due to the success of residual networks (resnets) and related architectures, shortcut connections have quickly become standard tools for building convolutional neural networks. The explanations in the literature for the apparent effectiveness of shortcuts are varied and often contradictory. We hypothesize that shortcuts work primarily because they act as linear counterparts to nonlinear layers. We test this hypothesis by using several variations on the standard residual block, with different types of linear connections, to build small (100k-1.2M parameter) image classification networks. Our experiments show that other kinds of linear connections can be even more effective than the identity shortcuts. Our results also suggest that the best type of linear connection for a given application may depend on both network width and depth.
This paper performs an analysis of shortcut connections in ResNet-like architectures. The authors hypothesize that the success of shortcut connections comes from the combination of linear and non-linear features at each layer, and propose to substitute the identity shortcuts with convolutional ones (without a non-linearity). This alternative is referred to as the tandem block (a minimal sketch of such a block is given below). Experiments are performed on a variety of image classification tasks such as CIFAR-10, CIFAR-100, SVHN and Fashion MNIST.

The paper is well structured and easy to follow. The main contribution of the paper is the comparison between identity skip connections and skip connections with one convolutional layer. My main concerns are related to the contribution of the paper and the experimental pipeline followed to perform the comparison.

First, the idea of having convolutional shortcuts was already explored in the ResNet paper (see https://arxiv.org/pdf/1603.05027.pdf). Second, given Figures 3-4-5-6, it would seem that the authors are monitoring the performance on the test set during training. Moreover, results in Table 2 are reported as the ones with "the highest test accuracy achieved with each tandem block". Could the authors give more details on how the hyperparameters of the architectures/optimization were chosen and provide more information on how the best results were achieved?

In section 3.5, the authors mention that batchnorm was not useful in their experiments, and was more sensitive to the learning rate value. Do the authors have any explanation/intuition for this behavior?

In section 4, the authors claim that their results are competitive with the best published results for a similar number of parameters. It would be beneficial to add the mentioned best performing models to Table 2 to back this statement. Moreover, it seems that in some cases, such as SVHN, the differences between all the proposed blocks are too minor to draw any strong conclusions. Could those differences be due to, for example, luck in picking the initialization seed? How many times was each experiment run? If more than once, what was the std?

The experiments were performed on relatively shallow networks (8 to 26 layers). I wonder how the conclusions drawn scale to much deeper networks (of 100 layers, for example) and to larger datasets such as ImageNet.

Figures 3-5 are not referenced nor discussed in the text.

Following the design of the tandem blocks proposed in the paper, I wonder why the tandem block B3x3(2,w) was not included.

Finally, it might be interesting to initialize the convolutions in the shortcut connections with the identity, and check what they have learnt at the end of the training.

Some typos that the authors might want to fix:
- backpropegation -> backpropagation (Introduction, paragraph 3)
- dropout is a kind of regularization as well (Introduction, second to last paragraph)
- nad -> and (Sect 3.1. paragraph 1)
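To make the comparison concrete, here is a minimal PyTorch-style sketch of an identity-shortcut block versus a tandem-style block in which the shortcut is itself a linear (no activation) convolution. This is my own illustration under assumed layer shapes and kernel sizes, not the exact blocks from the paper, which varies widths and kernel sizes across its variants.

```python
import torch
import torch.nn as nn

class IdentityShortcutBlock(nn.Module):
    """Standard residual-style block: nonlinear branch plus identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x)) + x  # identity shortcut

class TandemBlock(nn.Module):
    """Tandem-style block: the shortcut is a convolution with no nonlinearity."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.nonlinear_conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.linear_conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Linear and nonlinear features of the same input are combined additively.
        return torch.relu(self.nonlinear_conv(x)) + self.linear_conv(x)
```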
iclr_2018_HkbJTYyAb
Bayesian posterior inference is prevalent in various machine learning problems. Variational inference provides one way to approximate the posterior distribution; however, its expressive power is limited and so is the accuracy of the resulting approximation. Recently, there has been a trend of using neural networks to approximate the variational posterior distribution due to the flexibility of neural network architectures. One way to construct a flexible variational distribution is to warp a simple density into a complex one by normalizing flows, where the resulting density can be analytically evaluated. However, there is a trade-off between the flexibility of the normalizing flow and the computation cost of efficient transformation. In this paper, we propose a simple yet effective architecture of normalizing flows, ConvFlow, based on convolution over the dimensions of the random input vector. Experiments on synthetic and real-world posterior inference problems demonstrate the effectiveness and efficiency of the proposed method.
In this paper, the authors propose a type of Normalizing Flows (Rezende and Mohamed, 2015) for Variational Autoencoders (Kingma and Welling, 2014; Rezende et al., 2014) that they call Convolutional Normalizing Flows. More particularly, it aims at extending the Planar Flow scheme proposed in Rezende and Mohamed (2015). The authors notice an improvement through their method over Normalizing Flows, IWAE with a diagonal Gaussian approximation, and standard Variational Autoencoders.

As noted by AnonReviewer3, several baselines are missing. But the authors partly address that issue in the comment section for the MNIST dataset.

The requirement of h being bijective seems wrong. For example, if h was a rectifier nonlinearity in the zero-derivative regime, the Jacobian determinant of the ConvFlow would be 1 (the change-of-variables formula this hinges on is recalled below).

More importantly, the main issue is that this paper might need to highlight the fundamental difference between their proposed method and Inverse Autoregressive Flow (Kingma et al., 2016). The connectivity pattern proposed for the convolution in order to make the Jacobian determinant computation tractable is exactly the same as in Inverse Autoregressive Flow, and the authors seem to be aware of the order dependence of their architecture, which is very similar to autoregressive models. This presentation of the paper can be misleading concerning the true innovation in the model trained. Proposing ConvFlow as a type of Inverse Autoregressive Flow would be more accurate and would allow the true innovation of the work to be highlighted better. Since this work does not offer additional significant insight over Inverse Autoregressive Flow, its value should lie in demonstrating the efficiency of the proposed method. MNIST and Omniglot seem insufficient for that purpose given currently published work. In the current state, I can't recommend the paper for acceptance.

Danilo Jimenez Rezende, Shakir Mohamed: Variational Inference with Normalizing Flows. ICML 2015
Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra: Stochastic Back-propagation and Variational Inference in Deep Latent Gaussian Models. ICML 2014
Diederik P. Kingma, Max Welling: Auto-Encoding Variational Bayes. ICLR 2014
Diederik P. Kingma, Tim Salimans, Rafal Józefowicz, Xi Chen, Ilya Sutskever, Max Welling: Improving Variational Autoencoders with Inverse Autoregressive Flow. NIPS 2016
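For reference, here is the change-of-variables identity underlying the Jacobian-determinant discussion above, written generically (this is the standard normalizing-flow formula, not a restatement of the paper's specific equations). For an invertible transformation z_k = f_k(z_{k-1}) applied K times to a sample z_0 \sim q_0,

\log q_K(z_K) \;=\; \log q_0(z_0) \;-\; \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right| .

With an autoregressive / triangular connectivity pattern of the kind discussed above, each Jacobian is triangular, so the determinant reduces to the product of its diagonal entries; a nonlinearity sitting in its zero-derivative regime leaves those diagonal entries at 1, consistent with the observation about the rectifier made in the review.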
iclr_2018_HkL7n1-0b
WASSERSTEIN AUTO-ENCODERS We propose the Wasserstein Auto-Encoder (WAE)-a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE) (Kingma & Welling, 2014). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE) (Makhzani et al., 2016). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
This very well written paper covers the span between W-GAN and VAE. For a reviewer who is not an expert in the domain, it reads very well, and would have been of tutorial quality if space had allowed for more detailed explanations. The appendices are very useful, and tutorial-paper material (especially A). While I am not sure the description would be enough to reproduce the results, and no code is provided, every aspect of the architecture, if not described, is referred to as similar to some previous work. There are also some notation shortcuts (not explained) in the proofs of the theorems that can lead to initial confusion, but they turn out to be non-ambiguous. One that could be improved is P(P_X, P_G), where one loses the fact that the second random variable is Y.

This work contains plenty of novel material, which is clearly compared to previous work:
- The main consequence of the use of the Wasserstein distance is the surprisingly simple and useful Theorem 1. I could not verify its novelty, but this seems to be a great contribution.
- Blending GANs and auto-encoders has been tried in the past, but the authors claim better theoretical foundations that lead to solutions that do not require min-max.
- The use of MMD in the context of GANs has also been tried. The authors claim that their use of it in the latent space makes it more practical (a rough sketch of such a latent-space MMD penalty is given below).

The experiments are very convincing, both numerically and visually.

Source of confusion: in Algorithms 1 and 2, \tilde{z} is "sampled" from Q_TH(Z|xi), so one is led to believe that this is the sampling process as in VAEs, while in reality Q_TH(Z|xi) is deterministic in the experiments.
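As an illustration of what a latent-space divergence penalty of this kind can look like in practice, here is a minimal sketch of a (biased) RBF-kernel MMD^2 estimate between encoded codes and samples from the prior. The kernel choice, the names (`codes`, `prior_samples`, `sigma`) and the biased estimator are my own simplifications, not necessarily what the paper uses.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared distances between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(codes, prior_samples, sigma=1.0):
    # Biased MMD^2 estimate; the unbiased version excludes diagonal terms.
    k_xx = rbf_kernel(codes, codes, sigma)
    k_yy = rbf_kernel(prior_samples, prior_samples, sigma)
    k_xy = rbf_kernel(codes, prior_samples, sigma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()
```

Weighted by a coefficient and added to the reconstruction cost, this kind of term pushes the encoded training distribution towards the prior, which is the regularizer the abstract describes.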
iclr_2018_rJFOptp6Z
Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network; then the student network can be competitive with the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, the classification task alone is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-WebFace and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.
Summary: The manuscript presents experiments on distilling knowledge from a face classification model to student models for face alignment and verification. By selecting a good initialization strategy and guidelines for selecting appropriate targets for non-classification tasks, the authors achieve improved performance compared to networks trained from scratch or with different initialization strategies.

Review: The paper seems to be written in a rush. I am not sure about the degree of novelty, as pretraining with domain-related data instead of general-purpose ImageNet data has been done before; Liu et al. (2014), for example, pretrain a CNN on face classification to be used for emotion recognition. Admittedly, knowledge transfer from classification to regression and retrieval tasks is not very common yet, except via pretraining on ImageNet, followed by fine-tuning on the target task. My main concern is with the presentation of the paper. It is very hard to follow! Two reasons are that it has too many grammatical mistakes and that very often a "simple trick" or a "common trick" is mentioned instead of using a descriptive name for the method used. Here are a few points that might help improving the work:
1) Many kinds of empty phrases are repeated all over the paper, e.g. the reader is teased with mention of a "simple trick" or a "common trick". I don't think the phrase "breaking point", which is repeated a couple of times, is correctly used (see https://www.merriam-webster.com/dictionary/breaking%20point for a definition).
2) Section 4.1 does not explain the initialization but just describes motivation and notation.
3) Clarity of the approach: Using the case of alignment as an example, do you first pretrain both the teacher and student on classification, then finetune the teacher on alignment before the distillation step?
4) Table 1 mentions Fitnets, but cites Ba & Caruana (2014) instead of Romero et al. (2015).
5) The "experimental trick" you mention for setting alpha and beta seems to be just validation: comparing different settings and picking the one yielding the highest improvements. On what partition of the data are you doing this hyperparameter selection?
6) The details of the architectures are missing, e.g. exactly what changes do you make to the architecture when you change the task from classification to alignment or verification? What exactly is the "hidden layer" in that architecture?
7) Minor: Usually there is a space before parentheses (many citations don't have one).

In its current form, I cannot recommend the manuscript for acceptance. I get the impression that the experimental work might be of decent quality, but the manuscript fails to convey important details of the method and of the experimental setup, and falls short in the interpretation of the results. The overall quality of the write-up has to be significantly improved.

References:
Liu, Mengyi, Ruiping Wang, Shaoxin Li, Shiguang Shan, Zhiwu Huang, and Xilin Chen. "Combining multiple kernel methods on riemannian manifold for emotion recognition in the wild." In Proceedings of the 16th International Conference on Multimodal Interaction, pp. 494-501. ACM, 2014.
iclr_2018_rJl63fZRb
Published as a conference paper at ICLR 2018 PARAMETRIZED HIERARCHICAL PROCEDURES FOR NEURAL PROGRAMMING Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism. Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs. The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability. To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs). A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller. We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs. We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations.
I thank the authors for their updates and clarifications. I stand by my original review and score. I think their method and their evaluation has some major weaknesses, but I think that it still provides a good baseline to force work in this space towards tasks which can not be solved by simpler models like this. So while I'm not super excited about the paper I think it is above the accept threshold. -------------------------------------------------------------------------- This paper extends an existing thread of neural computation research focused on learning resuable subprocedures (or options in RL-speak). Instead of simply input and output examples, as in most of the work in neural computation, they follow in the vein of the Neural Programmer-Interpreter (Reed and de Freitas, 2016) and Li et. al., 2017, where the supervision contains the full sequence of elementary actions in the domain for all samples, and some samples also contain the hierarchy of subprocedure calls. The main focus of their work is learning from fewer fully annotated samples than prior work. They introduce two new ideas in order to enable this: 1. They limit the memory state of each level in the program heirarchy to simply a counter indicating the number of elementary actions/subprocedure calls taken so far (rather than a full RNN embedded hidden/cell state as in prior work). They also limit the subprocedures such that they do not accept any arguments. 2. By considering this very limited set of possible hidden states, they can compute the gradients using a dynamic program that seems to be more accurate than the approximate dynamic program used in Li et. al., 2017. The main limitation of the work is this extremely limited memory state, and the lack of arguments. Without arguments, everything that parameterizes the subprocedures must be in the visible world state. In both of their domains, this is true, but this places a significant limitation on the algorithms which can be modeled with this technique. Furthermore, the limited memory state means that the only way a subprocedure can remember anything about the current observation is to call a different subprocedure. Again, their two evalation tasks fit into this paradigm, but this places very significant limitations on the set of applicable domains. I would have like to see more discussion on how constraining these limitations would be in practice. For example, it seems it would be impossible for this architecture to perform the Nanocraft task if the parameters of the task (width, height, etc.) were only provided in the first observation, rather than every observation. None-the-less I think this work is an important step in our understanding of the learning dynamics for neural programs. In particular, while the RNN hidden state memory used by the prior work enables the learning of more complicted programs *in theory*, this has not been shown in practice. So, it's possible that all the prior work is doing is learning to approixmate a much simpler architecture of this form. Specifically, I think this work can act as a great base-line by forcing future work to focus on domains which cannot be easily solved by a simpler architecture of this form. This limitation will also force the community to think about which tasks require a more complicated form of memory, and which can be solved with a very simple memory of this form. I also have the following additional concerns about the paper: 1. I found the current explanation of the algorithm to be very difficult to understand. 
It's extremely difficult to understand the core method without reading the appendix, and even with the appendix I found the explanation of the level-by-level decomposition to be too terse. 2. It's not clear how their gradient approximation compares to the technique used by Li et al. They obviously get better results on the addition and Nanocraft domains, but I would have liked a clearer explanation and/or some experiments providing insights into what enables these improvements (or at least an admission by the authors that they don't really understand what enabled the performance improvements).
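To make the restriction discussed above concrete (the per-level memory being only a program counter), here is a minimal Python sketch of how a PHP-style controller could execute. The table-based policy, the procedure names, and the observe/act callbacks are hypothetical illustrations, not the paper's actual parametrization (which uses neural networks).

```python
def run_php(php_table, name, observe, act, max_steps=100):
    """Execute one PHP. php_table[name] is a callable mapping
    (counter, observation) to one of ('act', action), ('call', sub_php_name),
    or ('return', None). The integer counter is the only per-level memory,
    mirroring the limitation discussed in the review."""
    counter = 0
    for _ in range(max_steps):
        op, arg = php_table[name](counter, observe())
        if op == 'act':
            act(arg)                                # elementary action
        elif op == 'call':
            run_php(php_table, arg, observe, act)   # invoke sub-procedure
        else:                                       # 'return' to the caller
            return
        counter += 1                                # the entire memory update
```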
iclr_2018_H1WgVz-AZ
Published as a conference paper at ICLR 2018 LEARNING APPROXIMATE INFERENCE NETWORKS FOR STRUCTURED PREDICTION ABSTRACT Structured prediction energy networks (SPENs; Belanger & McCallum 2016) use neural network architectures to define energy functions that can capture arbitrary dependencies among parts of structured outputs. Prior work used gradient descent for inference, relaxing the structured output to a set of continuous variables and then optimizing the energy with respect to them. We replace this use of gradient descent with a neural network trained to approximate structured argmax inference. This "inference network" outputs continuous values that we treat as the output structure. We develop large-margin training criteria for joint training of the structured energy function and inference network. On multi-label classification we report speed-ups of 10-60x compared to (Belanger et al., 2017) while also improving accuracy. For sequence labeling with simple structured energies, our approach performs comparably to exact inference while being much faster at test time. We then demonstrate improved accuracy by augmenting the energy with a "label language model" that scores entire output label sequences, showing it can improve handling of long-distance dependencies in part-of-speech tagging. Finally, we show how inference networks can replace dynamic programming for test-time inference in conditional random fields, suggestive for their general use for fast inference in structured settings.
= Quality = Overall, the authors do a good job of placing their work in the context of related research, and employ a variety of non-trivial technical details to get their methods to work well. = Clarity = Overall, the exposition regarding the method is good. I found the setup for the sequence tagging experiments confusing, though. See more comments below. = Originality / Significance = The paper presents a clever idea that could help make SPENs more practical. The paper's results also suggest that we should be thinking more broadly about how to use complicated structured distributions as teachers for model compression. = Major Comment = I'm concerned by the quality of your results and the overall setup of your experiments. In particular, the principal contribution of the sequence tagging experiments seems to be different from what is advertised earlier on in the paper. Most of your empirical success is obtained by taking a pretrained CRF energy function and using this as a teacher model to train a feed-forward inference network. You have very few experiments using a SPEN energy function parametrization that doesn't correspond to a CRF, even though you could have used an arbitrary convnet, RNN, etc. The one exception is when you use the tag language model. This is a good idea, but it is pretrained, not trained using the saddle-point objective you introduce. In fact, you don't have any results demonstrating that the saddle-point approach is better than simpler alternatives. It seems that you could have written a very different paper about model compression with CRFs that would have been very interesting, and you could have used many of the same experiments. It's unclear why SPENs are so important. The idea of amortizing inference is perhaps more general. My recommendation is that you either rebrand the paper to be more about general methods for amortizing structured prediction inference using model compression or do more fine-grained experiments with SPENs that demonstrate empirical gains that leverage their flexible deep-network-based energy functions. = Minor Comments = * You should mention "Energy-Based GANs". * I don't understand "This approach performs backpropagation through each step of gradient descent, permitting more stable training but also evidently more overfitting." Why would it overfit more? Simply because training was more stable? Couldn't you prevent overfitting by regularizing more? * You spend too much space talking about specific hyperparameter ranges, etc. This should be moved to the appendix. You should also add a short summary of the TLM architecture to the main paper body. * Regarding your footnote discussing using a positive vs. negative sign on the entropy regularization term, I recommend checking out "Regularizing neural networks by penalizing confident output distributions." * You should add citations for the statement "In these and related settings, gradient descent has started to be replaced by inference networks." * I didn't find Table 1 particularly illuminating. All of the approaches seem to perform about the same. What conclusions should I make from it? * Why not use KL divergence as your \Delta function? * Why are the results in Table 5 on the dev data? * I was confused by Table 4. First of all, it took me a very long time to figure out that the middle block of results corresponds to taking a pretrained CRF energy and amortizing inference by training an inference network. This idea of training with a standard loss (conditional log lik.)
and then amortizing inference post-hoc was not explicitly introduced as an alternative to the saddle point objective you put forth earlier in the paper. Second, I was very surprised that the inference network outperformed Viterbi (89.7 vs. 89.1 for the same CRF energy). Why is this? * I'm confused by the difference between Table 6 and Table 4? Why not just include the TLM results in Table 4?
iclr_2018_HytSvlWRZ
Over the past decade a wide spectrum of machine learning models have been developed to model neurodegenerative diseases, associating biomarkers, especially non-intrusive neuroimaging markers, with key clinical scores measuring the cognitive status of patients. Multi-task learning (MTL) has been extensively explored in these studies to address challenges associated with high dimensionality and small cohort size. However, most existing MTL approaches are based on linear models and suffer from two major limitations: 1) they cannot explicitly consider upper/lower bounds in these clinical scores; 2) they lack the capability to capture complicated non-linear effects among the variables. In this paper, we propose the Subspace Network, an efficient deep modeling approach for non-linear multi-task censored regression. Each layer of the subspace network performs a multi-task censored regression to improve upon the predictions from the last layer via sketching a low-dimensional subspace to perform knowledge transfer among learning tasks. Under mild assumptions, for each layer the parametric subspace can be recovered using only one pass over the training data. Empirical results demonstrate that the proposed subspace network quickly picks up correct parameter subspaces, and outperforms state-of-the-art methods in predicting neurodegenerative clinical scores using information in brain imaging.
This work proposes a multi-task learning framework for the modeling of clinical data in neurodegenerative diseases. Unlike previous applications of machine learning in neurodegeneration modeling, the proposed approach models the clinical data accounting for the bounded nature of cognitive test scores. The framework is represented by a feed-forward deep architecture analogous to a residual network. At each layer a low-rank constraint is enforced on the linear transformation, while the cost function is specified in order to differentially account for the bounds of the predicted variables. The idea of explicitly accounting for the boundedness of clinical scores is interesting, although the assumption of the proposed model is still incorrect: clinical scores are defined on discrete scales. For this reason the Gaussian assumption for the cost function used in the method is still not appropriate for the proposed application. Furthermore, although this is the main methodological drive of the work, the paper does not show evidence of improved predictive performance and generalisation when accounting for the boundedness of the regression targets. The proposed algorithm is also compared mostly against linear methods, and the authors could have provided a more rigorous benchmark including standard non-linear prediction approaches (e.g. random forests, NN, GP, …). Overall, the proposed method seems to provide little added value over the large number of predictive methods proposed so far for prediction in neurodegenerative disorders. Moreover, the proposed experimental paradigm appears flawed. What is the interest of predicting baseline (or 6 months at best) cognitive scores (relatively low-cost and part of any routine clinical assessment) from brain imaging data (high-cost and not routine)? Other remarks. - In sections 2.2 and 4 there is some confusion between iteration indices and sample indices “i”. - Contrary to what is stated in the introduction, the loss functions proposed on page 3 (first two formulas) only account for the lower bound of the predicted variables. - Figure 2, synthetic data. The scale of the improvement of the subspace difference is quite tiny, on the order of 1e-2 when compared to U, and of 1e-5 across iterations. The loss function of Figure 2.b also does not show a strong improvement across iterations, while indicating a rather large instability of the optimisation procedure. These aspects may be a sign of convergence issues. - The dimensionality of the subspace representation importantly depends on the choice of the rank R of U and V. This is a crucial parameter that is, however, neither discussed nor analysed in the paper. - The synthetic example on page 7 is quite misleading and potentially biased towards the proposed model. The authors are generating the synthetic data according to the model, and it is thus not surprising that they managed to obtain the best performance. In particular, due to the nonlinear nature of (1), all the competing linear models are expected to perform poorly in this kind of setting. - The computation time for the linear model shown in Table 3 is quite surprising (~20 minutes for linear regression on 5k observations). Is there anything that I am missing?
iclr_2018_HkUR_y-RZ
Published as a conference paper at ICLR 2018 SEARNN: TRAINING RNNS WITH GLOBAL-LOCAL LOSSES We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.
This paper proposes an adaptation of the SEARN algorithm to RNNs for generating text. In order to do so, they discuss various issues on how to scale the approach to large output vocabularies by sampling which actions the algorithm should explore. Pros: - Good literature review. But the future work on bandits is already happening: Paper accepted at ACL 2017: Bandit Structured Prediction for Neural Sequence-to-Sequence Learning. Julia Kreutzer, Artem Sokolov, Stefan Riezler. Cons: - The key argument of the paper is that SEARNN is a better IL-inspired algorithm than the previously proposed ones. However, there is no direct comparison, either theoretical or empirical, against them. In the examples on spelling using the dataset of Bahdanau et al. 2017, no comparison is made against their actor-critic method. Furthermore, given its simplicity, I would expect a comparison against scheduled sampling. - A lot of important experimental details are in the appendices and they differ among experiments. For example, while mixed rollins are used in most experiments, reference rollins are used in MT, which is odd since it is a bad option theoretically. Also, no details are given on how the mixing in the rollouts was tuned. Finally, in the NMT comparison, while it is stated that a similar architecture is used in order to compare fairly against previous work, this is not the case eventually, as it is acknowledged at least in the case of MIXER. I would have expected the same encoder-decoder architecture to have been used for all the methods considered. - the two losses introduced are not really new. The log-loss is just MLE, only assuming that instead of a fixed expert that always returns the same target, we have a dynamic one. Note that the notion of dynamic expert is present in the SEARN paper too. Goldberg and Nivre just adapted it to transition-based dependency parsing. Similarly, since the KL loss is the same as XENT, why give it a new name? - the top-k sampling method is essentially the same as the targeted exploration of Goodman et al. (2016) which the authors cite. Thus it is not a novel contribution. - Not sure I see the difference between the stochastic nature of SEARNN and the online one of LOLS mentioned in section 7. They both could be mini-batched similarly. Also, not sure I see why SEARNN can be used on any task, in comparison to other methods. They all seem to be equally capable. Minor comments: - Figure 1: what is the difference between "cost-sensitive loss" and just "loss"? - local vs sequence-level losses: the point in Ranzato et al and Wiseman & Rush is that the loss they optimize (BLEU/ROUGE) does not decompose over the predictions of the RNNs. - Can't see why SEARNN can help with the vanishing gradient problem. They seem rather orthogonal.
iclr_2018_rJm7VfZA-
Published as a conference paper at ICLR 2018 LEARNING PARAMETRIC CLOSED-LOOP POLICIES FOR MARKOV POTENTIAL GAMES Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP, which is a single-objective problem, is usually much simpler than solving the original set of coupled OCPs that form the game, which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximate an exact variational NE of the game.
Summary: This paper studies multi-agent sequential decision making problems that belong to the class of games called Markov Potential Games (MPG). It considers finding the optimal policy within a parametric space of policies, which can be represented by a function approximator such as a DNN. A main contribution of this work is that it shows that for MPG, instead of solving a multi-objective optimization problem (Eq. 8), which is difficult, it is sufficient to solve a scalar-valued optimization problem (Eq. 16). Theorem 1 shows that under certain conditions on the reward function, the game is an MPG. It also shows how one might find the potential function J, which is used in the single-objective optimization problem. Finding J can be computationally expensive in general. So the paper provides some properties that make finding J easier. For example, obtaining J is easy if we have a cooperative game (Corollary 1) or the reward can be decomposed/decoupled in a certain way (Theorem 2). Evaluation: This is a well-written paper that studies an important problem, but I don’t think ICLR is the right venue for it. There is not much about (representation) learning in this work. The use of TRPO as an RL algorithm in the Experiment does not play a critical role in this work either. Aside this general comment, I have several other more specific comments. - There is a significant literature on the use of RL for multi-agent systems. The paper does not do a good job comparing and positioning with respect to them. For example, refer to the following recent paper and references therein: Perolat, Strub, et al., “Learning Nash Equilibrium for General-Sum Markov Games from Batch Data,” AISTATS, 2017. - If I understand correctly, the policies are considered to be functions from the state of the system to a continuous action. So it is a function, and not a probability distribution. This means that the space of considered policies corresponds to the space of pure strategies. We know that for some games, the Nash equilibrium is a mixed strategy. Isn’t this a big limitation of this approach? - I am unclear how this approach can handle stochastic dynamics. For example, the optimization (P1) depends on the realization of (theta_i)_i. But this is not available. The dependence is not only in the objective, but also in the constraints, which makes things more difficult. I understand that in the experiments the authors used two models (either the average of random realization, or solving a different optimization for each realization), but none of them is an appropriate solution for a stochastic system. - How large is the MPG class? Is there any structural result that positions them compared to other Markov Games? For example, is the class of zero-sum games an example of MPG? - There is a comment close to the end of Section 5 that when there is no prior knowledge of the dynamics and the reward, one can use the proposed approach to learn PCL-NE by using any DRL. This is questionable because if the reward is not known, the conditions of Theorems 1 or 2 cannot be verified, so it is not possible to use (P1) instead of (G2). - What comments can you make about the computational complexity? It seems that depending on the dynamics, the optimization problem P1 can be non-convex, hence computationally difficult to solve. - How is the work related to the following paper? Macua, Zazo, Zazo, “Learning in Constrained Stochastic Dynamic Potential Games,” ICASSP, 2016 ====== I updated the score based on the authors' rebuttal.
iclr_2018_H1mCp-ZRZ
ACTION-DEPENDENT CONTROL VARIATES FOR POLICY OPTIMIZATION VIA STEIN'S IDENTITY Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer from large variance in policy gradient estimation, which leads to poor sample efficiency during training. In this work, we propose a control variate method to effectively reduce variance for policy gradient methods. Motivated by Stein's identity, our method extends the previous control variate methods used in REINFORCE and advantage actor-critic by introducing more general action-dependent baseline functions. Empirical studies show that our method significantly improves the sample efficiency of the state-of-the-art policy gradient approaches.
The paper proposes action-dependent baselines for reducing variance in policy gradient, through a derivation based on Stein’s identity and control functionals. The method relates closely to prior work on action-dependent baselines, but explores in particular on-policy fitting and a few other design choices that empirically improve the performance. A criticism of the paper is that deriving Eq. 8 does not require the Stein’s identity/control functionals literature, since it can be derived similarly to a linear control variate, and it has also previously been discussed in IPG [Gu et al., 2017] as a reparameterizable control variate. The derivation through Stein’s identity does not seem to provide additional insights/algorithm designs beyond direct derivation through the reparameterization trick. The empirical results appear promising, in particular in comparison with Q-Prop, which fits the Q-function using off-policy TD learning. However, the discussion on the causes of the difference should be elaborated much more, as it appears there are substantial differences besides on-policy/off-policy fitting of the Q-function, such as: - FitLinear fits a linear Q (through a parameterization based on linearization of Q) using on-policy learning, rather than fitting a nonlinear Q and then linearizing around the mean action at application time. A closer comparison would be to use the same locally linear Q-function for off-policy learning in Q-Prop. - The use of an on-policy fitted value baseline within the Q-function parameterization during on-policy fitting is nice. A similar comparison should be done with off-policy fitting in Q-Prop. I wonder if on-policy fitting of Q can be elaborated more. Specifically, on-policy fitting of V seems to require a few design details to achieve the best performance [GAE, Schulman et al., 2016]: fitting on the previous batch instead of the current batch to avoid overfitting (this is expected for your method as well, since by fitting to the current batch the control variate then depends nontrivially on the samples that are being applied), and possible use of trust-region regularization to prevent V from changing too much across iterations. The paper presents promising results with direct on-policy fitting of an action-dependent baseline, which is encouraging since it does not require long training iterations as in the off-policy fitting in Q-Prop. As discussed above, the authors are encouraged to elaborate on other potential causes of the performance differences. The experimental results are presented well for a range of Mujoco tasks. Pros: - Simple, effective method that can readily be incorporated into any on-policy PG method without a significant increase in computational time - Good empirical evaluation Cons: - The name Stein control variate seems misleading since the algorithm/method does not rely on a derivation through Stein’s identity and does not inherit novel insights from this derivation.
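For readers unfamiliar with the construction, the action-dependent control-variate estimator under discussion has, up to notation, the generic form below. This is the standard reparameterization-based decomposition rather than the paper's exact Eq. 8, and the symbols (the learned baseline φ and the return estimate Q̂) are my own notation.

```latex
\nabla_\theta J(\theta)
  \;=\; \mathbb{E}_{s,\,a\sim\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a\mid s)\,
        \big(\hat{Q}(s,a)-\phi(s,a)\big)\right]
  \;+\; \mathbb{E}_{s}\!\left[\nabla_\theta\,\mathbb{E}_{a\sim\pi_\theta}\!\left[\phi(s,a)\right]\right].
```

Taking φ(s, a) = V(s) recovers the usual advantage-based estimator; the second term is computed analytically or via the reparameterization trick, which is where the "reparameterizable control variate" view comes from.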
iclr_2018_BJk7Gf-CZ
Published as a conference paper at ICLR 2018 GLOBAL OPTIMALITY CONDITIONS FOR DEEP NEURAL NETWORKS We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.
Summary: The paper gives theoretical results regarding the existence of local minima in the objective function of deep neural networks. In particular: - in the case of deep linear networks, they characterize whether a critical point is a global optimum or a saddle point by a simple criterion. This improves over recent work by Kawaguchi, who showed that each critical point is either a global minimum or a saddle point (i.e., none is a local minimum), by relaxing some hypotheses and adding a simple criterion to determine which case holds. - in the case of nonlinear networks, they provide a sufficient condition for a solution to be a global optimum, using a function space approach. Quality: The quality is very good. The paper is technically correct and nontrivial. All proofs are provided and easy to follow. Clarity: The paper is very clear. Related work is clearly cited, and the novelty of the paper is well explained. The technical proofs of the paper are in appendices, making the main text very smooth. Originality: The originality is weak. It extends a series of recent papers, which are correctly cited. There is some originality in the proof, which differs from recent related papers. Significance: The result is not completely surprising, but it is significant given the lack of theory and understanding of deep learning. Although the model is not really relevant for deep networks used in practice, the main result closes a question about the characterization of critical points in simplified models of neural networks, which is certainly interesting for many people.
iclr_2018_B1ZvaaeAZ
Published as a conference paper at ICLR 2018 WRPN: WIDE REDUCED-PRECISION NETWORKS For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN - wide reduced-precision networks. We report results and show that the WRPN scheme is better than previously reported accuracies on the ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks.
This is a well-written paper with good comparisons to a number of earlier approaches. It focuses on an approach to get similar accuracy at lower precision, in addition to cutting down the compute costs. Results with 2-bit activations and 4-bit weights seem to match baseline accuracy across the models listed in the paper. Originality This seems to be the first paper that consistently matches baseline results below int-8 precision, and it shows a promising future direction. Significance Going down to below 8 bits and potentially all the way down to binary (1-bit weights and activations) is a promising direction for future hardware design. It has the potential to give good results at lower compute and, more significantly, to provide a lower-power option; power is the biggest constraint on compute today. Pros: - Positive results with low precision (4-bit, 2-bit and even 1-bit) - Moving the state of the art in low precision forward - Strong potential impact, especially in constrained power environments (but not limited to them) - Uses the same hyperparameters as the original training, making the process of using this much simpler. Cons/Questions - They mention not quantizing the first and last layer of every network. How much does that impact the overall compute? - Is there a certain width where 1-bit activations and weights would match the accuracy of the baseline model? This could be interesting for the low-power case, even if the "effective compute" is larger than the baseline.
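To illustrate the kind of k-bit uniform quantization being discussed, a small sketch follows. This is a generic quantizer under assumed value ranges (clipped activations in [0, 1], weights mapped into [-1, 1]); it is not necessarily WRPN's exact rounding scheme.

```python
import torch

def quantize(x, k):
    """Uniformly quantize values in [0, 1] to k bits (2**k levels).
    Generic sketch; the paper's exact scheme may differ."""
    levels = 2 ** k - 1
    return torch.round(x.clamp(0, 1) * levels) / levels

a = torch.rand(4, 16, 8, 8)                 # activations after a clipped ReLU
a_q = quantize(a, k=2)                      # 2-bit activations -> 4 levels
w = torch.randn(32, 16, 3, 3).tanh()        # weights squashed into [-1, 1]
w_q = 2 * quantize((w + 1) / 2, k=4) - 1    # 4-bit weights mapped back to [-1, 1]
```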
iclr_2018_SJzRZ-WCZ
Published as a conference paper at ICLR 2018 LATENT SPACE ODDITY: ON THE CURVATURE OF DEEP GENERATIVE MODELS Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.
The paper makes an important observation: the generating function of a generative model (deep or not) induces a (stochastic) Riemannian metric tensor on the latent space. This metric might be the correct way to measure distances in the latent space, as opposed to the Euclidean distance. While this seems obvious, I had actually always thought of the latent space as "unfolding" the data manifold as it exists in the output space. The authors propose a different view which is intriguing; however, they do not, to the best of my understanding, give a definitive theoretical reason why the induced Riemannian metric is the correct choice over the Euclidean metric. The paper correctly identifies an important problem with the way most deep generative models evaluate variance. However, the solution proposed seems ad-hoc and not particularly related to the other parts of the paper. While the proposed variance estimation (using RBF networks) might work in some cases, I would love to see (perhaps in future work) a much more rigorous treatment of the subject. Pros: 1. Interesting observation and mathematical development of a Riemannian metric on the latent space. 2. Good observation about the different roles of the mean and the variance in determining the geodesics: they tend to avoid areas of high variance. 3. Intriguing experiments and a good effort at visualizing and explaining them. I especially appreciate the interpolation and random walk experiments. These are hard to evaluate objectively, but the results do hint at the phenomena the authors describe when comparing Euclidean to Riemannian metrics in the latent space. Cons: 1. The part of the paper proposing new variance estimators is ad-hoc and is not evaluated rigorously, for example by comparing it to other methods in terms of calibration. Specific comments: 1. To the best of my understanding eq. (2) does not imply that the natural distance in Z is locally adaptive. I think of eq (2) as *defining* a type of distance on Z, that may or may not be natural. One could equally argue that the Euclidean distance on z is natural, and that this distance is then pushed forward by f to some induced distance over X. 2. In the definition of paths \gamma, shouldn't they be parametrized by arc-length (also known as unit-speed)? How should we think of the curve \gamma(t^2) for example? 3. In Theorem 2, is the term "input dimension" appropriate? Perhaps "data dimension" is better? 4. I did not fully understand the role of the LAND model. Is this a model fit AFTER fitting the generative model, which is then used to cluster Z like a GMM? I would appreciate a clarification about the context of this model.
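For reference, the deterministic version of the induced metric discussed above, writing f for the generator and J_f for its Jacobian, is the pull-back metric, and latent curve lengths are measured accordingly (roughly; the paper's stochastic setting works with an expected metric that also involves the variance network):

```latex
M(z) \;=\; J_f(z)^{\top} J_f(z),
\qquad
L(\gamma) \;=\; \int_0^1 \sqrt{\dot{\gamma}(t)^{\top}\, M\!\big(\gamma(t)\big)\, \dot{\gamma}(t)}\;\mathrm{d}t .
```

Geodesics under M(z) are what replace straight Euclidean lines in the latent space when computing interpolants and distances.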
iclr_2018_rk6cfpRjZ
Published as a conference paper at ICLR 2018 LEARNING INTRINSIC SPARSE STRUCTURES WITHIN LONG SHORT-TERM MEMORY Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59× speedup without any loss of perplexity on a language modeling task on the Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine question answering on the SQuAD dataset. Our approach is successfully extended to non-LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available.
The authors propose a technique to compress LSTMs in RNNs by using a group Lasso regularizer which results in structured sparsity, by eliminating individual hidden layer inputs at a particular layer. The authors present experiments on unidirectional and bidirectional LSTM models which demonstrate the effectiveness of this method. The proposed techniques are evaluated on two models: a fairly large LSTM with ~66.0M parameters, as well as a more compact LSTM with ~2.7M parameters, which can be sped up significantly through compression. Overall this is a clearly written paper that is easy to follow, with experiments that are well motivated. To the best of my knowledge most previous papers in the area of RNN compression focus on pruning or compression of the node outputs/connections, but do not focus as much on reducing the computation/parameters within an RNN cell. I only have a few minor comments/suggestions which are listed below: 1. It is interesting that the model structure where the number of parameters is reduced to the number of ISSs chosen from the proposed procedure does not attain the same performance as when training with a larger number of nodes, with the group lasso regularizer. It would be interesting to conduct experiments for a range of \lambda values: i.e., to allow for different degrees of compression, and then examine whether the model trained from scratch with the “optimal” structure achieves performance closer to the ISS-based strategy, for example, for smaller amounts of compression, this might be the case? 2. In the experiment, the authors use a weaker dropout when training with ISS. Could the authors also report performance for the baseline model if trained with the same dropout (but without the group LASSO regularizer)? 3. The colors in the figures: especially the blue vs. green contrast is really hard to see. It might be nicer to use lighter colors, which are more distinct. 4. The authors mention that the thresholding operation to zero-out weights based on the hyperparameter \tau is applied “after each iteration”. What is an iteration in this context? An epoch, a few mini-batch updates, per mini-batch? Could the authors please clarify. 5. Clarification about the hyperparameter \tau used for sparsification: Is \tau determined purely based on the converged weight values in the model when trained without the group LASSO constraint? It would be interesting to plot a histogram of weight values in the baseline model, and perhaps also after the group LASSO regularized training. 6. Is the same value of \lambda used for all groups in the model? It would be interesting to consider the effect of using stronger sparsification in the earlier layers, for example. 7. Section 4.2: Please explain what the exact match (EM) and F1 metrics used to measure performance of the BIDAF model are, in the text. Minor Typographical/Grammatical errors: - Sec 1: “... in LSTMs meanwhile maintains the dimension consistency.” → “... in LSTMs while maintaining the dimension consistency.” - Sec 1: “... is public available” → “is publically available” - Sec 2: Please rephrase: “After learning those structures, compact LSTM units remain original structural schematic but have the sizes reduced.” - Sec 4.1: “The exactly same training scheme of the baseline ...” → “The same training scheme as the baseline ...”
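For concreteness, the group Lasso penalty and the post-step thresholding discussed in this review look roughly as follows. The exact collection of weights that forms one ISS group follows the paper and is only caricatured here; all names are placeholders.

```python
import torch

def group_lasso_penalty(weight_groups, lam):
    """Sum of L2 norms of each group, scaled by the strength lam.
    Each element of weight_groups is assumed to collect all weights tied to one
    hidden unit (one ISS component) -- a simplification of the paper's grouping."""
    return lam * sum(g.norm(p=2) for g in weight_groups)

def threshold_groups(weight_groups, tau):
    """Zero out an entire group whenever its L2 norm falls below tau,
    mirroring the thresholding step the review asks about."""
    with torch.no_grad():
        for g in weight_groups:
            if g.norm(p=2) < tau:
                g.zero_()
```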
iclr_2018_BJInMmWC-
Generative image models have made significant progress in the last few years, and are now able to generate low-resolution images which sometimes look realistic. However, the state-of-the-art models utilize fully entangled latent representations where small changes to a single neuron can affect every output pixel in relatively arbitrary ways, and different neurons have possibly arbitrary relationships with each other. This limits the ability of such models to generalize to new combinations or orientations of objects as well as their ability to connect with more structured representations such as natural language, without explicit strong supervision. In this work we explore the synergistic effect of using partial natural language scene descriptions to help disentangle the latent entities visible in an image. We present a novel neural network architecture called Generative Entity Networks, which jointly generates both the natural language descriptions and the images from a set of latent entities. Our model is based on the variational autoencoder framework and makes use of visual attention to identify and characterise the visual attributes of each entity. Using the Shapeworld dataset, we show that our representation both enables a better generative model of images, leading to higher-quality image samples, and creates more semantically useful representations that improve performance over purely discriminative models on a simple natural language yes/no question answering task.
**Summary** The paper extends the attend, infer, repeat generative model of Eslami, 2016 to handle "visual attribute descriptions". This straightforward extension is claimed to improve image quality and shown to improve performance on a previously introduced image caption ranking task. In general, the paper shows improvements on an image caption agreement task introduced in Kuhnle and Copestake, 2017. The paper seems to have weaknesses pertaining to the approach taken, clarity of presentation and comparison to baselines, which mean that the paper does not seem to meet the acceptance threshold for ICLR. See more detailed points below in Weaknesses. **Strengths** I like the high-level motivation of the work, that one needs to understand and establish that language or semantics can help learn better representations for images. I buy the premise and think the work addresses an important issue. **Weakness** Approach: * A major limitation of the model seems to be that one needs access to both images and attribute vectors at inference time to compute representations, which is a highly restrictive assumption (since inference networks are discriminative). The paper should explain how/if one can compute representations given just the image, for instance, say by not using amortized inference. The paper does propose to use an image-only encoder but that is intended in general as a modeling choice to explain statistics which are not captured by the attributes (in this case location and orientation as explained in the Introduction of the paper). Clarity: * Eqn. 5, LHS can be written more clearly as \hat{a}_k. * It would also be good to cite the following related work, which closely ties into the model of Eslami 2016, and is prior work: Efficient inference in occlusion-aware generative models of images, Jonathan Huang, Kevin Murphy. ICLR Workshops, 2016 * It would be good to clarify that the paper is focusing on the image caption agreement task from Kuhnle and Copestake, as opposed to generic visual question answering. * The claim that the paper works with natural language should be toned down and clarified. This is not natural language, firstly because the language in the dataset is synthetically generated and not “natural”. Secondly, the approach parses this “synthetic” language into structured tuples, which makes it even less natural. Also, page 3: what does “partial descriptions” mean? * Section 3: It would be good to explicitly draw out the graphical model for the proposed approach and clarify how it differs from prior work (Eslami, 2016). * Sec. 3.4 mentions that the “only image” encoder is used to obtain the representation for the image, but the “only image” encoder is expected to capture the “indescribable component” of the image; how, then, is the attribute information from the image captured in this framework? One cannot hope to do image caption association prediction without capturing the image attributes... * In general, the writing and presentation of the model seem highly fragmented, and it is not clear what the specifics of the overall model are. For instance, in the decoder, the paper mentions for the first time that there are variables “z”, but does not mention in the encoder how the variables “z” were obtained in the first place (Sec. 3.1).
For instance, it is also not clear if the paper is modeling variable length sequences in a similar manner to Eslami, 2016 or not, and if this work also has a latent variable [z, z_pres] at every timestep which is used in a similar manner to Eqn. 2 in Eslami, 2016. Sec. 3.4 “GEN Image Encoder” has some typo, it is not clear what the conditioning is within q(z) term. * Comparison to baselines: 1. How well does this model do against a baseline discriminative image caption ranking approach, similar to [D]? This seems like an important baseline to report for the image caption ranking task. 2. Another crucial baseline is to train the Attend, Infer, Repeat model on the ShapeWorld images, and then take the latent state inferred at every step by that model, and use those features instead of the features described in Sec. 3.4 “Gen Image Encoder” and repeat the rest of the proposed pipeline. Does the proposed approach still show gains over Attend Infer Repeat? 3. The results shown in Fig. 7 are surprising -- in general, it does not seem like a regular VAE would do so poorly. Are the number of parameters in the proposed approach and the baseline VAE similar? Are the choices of decoder etc. similar? Did the model used for drawing Fig. 7 converge? Would be good to provide its training curve. Also, it would be good to evaluate the AIR model from Eslami, 2016 on the same simple shapes dataset and show unconditional samples. If the claim from the work is true, that model should be just as bad as a regular VAE and would clearly establish that using language is helping get better image samples. * Page 2: In general the notion of separating the latent space into content and style, where we have labels for the “content” is an old idea that has appeared in the literature and should be cited accordingly. See [B] for an earlier treatment, and an extension by [A]. See also the Bivcca-private model of [C] which has “private” latent variables for vision similar to this work (this is relevant to Sec. 3.2.) References: [A]: Upchurch, Paul, Noah Snavely, and Kavita Bala. 2016. “From A to Z: Supervised Transfer of Style and Content Using Deep Neural Network Generators.” arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1603.02003. [B]: Kingma, Diederik P., Danilo J. Rezende, Shakir Mohamed, and Max Welling. 2014. “Semi-Supervised Learning with Deep Generative Models.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1406.5298. [C]: Wang, Weiran, Xinchen Yan, Honglak Lee, and Karen Livescu. 2016. “Deep Variational Canonical Correlation Analysis.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1610.03454. [D]: Kiros, Ryan, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. “Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1411.2539.
iclr_2018_S1EfylZ0Z
Many anomaly detection methods exist that perform well on low-dimensional problems; however, there is a notable lack of effective methods for high-dimensional spaces, such as images. Inspired by recent successes in deep learning, we propose a novel approach to anomaly detection using generative adversarial networks. Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous. We achieve state-of-the-art performance on standard image benchmark datasets and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.
In the paper, the authors proposed using a GAN for anomaly detection. In the method, we first train a generator g_\theta on a dataset consisting of only healthy data points. To evaluate whether a data point x is anomalous or not, we search for a latent representation z such that x \approx g_\theta(z). If such a representation z can be found, x is deemed to be healthy, and anomalous otherwise. For searching z, the authors proposed a gradient-descent based method that iteratively updates z. Moreover, the authors proposed updating the parameter \theta of the generator g_\theta. The authors claimed that this parameter update is one of the novelties of their method, making it different from the method of Schlegl et al. (2017). In the experiments, the authors showed that the proposed method attained the best AUC on MNIST and CIFAR-10. In my first reading of the paper, I felt that the baselines in the experiments are too primitive. Specifically, for KDE and OC-SVM, a naive PCA is used to reduce the data dimension. Nowadays, there are several publicly available CNNs that are trained on large image datasets such as ImageNet. Then, one can use such CNNs as feature extractors, which will give a better low-dimensional representation of the data than naive PCA. I believe that the performance of KDE and OC-SVM can be improved by using such feature extractors. Additionally, I found that some well-known anomaly detection methods are excluded from the comparison. In Emmott et al. (2013), which the authors refer to as related work, it was reported that Isolation Forest and Ensembles of GMMs performed well on several datasets (better than KDE and OC-SVM). It would be essential to add these methods as baselines to be compared with the proposed method. Overall, I think the experimental results are far from satisfactory. ### Response to Revision ### It is interesting to see that the features extracted from AlexNet are not helpful for anomaly detection. It would be interesting to see whether features extracted from middle layers are helpful or whether they are still unhelpful. I greatly appreciate the authors for their extensive experiments as a response to my comments. However, I have decided to keep my score unchanged, as the additional experiments have shown that the performance of the proposed method is not significantly better than the other methods. In particular, in MNIST, GMM performed better.
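For concreteness, the test-time z-search summarized above can be sketched as follows. This is a minimal version assuming the generator exposes its latent dimensionality as `latent_dim`; the proposed method additionally takes gradient steps on the generator parameters during the search, which is omitted here.

```python
import torch

def anomaly_score(x, generator, n_steps=100, lr=0.05):
    """Search for a latent code z with g(z) ~ x; a poor best reconstruction
    indicates an anomaly. Minimal sketch of the procedure described above."""
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = ((generator(z) - x) ** 2).mean()   # reconstruction error
        loss.backward()
        opt.step()
    return loss.item()                            # larger value => more anomalous
```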
iclr_2018_Sk9yuql0Z
Published as a conference paper at ICLR 2018 MITIGATING ADVERSARIAL EFFECTS THROUGH RANDOMIZATION Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is publicly available at https://github.com/cihangxie/NIPS2017_adv_challenge_defense.
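As a rough illustration of the two inference-time operations described above (random resizing followed by random zero padding), a minimal sketch; the specific sizes, interpolation mode, and the assumption that the input is no larger than the output canvas are illustrative rather than the paper's exact settings.

```python
import random
import torch
import torch.nn.functional as F

def randomize(x, out_size=331):
    """Random resize followed by random zero padding, applied at inference.
    x: image batch of shape (N, C, H, W); assumes x is no larger than out_size."""
    new_size = random.randint(x.shape[-1], out_size)               # random target size
    x = F.interpolate(x, size=(new_size, new_size), mode='nearest')
    pad_total = out_size - new_size
    left = random.randint(0, pad_total)                            # random placement
    top = random.randint(0, pad_total)
    return F.pad(x, (left, pad_total - left, top, pad_total - top), value=0.0)
```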
The authors propose a simple defense against adversarial attacks, which is to add randomization to the input of the CNNs. They experiment with different CNNs and published adversarial training techniques and show that randomized inputs mitigate adversarial attacks. Pros: (+) The idea introduced is simple and flexible enough to be used with any CNN architecture (+) Experiments on ImageNet1k demonstrate its effectiveness Cons: (-) Experiments are not thoroughly explained (-) Novelty is extremely limited (-) Some baselines missing The experimental section of the paper was rather confusing. The authors should explain the experiments and the settings in the table, as those are not very clear. In particular, it was not clear whether the defense model was trained with the input randomization layers. Also, in Tables 1-6, how was the target model trained? How do the training procedures of the target vs. defense model differ? In those tables, what is the testing procedure for the target model and how does it compare to the defense model? The gap between the target and defense model in Table 4 (ensemble pattern attack scenario) shrinks for single-step attack methods. This means that when the attacker is aware of the randomization parameters, the effect of randomization might diminish. A baseline that reports the performance when the attacker is fully aware of the randomization of the defender (parameters, patterns etc.) is missing but would be very useful. While the experiments show that the randomization layers mitigate the effect of adversarial attacks, it's not clear whether the effectiveness of this very simple approach is heavily biased towards the published ways of generating adversarial attacks and the particular problem (i.e. classification). The form of attacks studied in the paper is that of additive noise. But there are many types of attacks that could be closely related to the randomization procedure of the input and that could lead to very different results.
iclr_2018_r1vccClCb
We propose a novel unsupervised representation learning framework called neighbor-encoder in which domain knowledge can be trivially incorporated into the learning process without modifying the general encoder-decoder architecture. In contrast to the autoencoder, which reconstructs the input data, the neighbor-encoder reconstructs the input data's neighbors. The proposed neighbor-encoder can be considered as a generalization of the autoencoder, as the input data can be treated as the nearest neighbor of itself with zero distance. By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with an appropriate definition of similarity or distance between objects. As such, any existing similarity search algorithm can be easily integrated into our framework. Applications of other algorithms (e.g., association rule mining) in our framework are also possible since the concept of "neighbor" is an abstraction which can be appropriately defined differently in different contexts. We have demonstrated the effectiveness of our framework in various domains, including images, time series, music, etc., with various neighbor definitions. Experimental results show that the neighbor-encoder outperforms the autoencoder in most of the scenarios we considered.
This paper presents a variant of the auto-encoder that relaxes the decoder targets to be neighbors of a data point. Different from the original auto-encoder, where the data point x and the decoder output \hat{x} are forced to be close, the neighbor-encoder encourages the decoder output to be similar to the neighbors of the input data point. By considering the neighbor information, the decoder targets would have smaller intra-class distances, and thus larger inter-class distances, which helps to learn a better separated latent representation of the data in terms of data clusters. The authors conduct experiments on several real but relatively small-scale data sets, and demonstrate the improvements of the learned latent representations from using neighbors as targets. The method of neighbor prediction is a simple and small modification of the original auto-encoder, but seems to provide a way to augment the targets such that the intra-class distance of the decoder targets can be tightened. Improvements in the conducted experiments seem significant compared to the most basic auto-encoder. Major issues: There are some unaddressed theoretical questions. The optimal solution for predicting the set of neighbor points under the mean-squared metric is to predict their average, which is not well justified as the averaged image can easily fall off the data manifold. This may lead to a more blurry reconstruction when k increases, even though the intra-class targets are tight. It can also in turn harm the latent representation when Euclidean neighbors are not actually similar (e.g. images in cifar10/imagenet that are not as simple as 10 digits). This seems to be a defect of the neighbor-encoder method and is not discussed in the paper. The data sets used in the experiments are relatively small and simple; larger-scale experiments should be conducted. The fluctuations in Figures 9 and 10 suggest significant variance in the results. Also, more complicated data/images can decrease the actual similarities of Euclidean neighbors, thus affecting the results. The baselines are weak. Only the most basic auto-encoder is compared; no additional variants or other data augmentation techniques are compared. It is possible that other variants improve the basic auto-encoder in similar ways. Some results are not very well explained. It seems the performance increases monotonically as the number of neighbors increases (Figures 5, 9, 10). Will this continue, or when will the performance decrease? I would expect it to decrease, as far-away neighbors will be dissimilar. The authors could either show the nearest-neighbor figures or their statistics, and explain when and why the performance is expected to decrease. Some notation is confusing and needs to be improved. For example, X and Y are actually the same set of images, so the separation is a bit confusing; y_i \in y in the last paragraph of page 4 is incorrect and should be something like y_i \in N(y).
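For concreteness, the training-objective change relative to a standard auto-encoder can be sketched as below. The encoder/decoder and the pre-computed neighbor set are placeholders, and averaging the squared error over the k neighbors is one natural reading of the reconstruction target.

```python
import torch

def neighbor_encoder_loss(encoder, decoder, x, neighbors):
    """Reconstruct neighbors of x rather than x itself. `neighbors` is an
    iterable of pre-computed neighbors of x (using only x itself recovers the
    ordinary auto-encoder). Under squared error the optimum tends toward the
    neighbors' average, which is the blurriness concern raised above for large k."""
    x_hat = decoder(encoder(x))
    return torch.stack([((x_hat - y) ** 2).mean() for y in neighbors]).mean()
```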
iclr_2018_r17Q6WWA-
Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains. While CNNs are particularly wellsuited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant. Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem. The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task. While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem. In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in a MTL framework. We define connectivity in terms of two distinct non-linear transformation blocks. One aggregates taskspecific features into global features, while the other merges back the global features with each task-specific network. Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features. To validate our approach, we employed facial landmark detection (FLD) datasets as they are readily amenable to MTL, given the number of tasks they include. Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches. We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related.
This paper proposes a multi-pathway neural network for facial landmark detection with multi-task learning. In particular, each pathway corresponds to one task, and the intermediate features are fused at multiple layers. The fused features are added to the task-specific pathway using a residual connection (the input of the residual branch is the concatenation of the task-specific features and the fused features). The residual connection allows each pathway to selectively use the information from other pathways and focus on its own task. This paper is well written. The proposed neural network architectures are reasonable. The residual connection can help each pathway to focus on its own task (suggested by Figure 8). This phenomenon is not guaranteed by the training objective but happens automatically due to the architecture, which is interesting. The proposed model outperforms several baseline models. On MTFL, when using the AlexNet, the improvement is significant; when using the ResNet18, the improvement is encouraging but not so significant. On AFLW (trained on MTFL), the improvements are significant in both cases.
What is missing is the comparison with other methods (besides the baseline). For example, it would be helpful to compare with existing non-multitask learning methods, like TCDCN (Zhang et al., 2014) (it seems to achieve a 25% failure rate on AFLW, which is lower than the numbers in Figure 5), and multi-task learning methods, like MTCNN (Zhang et al., 2016). It is important to show that the proposed multi-task learning method is useful in practice. In addition, many papers take the average error as the performance metric. Reporting the average error as well would make the experiments more comprehensive.
The proposed architecture is quite large. It scales linearly with the number of tasks, which is not ideal. It is also not straightforward to add new tasks and fine-tune an already trained model. In Figure 5 (left), it is a bit weird that the pretrained model underperforms the non-pretrained one. I am likely to change the rating based on the comparison with other methods.
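As a rough sketch of the fusion pattern described above (PyTorch-style Python with assumed 1x1 convolutions and channel counts; an illustration of the review's description, not the authors' exact implementation):

    import torch
    import torch.nn as nn

    class CollaborativeBlock(nn.Module):
        def __init__(self, channels, n_tasks):
            super().__init__()
            # aggregate all task-specific maps into shared "global" features
            self.aggregate = nn.Conv2d(channels * n_tasks, channels, kernel_size=1)
            # one merge transform per task, fed with [task-specific ; global] features
            self.merge = nn.ModuleList(
                [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in range(n_tasks)])

        def forward(self, task_feats):  # list of (B, channels, H, W) tensors, one per task
            global_feats = self.aggregate(torch.cat(task_feats, dim=1))
            # residual connection: each pathway keeps its own features and adds back a
            # transformation of the concatenated task-specific and global features
            return [x + m(torch.cat([x, global_feats], dim=1))
                    for x, m in zip(task_feats, self.merge)]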
iclr_2018_ByqFhGZCW
Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes. It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples. The cat-and-mouse game nature of attacks and defenses raises the question of the presence of equilibria in the dynamics. In this paper, we present a neural-network based attack class to approximate a larger but intractable class of attacks, and formulate the attacker-defender interaction as a zero-sum leader-follower game. We present sensitivity-penalized optimization algorithms to find minimax solutions, which are the best worst-case defenses against whitebox attacks. Advantages of the learning-based attacks and defenses compared to gradient-based attacks and defenses are demonstrated with MNIST and CIFAR-10.
The authors describe a mechanism for defending against adversarial learning attacks on classifiers. They first consider the dynamics generated by the following procedure. They begin by training a classifier, generating attack samples using FGSM, then hardening the classifier by retraining with adversarial samples, generating new attack samples for the retrained classifier, and repeating. They next observe that, since FGSM is given by a simple perturbation of the sample point by the gradient of the loss, the fixed point of the above dynamics can be optimized for directly using gradient descent. They call this approach Sens-FGSM, and evaluate it empirically against the various iterates of the above approach. They then generalize this approach to an arbitrary attacker strategy given by some parameter vector (e.g. a neural net for generating adversarial samples). In this case, the attacker and defender are playing a minimax game, and the authors propose finding the minimax (or maximin) parameters using an algorithm which alternates between maximization and minimization gradient steps. They conclude with empirical observations about the performance of this algorithm.
The paper is well-written and easy to follow. However, I found the empirical results to be a little underwhelming. Sens-FGSM outperforms the adversarial training defenses tuned for the "wrong" iteration, but it does not appear to perform particularly well, with error rates well above 20%. How does it stack up against other defense approaches (e.g. https://arxiv.org/pdf/1705.09064.pdf)? Furthermore, what is the significance of FGSM-curr (FGSM-81) for Sens-FGSM? It is my understanding that Sens-FGSM is not trained to a particular iteration of the "cat-and-mouse" game. Why, then, does Sens-FGSM provide a consistently better defense against FGSM-81?
With regard to the second part of the paper, using gradient methods to solve a minimax problem is not especially novel (e.g., Goodfellow et al.), so I would have liked to see more thorough experiments here as well. For example, it's unlikely that the defender would ever know the attack network utilized by an attacker. How robust is the defense against samples generated by a different attack network? The authors seem to address this in section 5 by stating that the minimax solution is not meaningful for other network classes. However, this is a bit unsatisfying. Any defense can be *evaluated* against samples generated by any attacker strategy. Is it the case that the defenses fall flat against samples generated by different architectures?
Minor Comments: Section 3.1, First Line. ”f(ul(g(x),y))” appears to be a mistake.
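For reference, a minimal sketch of the two ingredients discussed above: the FGSM perturbation and an alternating ascent/descent update for the attacker-defender minimax game. This is illustrative PyTorch-style Python with assumed objects (model, loss_fn, optimizers); it is not the authors' sensitivity-penalized algorithm.

    import torch

    def fgsm(model, loss_fn, x, y, eps):
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), y).backward()
        # perturb the input in the direction of the sign of the loss gradient
        return (x + eps * x.grad.sign()).detach()

    def minimax_step(defender, attacker, loss_fn, x, y, opt_def, opt_att):
        # attacker ascent step: maximize the defender's loss on attacked inputs
        opt_att.zero_grad()
        (-loss_fn(defender(attacker(x)), y)).backward()
        opt_att.step()
        # defender descent step: minimize the loss on the (fixed) attacked inputs
        opt_def.zero_grad()
        loss_fn(defender(attacker(x).detach()), y).backward()
        opt_def.step()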
iclr_2018_rkrWCJWAW
Truncated Backpropagation Through Time (truncated BPTT, Jaeger (2005)) is a widespread method for learning recurrent computational graphs. Truncated BPTT keeps the computational benefits of Backpropagation Through Time (BPTT Werbos (1990)) while relieving the need for a complete backtrack through the whole data sequence at every step. However, truncation favors short-term dependencies: the gradient estimate of truncated BPTT is biased, so that it does not benefit from the convergence guarantees from stochastic gradient theory. We introduce Anticipated Reweighted Truncated Backpropagation (ARTBP), an algorithm that keeps the computational benefits of truncated BPTT, while providing unbiasedness. ARTBP works by using variable truncation lengths together with carefully chosen compensation factors in the backpropagation equation. We check the viability of ARTBP on two tasks. First, a simple synthetic task where careful balancing of temporal dependencies at different scales is needed: truncated BPTT displays unreliable performance, and in worst case scenarios, divergence, while ARTBP converges reliably. Second, on Penn Treebank character-level language modelling Mikolov et al. (2012), ARTBP slightly outperforms truncated BPTT. Backpropagation Through Time (BPTT) Werbos (1990) is the de facto standard for training recurrent neural networks. However, BPTT has shortcomings when it comes to learning from very long sequences: learning a recurrent network with BPTT requires unfolding the network through time for as many timesteps as there are in the sequence. For long sequences this represents a heavy computational and memory load. This shortcoming is often overcome heuristically, by arbitrarily splitting the initial sequence into subsequences, and only backpropagating on the subsequences. The resulting algorithm is often referred to as Truncated Backpropagation Through Time (truncated BPTT, see for instance Jaeger (2005)). This comes at the cost of losing long term dependencies. We introduce Anticipated Reweighted Truncated BackPropagation (ARTBP), a variation of truncated BPTT designed to provide an unbiased gradient estimate, accounting for long term dependencies. Like truncated BPTT, ARTBP splits the initial training sequence into subsequences, and only backpropagates on those subsequences. However, unlike truncated BPTT, ARTBP splits the training sequence into variable size subsequences, and suitably modifies the backpropagation equation to obtain unbiased gradients. Unbiasedness of gradient estimates is the key property that provides convergence to a local optimum in stochastic gradient descent procedures. Stochastic gradient descent with biased estimates, such as the one provided by truncated BPTT, can lead to divergence even in simple situations and even with large truncation lengths (Fig. 3). ARTBP is experimentally compared to truncated BPTT. On truncated BPTT failure cases, typically when balancing of temporal dependencies is key, ARTBP achieves reliable convergence thanks to unbiasedness. On small-scale but real world data, ARTBP slightly outperforms truncated BPTT on the test case we examined. ARTBP formalizes the idea that, on a day-to-day basis, we can perform short term optimization, but must reflect on long-term effects once in a while; ARTBP turns this into a provably unbiased overall gradient estimate. Notably, the many short subsequences allow for quick adaptation to the data, while preserving overall balance. 
1 RELATED WORK
BPTT Werbos (1990) and its truncated counterpart Jaeger (2005) are nearly uncontested in the recurrent learning field. Nevertheless, BPTT is hardly applicable to very long training sequences, as it requires storing and backpropagating through a network with as many layers as there are timesteps Sutskever (2013). Storage issues can be partially addressed as in Gruslys et al. (2016), but at an increased computational cost. Backpropagating through very long sequences also implies performing fewer gradient descent steps, which significantly slows down learning Sutskever (2013). Truncated BPTT heuristically solves BPTT deficiencies by chopping the initial sequence into evenly sized subsequences. Truncated BPTT truncates gradient flows between contiguous subsequences, but maintains the recurrent hidden state of the network. Truncation biases gradients, removing any theoretical convergence guarantee. Intuitively, truncated BPTT has trouble learning dependencies above the range of truncation.
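A minimal sketch of the variable-truncation idea described above, under the simplifying assumption that truncation points are drawn independently with probability p at every step; the 1/(1-p) factor shown here is the natural compensation that keeps the expected gradient unbiased and should be read as an illustration rather than the paper's exact weighting. A real implementation would also backpropagate and update at each truncation point to retain the memory savings of truncated BPTT.

    import torch

    def scale_grad(x, c):
        # identity in the forward pass, multiplies the gradient by c in the backward pass
        return c * x + (1.0 - c) * x.detach()

    def artbp_like_rollout(rnn_cell, loss_fn, inputs, targets, h, p=0.1):
        total_loss = 0.0
        for x, y in zip(inputs, targets):
            h = rnn_cell(x, h)
            total_loss = total_loss + loss_fn(h, y)
            if torch.rand(()) < p:
                h = h.detach()                       # truncate: cut the gradient here
            else:
                h = scale_grad(h, 1.0 / (1.0 - p))   # compensate for possible truncation
        return total_loss, h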
This is an interesting paper. It is well known that TBPTT is biased because of its fixed truncation length. The authors propose to make it unbiased by sampling different truncation lengths and hence changing the optimization procedure, which corresponds to adding noise to the gradient estimates in a way that leads to unbiased gradients.
Pros:
- It is a well-written and easy-to-follow paper.
- If I understand correctly, they change the optimization procedure so that the proposed approach is able to find a local minimum, which was not possible with truncated backpropagation through time.
- It is interesting to see in their PTB results that they get a better validation score compared to truncated BPTT.
Cons:
- Though the approach is interesting, the results are quite preliminary, especially given that their results are worse than the LSTM baseline (1.40 vs. 1.38). The authors note that this might be because they apply the method without sub-sequence shuffling.
- I'm not convinced of the approach yet. The authors could run some large-scale experiments on datasets like Text8 or on speech modelling.
A few points:
- If I'm correct that the proposed approach indeed changes the optimization procedure, then I'd like to know what the authors think about the exposure bias issue. It is well known [1, 2] that we can't sample from RNNs for more steps than they were trained on (the difference between teacher forcing and a free-running RNN). I'd like to know how their method performs in such a regime (where you sample for more steps than you have trained for).
- Another thing I'd like to see is how this model compares to truncated backpropagation as the sequence length increases. For example, for language modelling on PTB, how do the results vary when you change the length of the input sequence? I'd like to see a graph with the length of the input sequence on the x-axis and the bpc score (for PTB) on the y-axis, and how it compares to truncated backpropagation through time.
- The PTB dataset still does not have very long-term dependencies, so I'm curious what the authors think about using their method for something like speech modelling or other large-scale experiments.
- I'd expect the proposed approach to be more computationally expensive than truncated backpropagation through time. I don't think the authors mention this anywhere in the paper. How much time does a single update take compared to truncated backpropagation through time?
- Does the proposed approach help the flow of gradients?
- In practice, people use gradient clipping when training RNNs, which also biases the gradient. Can the proposed method be used for training on longer sequences?
[1] Scheduled Sampling for Sequence Prediction with RNNs, https://arxiv.org/abs/1506.03099
[2] Professor Forcing, https://arxiv.org/abs/1610.09038
Overall, it is an interesting paper that requires some more analysis to be published at this conference. I'd be very happy to increase my score if the authors can provide the results I have asked for.
iclr_2018_S1J2ZyZ0Z
INTERPRETABLE COUNTING FOR VISUAL QUESTION ANSWERING Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting.
------------------ Summary: ------------------
This work introduces a discrete and interpretable model for answering visually grounded counting questions. The proposed model executes a sequential decision process in which it 1) selects an image region to "add to the count" and then 2) updates the likelihood of selecting other regions based on their relationships (defined broadly) to the selected region. After substantial module pre-training, the model is trained end-to-end with the REINFORCE policy gradient method (with the recently proposed self-critical sequence training baseline). Compared to existing approaches for counting (or VQA in general), this approach not only produces lower error but also provides a more human-intuitive discrete, instance-pointing representation of counting.
----------------------- Preliminary Evaluation: -----------------------
The paper presents an interesting approach that seems to outperform existing methods. More importantly in my view, the model treats counting as a discrete, human-intuitive process. The presentation and experiments are okay overall, but I have a few questions and requests below that I feel would strengthen the submission.
------------------ Strengths: ------------------
- I generally agree with the authors that approaching counting as a region-set selection problem provides an interpretable and human-intuitive methodology that seems more appropriate than attentional or monolithic approaches.
- To the best of my knowledge, the writing does a good job of placing the work in the context of existing literature.
- The dataset construction is given appropriate attention to restrict its instances to counting questions and will be made available to the public.
- The model outperforms existing approaches given the same visual and linguistic inputs / encodings. While I find the improvements in RMSE a bit underwhelming, I'm still generally positive about the results given the improved accuracy and human-intuitiveness of the grounded outputs.
- I appreciated the analysis of the effect of "commonness" and think it provides interesting insight into the generalization of the proposed model.
- Qualitative examples are interesting.
------------------ Weaknesses: ------------------
- There is a lot going on in this paper as far as model construction and training procedures go. In its current state, many of the details are pushed to the supplement, such that the main paper would be insufficient for replication. The authors also do not promise a code release.
- Maybe it is just my unfamiliarity with it, but the caption-grounding auxiliary task feels insufficiently introduced in the main paper. I also find it a bit discouraging that the details of joint training are relegated to the supplementary material, especially given that UpDown is not using it! I would like to see an ablation of the proposed model without joint training.
- Both the IRLC and SoftCount models are trained with objectives that are aware of the ordinal nature of the output space (such that predicting 2 when the answer is 20 is worse than predicting 19). Unfortunately, the UpDown model is trained with cross-entropy and lacks access to this notion. I believe that this difference results in the large gap in RMSE between IRLC/SoftCount and UpDown. Ideally, an updated version of UpDown trained under an order-aware loss would be presented during the rebuttal period (a rough sketch of one such loss follows this review).
Barring that due to time constraints, I would otherwise like to see some analysis to explore this difference, maybe checking to see if UpDown is putting mass in smooth blobs around the predicted answer (though there may be better ways to see if UpDown has captured similar notions of output order as the other models).
- I would like to see a couple of simple baselines evaluated on HowMany-QA. Specifically, I think the paper would be stronger if results were put in context with a question-only model and a model which just outputs the mean training count. Inter-human agreement would also be interesting to discuss (especially for high counts).
- The IRLC model has a scoring function with significantly larger (4x) capacity than the baseline methods. If this is restricted, do we see significant changes to the results?
- This is a relatively mild complaint. This model is more human-intuitive than existing approaches, but when it does make an error by selecting incorrect objects or terminating early, it is no more transparent about the cause of these errors than any other approach. As such, claims about interpretability should be made cautiously.
------------------ Curiosities: ------------------
- In my experience, Visual Genome annotations are often noisy, with many different labels being applied to the same object in different images. For per-image counts, I don't imagine this will be too troubling, but I was curious if you ran into any challenges.
- It looks like both IRLC and UpDown consistently either get the correct count (for small counts) or underestimate. This is not the Gaussian sort of regression error that we might expect from a counting problem.
- Could you speak to the sensitivity of the proposed model with respect to different loss weightings? I saw the values used in Section B of the supplement and they seem somewhat specific.
------------------ Minor errors: ------------------
[5.1 end of paragraph 2] 'that accuracy and RSME and not' -> 'that accuracy and RSME are not'
[Fig 9 caption] 'The initial scores are lack' -> 'The initial scores lack'
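One simple example of the kind of order-aware objective suggested in the Weaknesses section (purely illustrative, not taken from the paper): keep UpDown's softmax over counts, but penalize the expected absolute difference from the true count, so that probability mass far from the answer costs more than mass nearby.

    import torch
    import torch.nn.functional as F

    def expected_l1_count_loss(logits, target):
        # logits: (batch, max_count + 1) scores over counts 0..max_count
        # target: (batch,) integer ground-truth counts
        probs = F.softmax(logits, dim=1)
        counts = torch.arange(logits.size(1), device=logits.device, dtype=torch.float)
        abs_diff = (counts.unsqueeze(0) - target.unsqueeze(1).float()).abs()
        # expected |predicted count - true count| under the model's distribution
        return (probs * abs_diff).sum(dim=1).mean()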
iclr_2018_SJLlmG-AZ
Published as a conference paper at ICLR 2018 UNDERSTANDING IMAGE MOTION WITH GROUP REPRESENTATIONS Motion is an important signal for agents in dynamic environments, but learning to represent motion from unlabeled video is a difficult and underconstrained problem. We propose a model of motion based on elementary group properties of transformations and use it to train a representation of image motion. While most methods of estimating motion are based on pixel-level constraints, we use these group properties to constrain the abstract representation of motion itself. We demonstrate that a deep neural network trained using this method captures motion in both synthetic 2D sequences and real-world sequences of vehicle motion, without requiring any labels. Networks trained to respect these constraints implicitly identify the image characteristic of motion in different sequence types. In the context of vehicle motion, this method extracts information useful for localization, tracking, and odometry. Our results demonstrate that this representation is useful for learning motion in the general setting where explicit labels are difficult to obtain.
The paper presents a method that, given a sequence of frames, estimates a corresponding motion embedding as the hidden state of an RNN (over convolutional features) at the last frame of the sequence. The parameters of the motion embedding are trained to preserve the associativity and invertibility properties of motion, where the frame sequences have been recomposed (from video frames) in various ways to create pairs of frame sequences with those (automatically obtained) properties. This means the motion embedding is essentially trained without any human annotations. Experimentally, the paper shows that on synthetic moving MNIST frame sequences the motion embedding discovers different patterns of motion, while it ignores image appearance (i.e., the digit label). The paper also shows that a linear regressor trained on KITTI on top of the unsupervised motion embedding to estimate camera motion performs better than chance. Q to the authors: what labelled data were used to train the linear regressor in the KITTI experiment? Empirically, it appears that supervision by preserving group transformations may not be immensely valuable for learning motion representations.
Pros
1) The neural architecture for motion embedding computation appears reasonable.
2) The paper tackles an interesting problem.
Cons
1) For a big part of the introduction, the paper refers to the problem of "learning motion" or "understanding motion" without being specific about what it means by that.
2) The empirical results are not convincing of the strength of imposing group transformations for self-supervised learning of motion embeddings.
3) The KITTI experiment is not well explained, as it is not clear how this regressor was trained to predict egomotion from the motion embedding.
iclr_2018_H1Dy---0Z
Published as a conference paper at ICLR 2018 DISTRIBUTED PRIORITIZED EXPERIENCE REPLAY We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time.
A parallel approach to DQN training is proposed, based on the idea of having multiple actors collecting data in parallel, while a single learner trains the model from experiences sampled from a central replay memory. Experiments on Atari game playing and two MuJoCo continuous control tasks show significant improvements in terms of training time and final performance compared to previous baselines. The core idea is pretty straightforward, but the paper does a very good job at demonstrating that it works very well when implemented efficiently over a large cluster (which is not trivial). I also appreciate the various experiments analyzing the impact of several settings (instead of just reporting a new SOTA). Overall I believe this is definitely a solid contribution that will benefit both practitioners and researchers... as long as they have the computational resources to do so!
There are essentially two more things I would have really liked to see in this paper (maybe for future work?):
- Using all Rainbow components
- Using multiple learners (with actors cycling between them, for instance)
Sharing your custom Tensorflow implementation of prioritized experience replay would also be a great bonus!
Minor points:
- Figure 1 does not seem to be referenced in the text.
- « In principle, Q-learning variants are off-policy methods » => not with multi-step returns, unless you do some kind of correction! I think it is important to mention this even if it works well in practice (just saying « furthermore we are using a multi-step return » is too vague).
- When comparing the Gt targets for DQN vs DPG, it strikes me that DPG uses the delayed weights phi- to select the action, while DQN uses the current weights theta. I am curious to know if there is a good motivation for this and what impact this can have on the training dynamics.
- In the caption of Fig. 5, 25K should be 250K.
- In appendix A, why duplicate memory data instead of just using a smaller memory size?
- In appendix D, it looks like experiences removed from memory are chosen by sampling instead of just removing the oldest ones as in DQN. Why use a different scheme?
- Why store rewards and gammas at each time step in memory instead of just the total discounted reward?
- It would have been better to re-use the same colors as in Fig. 2 for the plots in the appendix.
- Would Fig. 10 be more interesting with the full plot and a log scale on the x axis?
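Since the custom prioritized replay implementation comes up above, here is a minimal single-process sketch of proportional prioritization with importance-sampling correction (linear-time sampling instead of a sum-tree, and no sharding across actors; a simplification, not the paper's distributed implementation):

    import numpy as np

    class PrioritizedReplay:
        def __init__(self, capacity, alpha=0.6):
            self.capacity, self.alpha = capacity, alpha
            self.data, self.prios, self.pos = [], [], 0

        def add(self, transition, priority):
            if len(self.data) < self.capacity:
                self.data.append(transition); self.prios.append(priority)
            else:  # overwrite the oldest entry once the buffer is full
                self.data[self.pos], self.prios[self.pos] = transition, priority
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size, beta=0.4):
            p = np.asarray(self.prios) ** self.alpha
            p /= p.sum()
            idx = np.random.choice(len(self.data), batch_size, p=p)
            # importance-sampling weights correct for the non-uniform sampling
            w = (len(self.data) * p[idx]) ** (-beta)
            return idx, [self.data[i] for i in idx], w / w.max()

        def update_priorities(self, idx, td_errors, eps=1e-6):
            for i, e in zip(idx, td_errors):
                self.prios[i] = abs(float(e)) + eps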
iclr_2018_r16Vyf-0-
Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. We propose another extension of self-attention allowing it to efficiently take advantage of the two-dimensional nature of images. While conceptually simple, our generative models trained on two image data sets are competitive with or significantly outperform the current state of the art in autoregressive image generation on two different data sets, CIFAR-10 and ImageNet. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we show that our super-resolution models improve significantly over previously published autoregressive super-resolution models. Images they generate fool human observers three times more often than the previous state of the art. Table 1: Three outputs of a CelebA super-resolution model followed by three image completions by a conditional CIFAR-10 model, with input, model output and the original from left to right
Summary This paper extends self-attention layers (Vaswani et al., 2017) from sequences to images and proposes to use the layers as part of PixelCNNs (van den Oord et al., 2016). The proposed model is evaluated in terms of visual appearance of samples and log-likelihoods. The authors find a small improvement in terms of log-likelihood over PixelCNNs and that super-resolved CelebA images are able to fool human observers significantly more often than PixelRNN based super-resolution (Dahl et al., 2017). Review Autoregressive models are of large interest to the ICLR community and exploring new architectures is a valuable contribution. Using self-attention in autoregressive models is an intriguing idea. It is a little bit disappointing that the added model complexity only yields a small improvement compared to the more straight-forward modifications of the PixelCNN++. I think the paper would benefit from a little bit more work, but I am open to adjusting my score based on feedback. I find it somewhat surprising that the proposed model is only slightly better in terms of log-likelihood than a PixelRNN, but much better in terms of human evaluation – given that both models were optimized for log-likelihood. Was the setup used with Mechanical Turk exactly the same as the one used by Dahl et al.? These types of human evaluations can be extremely sensitive to changes in the setup, even the phrasing of the task can influence results. E.g., presenting images scaled differently can mask certain artifacts. In addition, the variance between subjects can be very high. Ideally, each method included in the comparison would be re-evaluated using the same set of observers. Please include error bars. The CelebA super-resolution task furthermore seems fairly limited. Given the extreme downsampling of the input, the task becomes similar to simply generating any realistic image. A useful baseline would be the following method: Store the entire training set. For a given query image, look for the nearest neighbor in the downsampled space, then return the corresponding high-resolution image. This trivial method might not only perform well, it also highlights a flaw in the evaluation: Any method which returns stored high-resolution images – even if they don’t match the input – would perform at 50%. To fix this, the human observers should also receive the low-resolution image and be asked to identify the correct corresponding high-resolution image. Using multiplicative operations to model images seems important. How does the self-attention mechanism relate to “gated” convolutions used in PixelCNNs? Could gated convolutions not also be considered a form of self-attention? The presentation/text could use some work. Much of the text assumes that the reader is familiar with Vaswani et al. (2017) but could easily be made more self-contained by directly including the definitions used. E.g., the encoding of positions using sines and cosines or the multi-head attention model. I also felt too much of the architecture is described in prose and could be more efficiently and precisely conveyed in equations. On page 7 the authors write “we believe our cherry-picked images for various classes to be of higher perceptual quality”. This is a meaningless result, not only because the images were cherry-picked. Generating realistic images is trivial - you just need to store the training images. 
Analyzing samples generated by a generative model (outside the context of an application) should therefore only be used for diagnostic purposes or to build intuitions but not to judge the quality of a model. Please consider rephrasing the last sentence of the abstract. Generating images which “look pretty cool” should not be the goal of a serious machine learning paper or a respected machine learning conference.
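The nearest-neighbour baseline suggested in the review above is simple enough to state in a few lines (a Python sketch with an assumed downsample helper and brute-force search over the stored training set):

    import numpy as np

    def nn_super_resolution(query_lr, train_hr, downsample):
        # query_lr: low-resolution query image, shape (h, w, c)
        # train_hr: stored high-resolution training images, shape (N, H, W, c)
        # downsample: assumed helper mapping a high-res image into the low-res space
        train_lr = np.stack([downsample(img) for img in train_hr])
        dists = ((train_lr - query_lr[None]) ** 2).sum(axis=(1, 2, 3))
        # return the stored high-res image whose downsampled version is closest to the query
        return train_hr[int(np.argmin(dists))]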
iclr_2018_Sk7KsfW0-
Published as a conference paper at ICLR 2018 LIFELONG LEARNING WITH DYNAMICALLY EXPANDABLE NETWORKS We propose a novel deep network architecture for lifelong learning, which we refer to as the Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them. We validate DEN on multiple public datasets under lifelong learning scenarios, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch counterparts with substantially fewer parameters. Further, the obtained network, fine-tuned on all tasks, achieves significantly better performance than the batch models, which shows that it can be used to estimate the optimal network structure even when all tasks are available in the first place.
The paper was clearly written and pleasant to read. I liked the use of sparsity- and group-sparsity-promoting regularizers to select connections and decide how to expand the network. A strength of the paper is that the proposed algorithm is interesting and intuitive, even if relatively complex, as it requires chaining a sequence of sub-algorithms. It was good to see the impact of each sub-algorithm studied separately (to some degree) in the experimental section. The results are overall strong. It’s hard for me to judge the novelty of the approach though, as I’m not an expert on this topic. Just a few points below: - The experiments focus on a relevant continual learning problem, where each new task corresponds to learning a new class. In this setup, the method consistently outperforms EWC (e.g., Fig. 3), as well as the progressive network baseline. Did the authors also check the performance on the permuted MNIST benchmark, as studied by Kirkpatrick et al. and Zenke et al.? It would be important to see how the method fares in this setting, where the tasks are the same, but the inputs have to be remapped, and network expansion is less of an issue. - Fig. 4 would be clearer if the authors showed also the performance and how much the selected connection subsets would change if instead of using the last layer lasso + BFS, the full L1-penalized problem was solved, while keeping the rest of the pipeline intact. - Still regarding the proposed selective retraining, the special role played by the last hidden layer seems slightly arbitrary. It may well be that it has the highest task-specificity, though this is not trivial to me. This special role might become problematic when dealing with deeper networks.
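For concreteness, the two regularizers mentioned in the first paragraph typically look like the following (a generic PyTorch-style sketch; grouping the group-lasso term by rows, i.e. one group per unit, is an assumption for illustration rather than the paper's exact grouping):

    import torch

    def l1_penalty(weight):
        # element-wise sparsity: drives individual connections to zero
        return weight.abs().sum()

    def group_lasso_penalty(weight):
        # group sparsity with one group per output unit (row): drives entire units to
        # zero, which is what allows unused newly added units to be pruned away
        return weight.pow(2).sum(dim=1).sqrt().sum()

    # total_loss = task_loss + lam1 * l1_penalty(W) + lam2 * group_lasso_penalty(W_new)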
iclr_2018_HkXWCMbRW
Published as a conference paper at ICLR 2018 TOWARDS IMAGE UNDERSTANDING FROM DEEP COMPRESSION WITHOUT DECODING Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods. Since the encoders and decoders in DNN-based compression methods are neural networks with feature-maps as internal representations of the images, we directly integrate these with architectures for image understanding. This bypasses decoding of the compressed representation into RGB space and reduces computational cost. Our study shows that accuracies comparable to networks that operate on compressed RGB images can be achieved while reducing the computational complexity up to 2×. Furthermore, we show that synergies are obtained by jointly training compression networks with classification networks on the compressed representations, improving image quality, classification accuracy, and segmentation performance. We find that inference from compressed representations is particularly advantageous compared to inference from compressed RGB images for aggressive compression rates.
Thanks for addressing most of the issues. I changed my given score from 3 to 6. Summary: This work explores the use of learned compressed image representation for solving 2 computer vision tasks without employing a decoding step. The paper claims to be more computationally and memory efficient compared to the use of original or the decompressed images. Results are presented on 2 datasets "Imagenet" and "PASCAL VOC 2012". They also jointly train the compression and classification together and empirically shows it can improve both classification and compression together. Pros: + The idea of learning from a compressed representation is a very interesting and beneficial idea for large-scale image understanding tasks. Cons: - The paper is too long (13 pages + 2 pages of references). The suggested standard number of pages is 8 pages + 1 page of references. There are many parts that are unnecessary in the paper and can be summarized. Summarizing and rewording them makes the paper more consistent and easier to read: ( 1. A very long introduction about the benefits of inferring from the compressed images and examples. 2. A large part of the intro and Related work can get merged. 3. Experimental setup part is long but not well-explained and is not self-contained particularly for the evaluation metrics. “Please briefly explain what MS-SSIM, SSIM, and PSNR stand for”. There is a reference to the Agustsson et al 2017 paper “scalar quantization”, which is not well explained in the paper. It is better to remove this part if it is not an important part or just briefly but clearly explain it. 4. Fig. 4 is not necessary. 4.3 contains extra information and could be summarized in a more consistent way. 5. Hyperparameters that are applied can be summarized in a small table or just explain the difference between the architectures that are used.) - There are parts of the papers which are confusing or not well-written. It is better to keep the sentences short and consistent: E.g: subsection 3.2, page 5: “To adapt the ResNet … where k is the number of … layers of the network” can be changed to 3 shorter sentences, which is easier to follow. There are some typos: e.g: part 3.1, fever ---> fewer, - As it is mentioned in the paper, solving a Vision problem directly from a compressed image, is not a novel method (e.g: DCT coefficients were used for both vision and audio data to solve a task without any decompression). However, applying a deep representation for the compression and then directly solving a vision task (classification and segmentation) can be considered as a novel idea. - In the last part of the paper, both compression and classification parts are jointly trained, and it is empirically presented that both results improved by jointly training them. However, to me, it is not clear if the trained compression model on this specific dataset and for the task of classification can work well for other datasets or other tasks. The experimental setup and the figures are not well explained and well written.
iclr_2018_rytstxWAW
Published as a conference paper at ICLR 2018 FASTGCN: FAST LEARNING WITH GRAPH CONVOLUTIONAL NETWORKS VIA IMPORTANCE SAMPLING The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. This model, however, was originally designed to be learned with the presence of both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work, FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.
Update: I have read the rebuttal and the revised manuscript. Additionally, I had a brief discussion with the authors regarding some aspects of their probabilistic framework. I think that batch training of GCN is an important problem, and the authors have proposed an interesting solution to it. I appreciate all the work the authors put into the revision. In this regard, I have updated my rating. However, I am not satisfied with how the probabilistic problem formulation was presented in the paper. I would appreciate it if the authors were more upfront about the challenges of the problem they formulated and the limitations of their results. I briefly summarize the key missing points below, although I acknowledge that a solution to such questions is out of the scope of this work.
1. Sampling of graph nodes from P is not iid. Every subsequent node cannot be equal to any of the previous nodes. Hence, the distribution changes and subsequent nodes are dependent on previous ones. However, exchangeability could be a reasonable assumption to make, as order (in the joint distribution) does not matter for simple choices of P. Example: let V be {1,2,3} and P a uniform distribution. The first node can be any of {1,2,3}; the second node given the first (suppose the first node is '2') is restricted to {1,3}. There is clearly a dependency and a change of distribution.
2. Theorem 1 is proven under the assumption that it is possible to sample from P and utilize a Monte Carlo type argument. However, in practice, sampling is done from a uniform distribution over the observed samples. Also, the authors suggest that V may be infinite. Recall that for Monte Carlo type approaches to work, the sampling distribution ought to contain the support of the true distribution. Observed samples (even as the sample size goes to infinity) will never be able to cover an infinite V. Hence, Theorem 1 will never be applicable (for the purposes of evaluating the population loss). Also note that this is different from the more classical case of continuous distributions, where sampling from a Gaussian, for instance, will cover the domain of the true distribution. In the probabilistic framework defined by the authors, it is impossible to cover the domain of P unless the whole of V is observed.
----------------------------------------------------------------------
This work addresses a major shortcoming of the recently popularized GCN. That is, when the data is equipped with a graph structure, classic SGD-based methods are not straightforward to apply. Hence it is not clear how to deal with large datasets (e.g., Reddit). The proposed approach uses an adjacency-based importance sampling distribution to select only a subset of nodes at each GCN layer. The resulting loss estimate is shown to be consistent, and its gradient is used to perform the weight updates. The proposed approach is interesting and the direction of the work is important given the recent popularity of GCN. Nonetheless, I have two major questions and would be happy to revisit my score if at least one is addressed.
Theory: SGD requires an unbiased estimate of the gradient to converge to the global optimum in the convex loss case. Here, the loss estimate is shown to be consistent, but it is not guaranteed to be unbiased, and nothing is said about the gradient in Algorithm 1. Could you please provide some intuition about the gradient estimate? I might not be familiar with some relevant results, but it appears to me that Algorithm 1 will not converge to the same solution as full-data GD would.
Practice: Per-batch timings in Fig. 3 are not enough to argue that the method is faster, as it might have poor convergence properties overall. Could you please show the train/test accuracies against training time for all compared methods?
Some other concerns and questions:
- It is not quite clear what P is. You defined it as a distribution over vertices of some (potentially infinite) population graph. Later on, sampling from P becomes equivalent to uniform sampling over the observed nodes. I don't see how you can define P over anything outside of the training nodes (without defining a loss on the unobserved data), as then you would be sampling from a distribution with 0 mass on parts of the support of P, and this would break the Monte Carlo assumptions.
- Weights disappeared in the majority of the analysis. Could you please make the presentation more consistent?
- a(v,u) in Eq. 2 and A(v,u) in Eq. 5 are not defined. Do they both correspond to entries of the (normalized) adjacency?
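For reference, a rough numpy sketch of the importance-sampled propagation under discussion: sample t nodes from a distribution q and rescale each sampled contribution by 1/(t q(u)) so that the Monte Carlo estimate matches the full layer in expectation. Taking q proportional to the squared column norms of the normalized adjacency is my reading of the importance-sampling variant and should be treated as an assumption.

    import numpy as np

    def sampled_gcn_layer(A_hat, H, W, t, rng=np.random):
        # A_hat: (n, n) normalized adjacency, H: (n, d) features, W: (d, d_out) weights
        q = (A_hat ** 2).sum(axis=0)
        q = q / q.sum()                                   # sampling distribution over nodes
        idx = rng.choice(A_hat.shape[0], size=t, replace=True, p=q)
        # Monte Carlo estimate of A_hat @ H @ W using only the sampled nodes
        Z = (A_hat[:, idx] / (t * q[idx])) @ (H[idx] @ W)
        return np.maximum(Z, 0.0)                         # ReLU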
iclr_2018_SkFqf0lAZ
Published as a conference paper at ICLR 2018 MEMORY ARCHITECTURES IN RECURRENT NEURAL NETWORK LANGUAGE MODELS We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held out perplexity. We also propose a generalization to existing continuous stack models (Joulin & Mikolov, 2015;Grefenstette et al., 2015) to allow a variable number of pop operations more naturally that further improves performance. We further evaluate these language models in terms of their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state of the art results using memory augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories claiming a context-free backbone for natural language.
The authors propose to compare three different memory architectures for recurrent neural network language models: a vanilla LSTM, random access based on attention, and a continuous stack. The second main contribution of the paper is to propose an extension of continuous stacks which allows multiple pop operations to be performed at a single time step. The way to do that is to use a mechanism similar to the adaptive computation time from Graves (2016): all the pop operations are performed, and the final state of the continuous stack is a weighted average of all the intermediate states. The different memory models are evaluated on two standard language modeling tasks, PTB and WikiText-2, as well as on the verb number prediction dataset from Linzen et al. (2016). On the language modeling tasks, the stack model performs slightly better than the attention models (0-2 ppl points), which perform slightly better than the plain LSTM (2-3 ppl). On the verb number prediction task, the stack model tends to outperform the two other models (which get similar results) for hard examples (2 or more attractors).
Overall, I enjoyed reading this paper: it is clearly written and contains interesting analysis of different memory architectures for recurrent neural networks. As far as I know, it is the first thorough comparison of the different memory architectures for recurrent neural networks applied to language modeling. The experiments on the Linzen et al. (2016) dataset are also interesting, as they show that for hard examples the different models do have different behavior (even when the differences are not noticeable on the whole test set).
One small negative aspect of the paper is that the substance might be a bit limited. The only technical contribution is to merge the ideas from the continuous stack with the adaptive computation time to obtain the "multi-pop" model. In the experimental section, which I believe is the main contribution of the paper, I would have liked to see more "in-depth" analysis of the different models. I found the experiments performed on the Linzen et al. (2016) dataset (Table 2) to be quite interesting, and would have liked more analysis like that. On the other hand, I found Figures 2 and 3 not very informative as they are (I would like to see more). For example, from Fig. 2, it would be interesting to get a better understanding of what errors are made by the different models (instead of just the distribution).
Finally, I have a few questions for the authors:
- In Figure 1, shouldn't there be an arrow from h_{t-1} to m_t instead of x_{t-1} to m_t?
- What are the equations to update the stack? I assume something similar to Joulin & Mikolov (2015)?
- Do you have any ideas why there is a sharp jump between 4 and 5 attractors (Table 2)?
- Why are there no "pop" operations in Figures 3 and 4?
pros/cons:
+ clear and easy to read
+ interesting analysis
- not very original
Overall, while not groundbreaking, this is a serious paper with interesting analysis. Hence, I am weakly recommending acceptance of this paper.
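In case it helps other readers, the kind of continuous-stack update the questions above refer to looks roughly like this (my reading of a Joulin & Mikolov style stack, with assumed action weights a_push, a_pop, a_noop summing to one; not necessarily the exact equations used in this paper):

    import numpy as np

    def stack_update(s, v, a_push, a_pop, a_noop):
        # s: (depth,) current continuous stack, v: scalar value that would be pushed
        pushed = np.concatenate(([v], s[:-1]))   # shift everything down, put v on top
        popped = np.concatenate((s[1:], [0.0]))  # shift everything up, discard the top
        # the new stack is a convex combination of the three discrete actions
        return a_push * pushed + a_pop * popped + a_noop * s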
iclr_2018_HypkN9yRW
We present a generic dynamic architecture that employs a problem-specific differentiable forking mechanism to leverage discrete logical information about the problem data structure. We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters. Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching. While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher-level logical operations; our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach through DDRstack, an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters.
Summary: The paper presents a generic dynamic architecture for CLEVR VQA and Reverse Polish notation problems. Experiments on CLEVR show that the proposed model, DDRprog, outperforms existing models, but it requires explicit program supervision. The proposed architecture for RPN, called DDRstack, outperforms an LSTM baseline.
Strengths:
- For the CLEVR VQA task, the proposed model outperforms the state of the art with significantly fewer parameters.
- For the RPN task, the proposed model outperforms the baseline LSTM model by a large margin.
Weaknesses:
- The paper doesn't describe the model clearly. After reading the paper, it's not clear to me what the components of the model are, what each of them takes as input and produces as output, what these modules do, and how they are combined. I would recommend restructuring the paper to clearly mention each of the components, describe them individually, and then explain how they are combined for both cases, DDRprog and DDRstack.
- Is the "fork" module the main contribution of the paper? If so, at least this should be described in detail. So, if no fork module is required for a question, is the model architecture effectively the same as IEP?
- Machine accuracy is already on par with human accuracy on CLEVR and very close to 100%. Why is this problem still important?
- Given that the state-of-the-art performance on the CLEVR dataset is already very high (<5% error) and the performance numbers of the proposed model are not very far from the previous models, it is very important to report the variance in accuracies along with the mean accuracies, to determine whether the performance of the proposed model is statistically significantly better than the previous models or not.
- In Figure 4, why are the LSTM32/128 curves different for Length 10 and Length 30 up to subproblem index 10? They are both trained on the same training data, only the test data is of a different length, and ideally both models should achieve similar accuracy for the first 10 subproblems (the same trend as DDRstack).
- Why is DDRstack not compared to StackRNN?
- Can the authors provide a training-time comparison of their model and the other/baseline models? That is more important than the number of epochs required for training.
- There are only 3 curves (instead of 4) in Figure 3.
- In a number of places, the authors refer to left and right program branches. What are they? These names have not been defined formally in the paper.
Overall: I think the research work in the paper is interesting and significant, but given the current presentation and level of detail in the paper, I don't think it will be helpful for the research community. By properly restructuring the paper and adding more details, it could be turned into a solid submission.
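For readers unfamiliar with the RPN task, the classical stack-based evaluation that the DDRstack experiment builds its stack assumption around is the standard one (plain Python, not the neural model):

    def eval_rpn(tokens):
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        stack = []
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()  # an operator consumes the top two operands
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        return stack[-1]

    # eval_rpn("3 4 + 2 *".split()) evaluates (3 + 4) * 2 and returns 14.0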
iclr_2018_HyHmGyZCZ
Distributional Semantics Models (DSM) derive word space from linguistic items in context. Meaning is obtained by defining a distance measure between vectors corresponding to lexical entities. Such vectors present several problems. This work concentrates on the quality of word embeddings, the improvement of word embedding vectors, and the applicability of a novel similarity metric used 'on top' of the word embeddings. In this paper we provide a comparison between two methods for post-process improvements to baseline DSM vectors. The counter-fitting method, which enforces antonymy and synonymy constraints on the Paragram vector space representations, recently showed improvement in the vectors' capability for judging semantic similarity. The second method is our novel RESM method applied to GloVe baseline vectors. By applying the hubness reduction method, implementing relational knowledge into the model by retrofitting synonyms, and providing a new ranking similarity definition, RESM, that gives maximum weight to the top vector component values, we equal the results for the ESL and TOEFL sets in comparison with our calculations using the Paragram and Paragram + Counter-fitting methods. For the SIMLEX-999 gold standard, since we cannot use RESM, the results using GloVe and PPDB are significantly worse compared to Paragram. Apparently, counter-fitting corrects hubness. The Paragram or our cosine retrofitting method are state-of-the-art results for the SIMLEX-999 gold standard. They are 0.2 better for SIMLEX-999 than word2vec with sense de-conflation (which was announced to be the state-of-the-art method for less reliable gold standards). Apparently, relational knowledge and counter-fitting are more important for judging semantic similarity than sense determination for words. It should be mentioned, though, that the Paragram hyperparameters are fitted to SIMLEX-999 results. The lesson is that many corrections to word embeddings are necessary, and methods with more parameters and hyperparameters perform better.
This paper proposes a ranking-based similarity metric for distributional semantic models. The main idea is to learn "baseline" word embeddings, retrofit those and apply localized centering, and then calculate similarity using a measure called "Ranking-based Exponential Similarity Measure" (RESM), which is based on the recently proposed APSyn measure. I think the work has several important issues: 1. The work is very light on references. There is a lot of previous work on evaluating similarity in word embeddings (e.g., Hill et al., many of the papers in the RepEval workshops, etc.); specialization for similarity of word embeddings (e.g., Kiela et al., Mrksic et al., and many others); multi-sense embeddings (e.g., from Navigli's group); and the hubness problem (e.g., Dinu et al.). For the localized centering approach, Hara et al. introduced that method. None of this work is cited, which I find inexcusable.
 2. The evaluation is limited, in that the standard evaluations (e.g. SimLex would be a good one to add, as well as many others, please refer to the literature) are not used and there is no comparison to previous work. The results are also presented in a confusing way, with the current state of the art results separate from the main results of the paper. It is unclear what exactly helps, in which case, and why.
 3. There are technical issues with what is presented, with some seemingly factual errors. For example, "In this case we could apply the inversion, however it is much more convinient [sic] to take the negative of distance. Number 1 in the equation stands for the normalizing, hence the similarity is defined as follows" - the 1 does not stand for normalizing, that is the way to invert the cosine distance (put differently, cosine distance is 1-cosine similarity, which is a metric in Euclidean space due to the properties of the dot product). Another example, "are obtained using the GloVe vector, not using PPMI" - there are close relationships between what GloVe learns and PPMI, which the authors seem unaware of (see e.g. the GloVe paper and Omer Levy's work).
 4. Then there is the additional question, why should we care? The paper does not really motivate why it is important to score well on these tests: these kinds of tests are often used as ways to measure the quality of word embeddings, but in this case the main contribution is the similarity metric used *on top* of the word embeddings. In other words, what is supposed to be the take-away, and why should we care? As such, I do not recommend it for acceptance - it needs significant work before it can be accepted at a conference. Minor points: - Typo in Eq 10 - Typo on page 6 (/cite instead of \cite)
iclr_2018_SkBYYyZRZ
Searching for Activation Functions The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x · sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
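As a side note for implementers, the activation above is a one-liner; a minimal PyTorch sketch of Swish with an optionally trainable β is given below. The module name and the initialization β=1 are my own choices for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish activation f(x) = x * sigmoid(beta * x), with beta either fixed or trainable."""
    def __init__(self, beta=1.0, trainable=False):
        super().__init__()
        if trainable:
            # beta is learned jointly with the network weights
            self.beta = nn.Parameter(torch.tensor(float(beta)))
        else:
            self.register_buffer("beta", torch.tensor(float(beta)))

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

# Drop-in replacement for ReLU, e.g.:
layer = nn.Sequential(nn.Linear(128, 128), Swish(trainable=True))
```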
The authors propose a reinforcement learning based approach for finding a non-linearity by searching through combinations from a set of unary and binary operators. The best one found is termed the Swish unit: x * sigmoid(b*x). The properties of Swish, such as allowing information flow on the negative side and its linear nature on the positive side, have previously been shown to be important for better optimization by other functions like LReLU, PReLU, etc. As pointed out by the authors themselves, for b=1 Swish is equivalent to SiL proposed in Elfwing et al. (2017). In terms of experimental validation, in most cases the increase in performance when using Swish compared to other models is a very small fraction. Again, the authors do state that "our results may not be directly comparable to the results in the corresponding works due to differences in our training steps." Based on Figure 6, the authors claim that the non-monotonic bump of Swish on the negative side is a very important aspect. More explanation is required on why it is important and how it helps optimization. The distribution of the learned b in Swish for different layers of a network would be interesting to observe.
iclr_2018_r1Oen--RW
Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step, adding a mean shift to the input data, to show that a transformation with no effect on the model can cause numerous methods to attribute incorrectly. We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy an input invariance property are unreliable and can lead to misleading and inaccurate attribution.
Saliency methods are effective tools for interpreting the computation performed by DNNs, but evaluating the quality of interpretations given by saliency methods is often largely heuristic. Previous work has tried to address this shortcoming by proposing that saliency methods should satisfy "implementation invariance", which says that models that compute the same function should be assigned the same interpretation. This paper builds on this work by proposing and studying "input invariance", a specific kind of implementation invariance between two DNNs that compute identical functions but where the input is preprocessed in different ways. Then, they examine whether a number of existing saliency methods satisfy this property. The property of "implementation invariance" proposed in prior work seems poorly motivated, since the entire point of interpretations is that they should explain the computation performed by a specific network. Even if two DNNs compute the same function, they may do so using very different computations, in which case it seems natural that their interpretations should be different. Nevertheless, I can believe that the narrower property of input invariance should hold for saliency methods. A much more important concern I have is that the proposed input invariance property is not well motivated. A standard preprocessing step for DNNs is to normalize the training data, for example, by subtracting the mean and dividing by the standard deviation. Similarly, for image data, pixel values are typically normalized to [0,1]. Assuming inputs are transformed in this way, the input invariance property (for mean shift) is always trivially satisfied. The paper does not justify why we should consider networks where the training data is not normalized in such a way. Even if the input is not normalized, the failures they find in existing saliency methods are typically rather trivial. For example, for the gradient times input method, they are simply noting that the interpretation is translated by the gradient times the mean shift. The paper does not discuss why this shift matters. It is not at all clear to me that the quality of the interpretation is adversely affected by these shifts. I believe the notion that saliency methods should be invariant to input transformations may be promising, but more interesting transformations must be considered -- as far as I can tell, the property of invariance to linear transformations of the input does not provide any interesting insight into the correctness of saliency methods.
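To make the mean-shift observation concrete, here is a small numpy illustration of my own (a toy linear model, not the paper's networks): two models that compute identical outputs, where the second absorbs a constant input shift into its bias, have identical gradient saliency, but gradient times input differs by exactly the gradient times the shift.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3           # weights and bias of a toy linear "network"
m = 2.0 * np.ones(5)                     # constant mean shift added to every input

x = rng.normal(size=5)
x_shifted = x + m

# Network 2 sees shifted inputs but compensates in its bias, so outputs are identical.
out1 = w @ x + b
out2 = w @ x_shifted + (b - w @ m)
assert np.isclose(out1, out2)

# Plain gradient saliency is input-invariant: both gradients equal w.
grad1, grad2 = w, w

# Gradient * input is not: the attributions differ by exactly w * m.
attr1 = grad1 * x
attr2 = grad2 * x_shifted
print(np.allclose(attr2 - attr1, w * m))  # True
```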
iclr_2018_SySisz-CW
Generative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but reverse, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures.
This paper examines the nature of convolutional filters in the encoder and a decoder of a VAE, and a generator and a discriminator of a GAN. The authors treat the inputs (X) and outputs (Y) of each filter throughout each step of the convolving process as a time series, which allows them to do a Discrete Time Fourier Transform analysis of the resulting sequences. By comparing the power spectral density of the input and the output, they get a Spectral Dependency Ratio (SDR) that characterises a filter as spectrally independent (neutral), correlating (amplifies certain frequencies), or anti-correlating (dampens frequencies). This analysis is performed in the context of the Independence of Cause and Mechanism (ICM) framework. The authors claim that their analysis demonstrates a different characterisation of the inference/discriminator and generative networks in VAE and GAN, whereby the former are anti-causal and the latter are causal, in line with the ICM framework. They also claim that this analysis can be used to improve the performance of the models. Pros: -- SDR characterisation of the convolutional filters is interesting -- The authors show that filters with different characteristics are responsible for different aspects of image modelling Cons: -- The authors do not actually demonstrate how their analysis can be used to improve VAEs or GANs -- Their proposed SDR analysis does not actually find much difference between the generator and the discriminator of the GAN -- The clarity of the writing could be improved (e.g. the discussion in section 3.1 seems inaccurate in the current form). Grammatical and spelling mistakes are frequent. More background information could be helpful in section 2.2. All figures (but in particular Figure 3) need more informative captions -- The authors talk a lot about disentangling in the introduction, but this does not seem to be followed up in the rest of the text. Furthermore, they are missing a reference to beta-VAE (Higgins et al., 2017) when discussing VAE-based approaches to disentangled factor learning. In summary, the paper is not ready for publication in its current form. The authors are advised to use the insights from their proposed SDR analysis to demonstrate quantifiable improvements in the VAEs/GANs.
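For readers unfamiliar with the spectral machinery, a rough numpy sketch of an SDR-style quantity for a single 1-D filter is given below. It follows the Spectral Independence Criterion idea (a ratio near 1 is neutral, above 1 correlating, below 1 anti-correlating); the exact definition used in the paper may differ, so treat this only as an illustration.

```python
import numpy as np

def sdr(x, kernel, n_fft=512):
    """Spectral-dependency-style ratio between an input sequence and a convolution kernel.

    Assumes the SIC-style definition: the input power spectrum S_x and the filter's squared
    frequency response |H|^2 should be uncorrelated, i.e. <S_x |H|^2> / (<S_x><|H|^2>) ~ 1.
    """
    S_x = np.abs(np.fft.rfft(x, n_fft)) ** 2        # periodogram of the input
    H2 = np.abs(np.fft.rfft(kernel, n_fft)) ** 2    # squared magnitude response of the filter
    return np.mean(S_x * H2) / (np.mean(S_x) * np.mean(H2))

# Example: a white-noise input has a roughly flat spectrum, so the ratio is close to 1 (neutral).
rng = np.random.default_rng(0)
print(sdr(rng.normal(size=4096), np.array([0.25, 0.5, 0.25])))
```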
iclr_2018_H1zriGeCZ
Hyperparameter Optimization: A Spectral Approach We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters. The algorithm, an iterative application of compressed sensing techniques for orthogonal polynomials, requires only uniform sampling of the hyperparameters and is thus easily parallelizable. Experiments for training deep neural networks on CIFAR-10 show that compared to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds significantly improved solutions, in some cases better than what is attainable by hand-tuning. In terms of overall running time (i.e., time required to sample various settings of hyperparameters plus additional computation time), we are at least an order of magnitude faster than Hyperband and Bayesian Optimization. We also outperform Random Search 8×. Our method is inspired by provably-efficient algorithms for learning decision trees using the discrete Fourier transform. We obtain improved sample-complexity bounds for learning decision trees while matching state-of-the-art bounds on running time (polynomial and quasipolynomial, respectively).
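A loose illustration of the spectral idea (my own toy reconstruction, not the authors' Harmonica code): sample hyperparameter assignments uniformly from {-1,1}^n, expand them in low-degree monomial (parity) features, and use the Lasso to recover a sparse polynomial surrogate whose large coefficients identify the important hyperparameters.

```python
import itertools
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, samples, degree = 10, 200, 2

# Uniformly sampled hyperparameter assignments in {-1, +1}^n.
X = rng.choice([-1.0, 1.0], size=(samples, n))

# Stand-in objective: in practice f(x) would be the validation loss of a model trained with
# hyperparameters x. Here, a synthetic sparse low-degree polynomial plus noise.
def f(x):
    return 1.5 * x[0] - 2.0 * x[1] * x[3] + 0.8 * x[5] + 0.1 * rng.normal()

y = np.array([f(x) for x in X])

# Monomial (parity) features chi_S(x) = prod_{i in S} x_i for |S| <= degree.
subsets = [s for d in range(1, degree + 1) for s in itertools.combinations(range(n), d)]
Phi = np.array([[np.prod(x[list(s)]) for s in subsets] for x in X])

# Sparse recovery of the polynomial's Fourier coefficients.
coef = Lasso(alpha=0.05).fit(Phi, y).coef_
top = np.argsort(-np.abs(coef))[:5]
for idx in top:
    print(subsets[idx], round(coef[idx], 2))   # recovers the informative variables/pairs
```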
The paper is about hyperparameter optimization, which is an important problem in deep learning due to the large number of hyperparameters in contemporary model architectures and optimization algorithms. At a high-level, hyperparameter optimization (for the challenging case of discrete variables) can be seen as a black-box optimization problem where we have only access to a function evaluation oracle (but no gradients etc.). In the entirely unstructured case, there are strong lower bounds with an exponential dependence on the number of hyperparameters. In order to sidestep these impossibility results, the current paper assumes structure in the unknown function mapping hyperparameters to classification accuracy. In particular, the authors assume that the function admits a representation as a sparse and low-degree polynomial. While the authors do not empirically validate whether this is a good model of the unknown function, it appears to be a reasonable assumption (the authors *do* empirically validate their overall approach). Based on the sparse and low-degree assumption, the paper introduces a new algorithm (called Harmonica) for hyperparameter optimization. The main idea is to leverage results from compressed sensing in order to recover the sparse and low-degree function from a small number of measurements (i.e., function evaluations). The authors derive relevant sample complexity results for their approach. Moreover, the method also yields new algorithms for learning decision trees. In addition to the theoretical results , the authors conduct a detailed study of their algorithm on CIFAR10. They compare to relevant recent work in hyperparameter optimization (Bayesian optimization, random search, bandit algorithms) and find that their method significantly improves over prior work. The best parameters found by Harmonica improve over the hand-tuned results for their "base architecture" (ResNets). Overall, I find the main idea of the paper very interesting and well executed, both on the theoretical and empirical side. Hence I strongly recommend accepting this paper. Small comments and questions: 1. It would be interesting to see how close the hyperparameter function is to a low-degree and sparse polynomial (e.g., MSE of the best fit). 2. A comparison without dummy parameters would be interesting to investigate the performance differences between the algorithms in a lower-dimensional problem. 3. The current paper does not mention the related work on hyperparameter optimization using reinforcement learning techniques (e.g., Zoph & Le, ICLR 2017). While it might be hard to compare to this approach directly in experiments, it would still be good to mention this work and discuss how it relates to the current paper. 4. Did the authors tune the hyperparameters directly using the CIFAR10 test accuracy? Would it make sense to use a slightly smaller training set and to hold out say 5k images for hyperparameter evaluation before making the final accuracy evaluation on the test set? The current approach could be prone to overfitting. 5. While random search does not explicitly exploit any structure in the unknown function, it can still implicitly utilize smoothness or other benign properties of the hyperparameter space. It might be worth adding this in the discussion of the related work. 6. Algorithm 1: Why is the argmin for g_i (what does the index i refer to)? 7. Why does PSR truncate the indices in alpha? 
8. At least in "standard" compressed sensing, the Lasso also has recovery guarantees without truncation (and sometimes works better empirically without it). 9. Definition 3: Should C be a class of functions mapping {-1, 1}^n to R? (Note the superscript.) 10. On Page 3 we assume that K = 1, but Theorem 6 still maintains a dependence on K. It might be cleaner to either treat the general K case throughout, or state the theorem for K = 1. 11. On CIFAR10, the best hyperparameters do not improve over the state of the art with other models (e.g., a wide ResNet). It could be interesting to run Harmonica in the regime where it might improve over the best known models for CIFAR10. 12. Similarly, it would be interesting to see whether the hyperparameters identified by Harmonica carry over to give better performance on ImageNet. The authors claim in C.3 that the hyperparameters identified by Harmonica generalize from small networks to large networks. Testing whether the hyperparameters also generalize from a smaller to a larger dataset would be relevant as well.
iclr_2018_Hyig0zb0Z
In this paper we introduce a new speech recognition system, leveraging a simple letter-based ConvNet acoustic model. The acoustic model requires only audio transcription for training: no alignment annotations, nor any forced alignment step, is needed. At inference, our decoder takes only a word list and a language model, and is fed with letter scores from the acoustic model: no phonetic word lexicon is needed. Key ingredients for the acoustic model are Gated Linear Units and high dropout. We show near state-of-the-art results in word error rate on the LibriSpeech corpus (Panayotov et al., 2015) with MFSC features, both on the CLEAN and OTHER configurations.
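As a point of reference for the Gated Linear Units mentioned above, a minimal PyTorch sketch of one gated 1-D convolution block of the kind used in such acoustic models follows; the layer sizes and dropout rate are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    """One gated convolution block: conv -> GLU (half the channels gate the other half) -> dropout."""
    def __init__(self, in_ch, out_ch, kernel_size, dropout=0.5):
        super().__init__()
        # Produce 2*out_ch channels; the GLU splits them into a linear part and a sigmoid gate.
        self.conv = nn.Conv1d(in_ch, 2 * out_ch, kernel_size, padding=kernel_size // 2)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                 # x: (batch, channels, time)
        a, g = self.conv(x).chunk(2, dim=1)
        return self.drop(a * torch.sigmoid(g))

# e.g. mapping 40 mel-filterbank channels to a wider representation before the letter scores
block = GatedConv1d(40, 200, kernel_size=13)
scores = block(torch.randn(8, 40, 500))   # (8, 200, 500)
```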
This paper applies gated convolutional neural networks [1] to speech recognition, using the training criterion ASG [2]. It is fair to say that this paper contains almost no novelty. This paper starts by bashing the complexity of conventional HMM systems, and states the benefits of their approach. However, all of the other grapheme-based end-to-end systems enjoy the same benefit as CTC and ASG. Prior work along this line includes [3, 4, 5, 6, 7]. Using MFSC, more commonly known as log mel filter bank outputs, has been pretty common since [8]. Having a separate subsection (2.1) discussing this seems unnecessary. Arguments in section 2.3 are weak because, again, all other grapheme-based end-to-end systems have the same benefit as CTC and ASG. It is unclear why discriminative training, such as MMI, sMBR, and lattice-free MMI, is mentioned in section 2.3. Discriminative training was not invented to overcome the lack of manual segmentations, and is equally applicable to the case where we have manual segmentations. The authors argue that ASG is better than CTC in section 2.3.1 because it does not use the blank symbol and can be faster during decoding. However, once the transition scores are introduced in ASG, the search space becomes quadratic in the number of characters, while CTC is still linear in the number of characters. In addition, ASG requires additional forward-backward computation for computing the partition function (second term in eq 3). There is no reason to believe that ASG can be faster than CTC in both training and decoding. The connection between ASG, CTC, and marginal log loss has been addressed in [9], and it does make sense to train ASG with the partition function. Otherwise, the objective won't be a proper probability distribution. The citation style in section 2.4 seems off. Also see [4] for a great description of how beam search is done in CTC. Details about training, such as the optimizer, step size, and batch size, are missing. Does no batching (in section 3.2) mean a batch size of one utterance? In the last paragraph of section 3.2, why is there a huge difference in real-time factors between the clean and other set? Something is wrong unless the authors are using different beam widths in the two settings. The paper can be significantly improved if the authors compare the performance and decoding speed against CTC with the same gated convnet. It would be even better to compare CTC and ASG to seq2seq-based models with the same gated convnet. Similar experiments should be conducted on switchboard and wsj because librispeech is several times larger than switchboard and wsj. None of the comparisons in table 4 is really meaningful, because none of the other systems have as many parameters as 19 layers of convolution. Why does CTC fail when trained without the blanks? Is there a way to fix it besides using ASG? It is also unclear why speaker-adaptive training is not needed. At which layer do the features become speaker invariant? Can the system improve further if speaker-adaptive features are used instead of log mels? This paper would be much stronger if the authors could include these experiments and analyses.
[1] R Collobert, C Puhrsch, G Synnaeve, Wav2letter: an end-to-end convnet-based speech recognition system, 2016 [2] Y Dauphin, A Fan, M Auli, D Grangier, Language modeling with gated convolutional nets, 2017 [3] A Graves and N Jaitly, Towards End-to-End Speech Recognition with Recurrent Neural Networks, 2014 [4] A Maas, Z Xie, D Jurafsky, A Ng, Lexicon-Free Conversational Speech Recognition with Neural Networks, 2015 [5] Y Miao, M Gowayyed, F Metze, EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding, 2015 [6] D Bahdanau, J Chorowski, D Serdyuk, P Brakel, Y Bengio, End-to-end attention-based large vocabulary speech recognition, 2016 [7] W Chan, N Jaitly, Q Le, O Vinyals, Listen, attend and spell, 2015 [8] A Graves, A Mohamed, G Hinton, Speech recognition with deep recurrent neural networks, 2013 [9] H Tang, L Lu, L Kong, K Gimpel, K Livescu, C Dyer, N Smith, S Renals, End-to-End Neural Segmental Models for Speech Recognition, 2017
iclr_2018_H1Yp-j1Cb
An Online Learning Approach to Generative Adversarial Networks We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning, we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training.
The paper applies tools from online learning to GANs. In the case of a shallow discriminator, the authors proved some results on the convergence of their proposed algorithm (an adaptation of FTRL) in GAN games, by leveraging the fact that when the D update is small, the problem setup meets the ideal conditions for no-regret algorithms. The paper then takes the intuition from the semi-shallow case and proposes a heuristic training procedure for the deep GAN game. Overall the paper is very well written. The theory is significant to the GAN literature, probably less so to the online learning community. In practice, with a deep D, trained by single gradient update steps for G and D instead of the "argmin" in Algorithm 1, the assumptions of the theory break. This is OK as long as sufficient experimental results verify that the intuitions suggested by the theory still qualitatively hold true. However, this is where I have issues with the work: 1) In all quantitative results, Chekhov GAN does not significantly beat unrolled GAN. Unrolled GAN looks at historical D's through unrolled optimization, but not the history of G. So this lack of significant difference in results raises the question of whether any improvement of Chekhov GAN is coming from the online learning perspective for D and G, or simply due to the fact that it considers historical D models (which could be motivated by something other than the online learning theory). 2) The mixture GAN approach suggested in Arora et al. (2017) is very related to this work, as acknowledged in Sec. 2.1, but no in-depth analysis is carried out. I suggest the authors either discuss why Chekhov GAN is obviously superior and hence no experiments are needed, or compare them experimentally. 3) In the current state, it is hard to place the quantitative results in context with other common methods in the recent literature such as WGAN with gradient penalty. I suggest the authors either report some results in terms of inception scores on cifar10 with architectures similar to those used in other methods for comparison. Alternatively, please show WGAN-GP and/or other method results in at least one or two experiments using the evaluation methods in the paper. In summary, almost all the experiments in the paper are trying to establish improvement over the basic GAN, which would be OK if the gap between theory and practice were small. But in this case, it is not. So it is not entirely convincing that the practical Algorithm 2 works better for the reason suggested by the theory, nor that it drastically improves practical results such that it could become the standard technique in the literature.
iclr_2018_r1RQdCg0W
We present Merged-Averaged Classifiers via Hashing (MACH) for K-classification with large K. Compared to traditional one-vs-all classifiers that require O(Kd) memory and inference cost, MACH only needs O(d log K) memory while only requiring O(K log K + d log K) operations for inference. MACH is the first generic K-classification algorithm, with provable theoretical guarantees, which requires O(log K) memory without any assumption on the relationship between classes. MACH uses universal hashing to reduce classification with a large number of classes to a few independent classification tasks with a small (constant) number of classes. We provide theoretical quantification of the accuracy-memory tradeoff by showing the first connection between extreme classification and heavy hitters. With MACH we can train on the ODP dataset with 100,000 classes and 400,000 features on a single Titan X GPU (12GB), with a classification accuracy of 19.28%, which is the best-reported accuracy on this dataset. Before this work, the best performing baseline was a one-vs-all classifier that requires 40 billion parameters (160 GB model size) and achieves 9% accuracy. In contrast, MACH can achieve 9% accuracy with a 480x reduction in model size (a mere 0.3GB). With MACH, we also demonstrate complete training of the fine-grained ImageNet dataset (compressed size 104GB), with 21,000 classes, on a single GPU.
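A rough sketch of the hashing reduction (an illustration under my own assumptions, not the authors' implementation): each of R universal hash functions maps the K classes into B buckets, R small B-way classifiers are trained on the hashed labels, and at inference a class is scored by summing the probabilities its buckets receive.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, B, R, d, n = 100, 16, 8, 20, 5000   # classes, buckets, repetitions, features, samples

# Toy data with K classes (stand-in for the real extreme-classification dataset).
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(K, d))
y = (X @ W_true.T).argmax(axis=1)

# R independent 2-universal hash functions h_r: {0..K-1} -> {0..B-1}.
p = 2_147_483_647
a, b = rng.integers(1, p, size=R), rng.integers(0, p, size=R)
def h(r, label):
    return (a[r] * label + b[r]) % p % B

# Train R small B-way classifiers on the hashed (meta-class) labels.
models = [LogisticRegression(max_iter=200).fit(X, h(r, y)) for r in range(R)]

# Inference: score class k by summing the probability of its bucket under each meta-classifier.
def predict(x):
    scores = np.zeros(K)
    for r, m in enumerate(models):
        p_full = np.zeros(B)
        p_full[m.classes_] = m.predict_proba(x.reshape(1, -1))[0]   # align buckets with classes_
        scores += p_full[[h(r, k) for k in range(K)]]
    return scores.argmax()

print(predict(X[0]), y[0])
```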
Thanks to the authors for their feedback. ============================== The paper presents a classification scheme for problems involving a large number of classes in the multi-class setting. This is related to the theme of extreme classification, but the setting is restricted to multi-class classification instead of multi-label classification. The training process involves data transformation using R hash functions, and then learning R classifiers. During prediction, the probability of a test instance belonging to a class is given by the sum of the probabilities assigned by the R meta-classifiers to the meta-class in which the given class label falls. The paper demonstrates better results on the ODP and Imagenet-21K datasets compared to LOMTree, RecallTree and OAA. The following concerns regarding the paper don't seem to be adequately addressed: - The paper seems to propose a method in which two-step trees are constructed based on random binning of labels, such that the first level has B nodes. It is not intuitively clear why such a method could be better in terms of prediction accuracy than OAA. The authors mention algorithms for training and prediction, and go on to mention that the method performs better than OAA. Also, please refer to point 2 below. - The paper repeatedly mentions that OAA has O(Kd) storage and prediction complexity. This is however not entirely true due to the sparsity of the training data and the model. These statements seem quite misleading, especially in the context of text datasets such as ODP. The authors are requested to check the papers [1] and [2], in which it is shown that OAA can perform surprisingly well. Also, exploiting the sparsity in the data/models, actual model sizes for WikiLSHTC-325K from [3] can be reduced from around 900GB to less than 10GB with weight pruning and sparsity-inducing regularizers. It is not clear if the 160GB model size reported for ODP took the above suggestions into consideration, and which kind of regularization was used. Was the solver from Vowpal Wabbit used, or were packages such as Liblinear used for reporting the OAA results? - Lack of empirical comparison - The paper lacks empirical comparisons, especially on the large-scale multi-class LSHTC-1/2/3 datasets [4], on which many approaches have been proposed. For a fair comparison, the proposed method must be compared against these datasets. It would be important to clarify if the method can be used on multi-label datasets or not; if so, it needs to be evaluated on the XML datasets [3]. [1] PPDSparse - http://www.kdd.org/kdd2017/papers/view/a-parallel-and-primal-dual-sparse-method-for-extreme-classification [2] DiSMEC - https://arxiv.org/abs/1609.02521 [3] http://manikvarma.org/downloads/XC/XMLRepository.html [4] http://lshtc.iit.demokritos.gr/LSHTC2_CFP
iclr_2018_Bki4EfWCb
Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity of the recognition network to generalize inference over all datapoints. We analyze approximate inference in variational autoencoders in terms of these factors. We find that suboptimal inference is often due to amortizing inference rather than the limited complexity of the approximating distribution. We show that this is due partly to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
======= Update: The new version addresses some of my concerns. I think this paper is still pretty borderline, but I increased my rating to a 6. ======= This article examines the two sources of loose bounds in variational autoencoders, which the authors term “approximation error” (slack due to using a limited variational family) and “amortization error” (slack due to the inference network not finding the optimal member of that family). The existence of amortization error is often ignored in the literature, but (as the authors point out) it is not negligible. It has been pointed out before in various ways, however: * Hjelm et al. (2015; https://arxiv.org/pdf/1511.06382.pdf) observe it for directed belief networks (admittedly a different model class). * The ladder VAE paper by Sonderby et al. (2016, https://arxiv.org/pdf/1602.02282.pdf) uses an architecture that reduces the work that the encoder network needs to do, without increasing the expressiveness of the variational approximation. That this approach works well implies that amortization error cannot be ignored. * The structured VAE paper by Johnson et al. (2016, https://arxiv.org/abs/1603.06277) also proposes an architecture that reduces the load on the inference network. * The very recent paper by Krishnan et al. (posted to arXiv days before the ICLR deadline, although a workshop version was presented at the NIPS AABI workshop last year; http://approximateinference.org/2016/accepted/KrishnanHoffman2016.pdf) examines amortization error as a core cause of training failures in VAEs. They also observe that the gap persists at test time, although it does not examine how it relates to approximation error. Since these earlier results existed, and approximation-amortization decomposition is fairly simple (although important!), the main contributions of this paper are the empirical studies. I will try to summarize the main novel (i.e., not present elsewhere in the literature) results of these: Section 5.1: Inference networks with FFG approximations can produce qualitatively embarrassing approximations. Section 5.2: When trained on a small dataset, training amortization error becomes negligible. I found this surprising, since it’s not at all clear why dataset size should lead to “strong inference”. It seems like a more likely explanation is that the decoder doesn’t have to work as hard to memorize the training set, so it has some extra freedom to make the true posterior look more like a FFG. Also, I think it’s a bit of an exaggeration to call a gap of 2.71 nats “much tighter” than a gap of 3.01 nats. Section 5.3: Amortization error is an important contributor to the slack in the ELBO on MNIST, and the dominant contributor on the more complicated Fashion MNIST dataset. (This is totally consistent with Krishnan et al.’s finding that eliminating amortization error gave a bigger improvement for more complex datasets than for MNIST.) Section 5.4: Using a restricted variational family causes the decoder to learn to induce posteriors that are easier to approximate with that variational family. This idea has been around for a long time (although I’m having a hard time coming up with a reference). These results are interesting, but given the empirical nature of this paper I would have liked to see results on more interesting datasets (Celeb-A, CIFAR-10, really anything but MNIST). Also, it seems as though none of the full-dataset MNIST models have been trained to convergence, which makes it a bit difficult to interpret some results. 
A few more specific comments: 2.2.1: The \cdot seems extraneous to me. 5.1: What dataset/model was this experiment done on? Figure 3: This can be inferred from the text (I think), but I had to remind myself that “IW train” and “IW test” refer only to the evaluation procedure, not the training procedure. It might be good to emphasize that you don’t train on the IWAE bound in any experiments. Table 2: It would be good to see standard errors on these numbers; they may be quite high given that they’re only evaluated on 100 examples. “We can quantitatively determine how close the posterior is to a FFG distribution by comparing the Optimal FFG bound and the Optimal Flow bound.”: Why not just compare the optimal with the AIS evaluation? If you trust the AIS estimate, then the result will be the actual KL divergence between the FFG and the true posterior.
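To make the approximation/amortization split concrete, here is a short PyTorch sketch of one way to estimate the amortization gap for a trained VAE: compare the amortized ELBO with the ELBO obtained by optimizing the FFG parameters per example while keeping the decoder fixed. The `encoder` and `decoder` callables are assumptions (returning Gaussian (mu, logvar) and Bernoulli logits, respectively); this is an illustration, not the paper's evaluation code.

```python
import torch

def elbo(x, mu, logvar, decoder, n_samples=10):
    """Monte Carlo ELBO for a fully-factorized Gaussian q(z|x) = N(mu, diag(exp(logvar)))."""
    std = (0.5 * logvar).exp()
    z = mu + std * torch.randn(n_samples, *mu.shape)          # reparameterized samples
    logits = decoder(z)                                        # Bernoulli logits for p(x|z)
    log_px_z = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, x.expand_as(logits), reduction="none").sum(-1)
    kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(-1)     # KL(q || N(0, I)), closed form
    return log_px_z.mean() - kl

def amortization_gap(x, encoder, decoder, steps=200, lr=1e-2):
    mu, logvar = encoder(x)                                    # amortized variational parameters
    amortized = elbo(x, mu, logvar, decoder)

    # Per-example refinement: optimize within the same FFG family, decoder held fixed.
    mu_opt = mu.detach().clone().requires_grad_(True)
    logvar_opt = logvar.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([mu_opt, logvar_opt], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-elbo(x, mu_opt, logvar_opt, decoder)).backward()
        opt.step()
    optimal_ffg = elbo(x, mu_opt, logvar_opt, decoder)

    # The remaining slack up to log p(x) (estimated e.g. by AIS or a large-sample IWAE bound)
    # would then be attributed to the approximation gap of the FFG family itself.
    return (optimal_ffg - amortized).item()
```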
iclr_2018_SJZsR7kCZ
Machine learning and in particular deep learning approaches have outperformed many traditional techniques in accomplishing complex tasks such as image classification (Krizhevsky et al., 2012), natural language processing, or speech recognition. Most of the state-of-the-art deep networks have complex architecture and use a vast number of parameters to reach this superior performance. Though these networks use a large number of learnable parameters, those parameters present significant redundancy (de Freitas, 2013). Therefore, it is possible to compress the network without much affecting its accuracy by eliminating those redundant and unimportant parameters. In this work, we propose a three-stage compression pipeline, which consists of pruning, weight sharing and quantization to compress deep neural networks. Our novel pruning technique combines magnitude-based pruning with dense-sparse-dense (Han et al., 2016) ideas and iteratively finds for each layer its achievable sparsity instead of selecting a single threshold for the whole network. Unlike previous works, where compression is only applied on networks performing classification, we evaluate and perform compression on networks for classification as well as semantic segmentation, which is greatly useful for understanding scenes in autonomous driving. We tested our method on LeNet-5 and FCNs, performing classification and semantic segmentation, respectively. With LeNet-5 on MNIST, pruning reduces the number of parameters by 15.3 times and the storage requirement from 1.7 MB to 0.006 MB with an accuracy loss of 0.03%. With FCN8 on Cityscapes, we decrease the number of parameters by 8 times and reduce the storage requirement from 537.47 MB to 18.23 MB with a class-wise intersection-over-union (IoU) loss of 4.93% on the validation data.
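A schematic of the per-layer sparsity search described above, as I read it (the `evaluate` function standing in for validation accuracy is hypothetical): start from a high target sparsity for a layer, prune the smallest-magnitude weights, and back off in 1% steps until the accuracy drop stays within a tolerance.

```python
import torch

def prune_layer(weight, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

def find_layer_sparsity(model, layer_name, evaluate, tolerance=0.001, start=0.99, step=0.01):
    """Largest per-layer sparsity whose accuracy drop stays within `tolerance`.

    `evaluate(model)` is assumed to return validation accuracy; the other layers
    are left untouched during the search.
    """
    baseline = evaluate(model)
    param = dict(model.named_parameters())[layer_name]
    original = param.data.clone()
    sparsity = start
    while sparsity > 0:
        param.data = prune_layer(original, sparsity)
        if baseline - evaluate(model) <= tolerance:
            break
        sparsity -= step                          # back off by 1% and retry
    param.data = original                         # restore; apply the found sparsity later
    return max(sparsity, 0.0)
```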
quality: this paper is of good quality clarity: this paper is very clear originality: this paper combines original ideas with existing approaches for pruning to obtain dramatic space reduction in NN parameters. significance: this paper seems significant. PROS - a new approach to sparsifying that considers different thresholds for each layer - a systematic, empirical method to obtain optimal sparsity levels for a given neural network on a task. - Very interesting and extensive experiments that validate the reasoning behind the described approach, with a detailed analysis of each step of the algorithm. CONS - Pruning time. Although the authors argue that the pruning algorithm is not prohibitive, I would argue that >1 month to prune LeNet-5 for MNIST is certainly daunting in many settings. It would benefit the experimental section to use another dataset than MNIST (e.g. CIFAR-10) for the image recognition experiment. - It is unclear whether this approach will always work well; for some neural nets, the currently used sparsification method (thresholding) may not perform well, leading to very little final sparsification to maintain good performance. - The search for the optimal sparsity in each level seems akin to a brute-force search. Although possibly inevitable, it would be valuable to discuss whether or not this approach can be refined. Main questions - You mention removing "unimportant and redundant weights" in the pruning step; in this case, do unimportant and redundant have the same meaning (smaller than a given threshold), or does redundancy have another meaning (e.g. (Mariet, Sra, 2016))? - Algorithm 1 finds the best sparsity for a given layer that maintains a certain accuracy. Have you tried using a binary search for the best sparsity instead of simply decreasing the sparsity by 1% at each step? If there is a simple correlation between sparsity and accuracy, that might be faster; if there isn't (which would be believable given the complexity of neural nets), it would be valuable to confirm this with an experiment. - Have you tried other pruning methods than thresholding to decide on the optimal sparsity in each layer? - Could you please report the final accuracy of both models in Table 2? Nitpicks: - paragraph break in page 4 would be helpful.
iclr_2018_BJehNfW0-
Do GANs Learn the Distribution? Some Theory and Empirics
This paper proposes a clever new test based on the birthday paradox for measuring diversity in generated samples. The main goal is to quantify mode collapse in state-of-the-art generative models. The authors also provide a specific theoretical construction that shows bidirectional GANs cannot escape specific cases of mode collapse. Using the birthday paradox test, the experiments show that GANs can learn and consistently reproduce the same examples, which are not necessarily exactly the same as training data (eg. the triplets in Figure 1). The results are interpreted to mean that mode collapse is strong in a number of state-of-the-art generative models. Bidirectional models (ALI, BiGANs) however demonstrate significantly higher diversity than DCGANs and MIX+DCGANs. Finally, the authors verify empirically the hypothesis that diversity grows linearly with the size of the discriminator. This is a very interesting area and exciting work. The main idea behind the proposed test is very insightful. The main theoretical contribution stimulates and motivates much needed further research in the area. In my opinion both contributions suffer from some significant limitations. However, given how little we know about the behavior of modern generative models, it is a good step in the right direction. 1. The biggest issue with the proposed test is that it conflates mode collapse with non-uniformity. The authors do mention this issue, but do not put much effort into evaluating its implications in practice, or parsing Theorems 1 and 2. My current understanding is that, in practice, when the birthday paradox test gives a collision I have no way of knowing whether it happened because my data distribution is modal, or because my generative model has bad diversity. Anecdotally, real-life distributions are far from uniform, so this should be a common issue. I would still use the test as a part of a suite of measurements, but I would not solely rely on it. I feel that the authors should give a more prominent disclaimer to potential users of the test. 2. Also, given how mode collapse is the main concern, it seems to me that a discussion on coverage is missing. The proposed test is a measure of diversity, not coverage, so it does not discriminate between a generator that produces all of its samples near some mode and another that draws samples from all modes of the true data distribution. As long as they yield collisions at the same rate, these two generative models are 'equally diverse'. Isn't coverage of equal importance? 3. The other main contribution of the paper is Theorem 3, which shows, via a very particular construction on the generator and encoder, that bidirectional GANs can also suffer from serious mode collapse. I welcome and am grateful for any theory in the area. This theorem might very well capture the underlying behavior of bidirectional GANs, however, being constructive, it guarantees nothing in practice. In light of this, the statement in the introduction that "encoder-decoder training objectives cannot avoid mode collapse" might need to be qualified. In particular, the current statement seems to obfuscate the understanding that training such an objective would typically not result in the construction of Theorem 3.
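For readers who want the flavor of the test, a small numpy sketch follows (my own simplification: the paper relies on human or heuristic judgments of visual near-duplicates rather than a raw distance threshold). Draw a batch of s samples, look for near-duplicate pairs, and if collisions become likely at batch size s, the birthday paradox suggests an effective support size on the order of s^2.

```python
import numpy as np

def has_collision(samples, threshold):
    """True if any pair of samples is closer than `threshold` (a crude stand-in for
    the visual duplicate check done by humans or a nearest-neighbor heuristic)."""
    diffs = samples[:, None, :] - samples[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    return bool((dists < threshold).any())

def birthday_support_estimate(sample_fn, threshold, batch_sizes=(20, 50, 100, 200, 500), trials=20):
    """Smallest batch size at which collisions appear in at least half the trials; the
    birthday paradox then suggests an effective support size of roughly that size squared."""
    for s in batch_sizes:
        hits = sum(has_collision(sample_fn(s), threshold) for _ in range(trials))
        if hits >= trials / 2:
            return s, s ** 2
    return None, None

# Toy "generator" whose support is only 1000 distinct points: collisions show up at
# batch sizes around sqrt(1000) ~ 32, exposing the limited diversity.
codebook = np.random.default_rng(0).normal(size=(1000, 16))
sample = lambda s: codebook[np.random.default_rng().integers(0, 1000, size=s)]
print(birthday_support_estimate(sample, threshold=1e-6))
```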
iclr_2018_HJ5AUm-CZ
Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.
- Good work on developing VAEs for few-shot learning. - Most of the results are qualitative and I reckon the paper was written in haste. - The rest of the comments are below: - 3.1: I got a bit confused over what X actually is: -- "We would like to learn a generative model for **sets X** of the form". --"... to refer to the **class X_i** ...". -- "we can lower bound the log-likelihood of each **dataset X** ..." - 3.2: "In general, if we wish to learn a model for X in which each latent variable ci affects some arbitrary subset Xi of the data (**where the Xi may overlap**), ...": Which is just like learning a Z for a labeled X but learning it in an unsupervised manner, i.e. the normal VAE, isn't it? If not, could you please elaborate on what is different (in the case of 3.2 only, I mean)? i.e. Could you please elaborate on what's different (in terms of learning) between 3.2 and a normal latent Z that is definitely allowed to affect different classes of the data without knowing the classes? - Figure 1 is helpful to clarify the main idea of a VHE. - "In a VHE, this recognition network takes only small subsets of a class as input, which additionally ...": And that also clearly leads to loss of information that could have been used in learning. So there is a possibility for potential regularization but there is definitely a big loss in estimation power. This is obviously possible with any regularization technique, but I think it is more of an issue here since parts of the data are not even used in learning. - "Table 4.1 compares these log likelihoods, with VHE achieving state-of-the-art. To": Where is Table 4.1?? - This is a minor point and did not have any impact on the evaluation but VAE --> VHE, reparameterization trick --> resampling trick. Maybe providing rather original headings is better? It's a style issue that is up to tastes anyway so, again, it is minor. - "However, sharing latent variables across an entire class reduces the encoding cost per element is significantly": typo. - "Figure ?? illustrates".
iclr_2018_Hk8XMWgRb
Not-So-Random Features We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game. Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.
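A very loose sketch of the kernel-alignment ingredient (not the authors' algorithm, which iteratively refines the SVM margin via a two-player game): score candidate frequencies of a translation-invariant kernel by how well the induced Fourier-feature kernel aligns with the labels, and keep the best-scoring frequencies as the learned, "not-so-random" features.

```python
import numpy as np

def fourier_features(X, W):
    """Random-Fourier-style feature map for a translation-invariant kernel with frequencies W."""
    Z = X @ W.T
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(W.shape[0])

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(np.sin(3.0 * X[:, 0]))            # labels driven by one informative frequency

# Score single candidate frequencies by the alignment of their induced rank-2 kernel,
# then keep the top-scoring ones as the learned feature map for a linear SVM.
candidates = rng.normal(scale=4.0, size=(500, 5))
scores = []
for w in candidates:
    Phi = fourier_features(X, w[None, :])
    scores.append(alignment(Phi @ Phi.T, y))
best = candidates[np.argsort(scores)[-20:]]   # the 20 best-aligned frequencies
Phi_learned = fourier_features(X, best)       # features to feed into a linear SVM
```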
In this paper the authors consider directly learning Fourier representations of shift/translation-invariant kernels for machine learning applications. They choose the alignment of the kernel to data as the objective function to optimize. They empirically verify that the features they learned lead to good quality SVM classifiers. My problem with the paper is that even though at first glance learning adaptive feature maps seems to be an attractive approach, the authors' contribution is actually very limited. Below I list some of the key problems. First of all, the authors claim in the introduction that their algorithm is very fast and comes with provable theoretical guarantees. But in fact later they admit that the problem of optimizing the alignment is a non-convex problem and the authors end up with a couple of heuristics to deal with it. They do not really provide any substantial theoretical justification of why these heuristics work in practice even though they observe it empirically. The assumption that large Fourier peaks happen close to the origin is probably well-justified from the empirical point of view, but it is a hack, not a well-established, well-grounded theoretical method (the authors claim that in their experiments they found it easy to find informative peaks, even in hundreds of dimensions, but these experiments are limited to the SVM setting; I have no idea how these empirical findings would translate to other kernelized algorithms using these adaptive features). The Langevin dynamics algorithm used by the authors to find the peaks (where the gradient is available) gives only weak theoretical guarantees (as the authors actually admit) and this is a well-known method, certainly not a novelty of this paper. Finally, the authors notice that "In the rotation-invariant case, where Ω is a discrete set, heuristics are available". That is really not very informative (the authors refer to the Appendix, so I carefully read that part of the Appendix, but it is extremely vague; it is not clear at all how the Langevin dynamics can be "emulated" by a discrete Markov chain that mixes fast; the authors do not provide any justification of that approach; what is the mixing time?; how is the "good emulation property" exactly measured?). In the conclusions the authors admit that: "Many theoretical questions remain, such as accelerating the search for Fourier peaks". I think that the problem of accelerating this approach is a critical point that this publication is missing. Without this, it is actually really hard to talk about a general mechanism of learning adaptive Fourier features for kernel algorithms (which is how the authors present their contribution); instead we have a method heavily customized and well-tailored to the (not particularly exciting) SVM scenario (with optimization performed by the standard annealing method; it is not clear at all whether this approach for optimizing the alignment would provide good quality models for other downstream kernel applications) that uses lots of task-specific hacks and heuristics to efficiently optimize the alignment. Another problem is that it is not clear at all to me how the authors' approach can be extended to non-shift-invariant kernels that do not benefit from Bochner's Theorem. Such kernels are very related to neural networks (for instance PNG kernels with linear rectifier nonlinearities correspond to random layers in NNs with ReLU) and in the NN context are much more interesting than radial basis function kernels or, in general, shift-invariant kernels.
A general kernel method should address this issue (the authors just claim in the conclusions that it would be interesting to explore the NN context in more detail). To sum it up, it is a solid submission, but in my opinion one without a substantial contribution, working only in a very limited setting while relying heavily on many unproven hacks and heuristics.
iclr_2018_B1mvVm-C-
Universal Agent for Disentangling Environments and Tasks Recent state-of-the-art reinforcement learning algorithms are trained under the goal of excelling in one specific task. Hence, both environment and task specific knowledge are entangled into one framework. However, there are often scenarios where the environment (e.g. the physical world) is fixed while only the target task changes. Hence, borrowing the idea from hierarchical reinforcement learning, we propose a framework that disentangles task and environment specific knowledge by separating them into two units. The environment-specific unit handles how to move from one state to the target state; and the task-specific unit plans for the next target state given a specific task. The extensive results in simulators indicate that our method can efficiently separate and learn two independent units, and also adapt to a new task more efficiently than the state-of-the-art methods.
The authors propose to decompose reinforcement learning into a PATH function that can learn how to solve reusable sub-goals an agent might have in a specific environment and a GOAL function that chooses subgoals in order to solve a specific task in the environment using path segments. So I guess it can be thought of as a kind of hierarchical RL. The exposition of the model architecture could use some additional detail to clarify some steps and possibly fix some minor errors (see below). I would prefer less material but better explained. I had to read a lot of sections more than once and use details across sections to fill in gaps. The paper could be more focused around a single scientific question: does the PATH function as formulated help? The authors do provide a novel formulation and demonstrate the gains on a variety of concrete problems taken from the literature. I also like that they try to design experiments to understand the role of specific parts of the proposed architecture. The graphs are WAY TOO SMALL to read. Figure #s are missing from several figures. MODEL & ARCHITECTURE The PATH function, given a current state s and a goal state s', returns a distribution P(A) over the best first action to take to get to the goal. (If the goal state s' were just the next state, then this would just be a dynamics model and this would be model-based learning? So I assume there are multiple steps between s and s'?). At the beginning of section 2.1, I think the authors suggest the PATH function could be pre-trained independently by sampling a random state in the state space to be the initial state and a second random state to be the goal state and then using an RL algorithm to find a path. Presumably, once one had found a path ((s, a0), (s1, a1), (s2, a2), ..., (sn-1, an-1), s'), one could then train the PATH policy on the triple (s, s', a0)? This seems like a pretty intense process: solving some representative subset of all possible RL problems for a particular environment ... Maybe one chooses s and s' so they are not too far away from each other (the experimental section later confirms this distance is >= 7. Maybe bring this detail forward)? The expression Trans'((s, s'), a) = (Trans(s, a), s') was confusing. I think the idea here is that the expression Trans'((s, s'), a) represents the n-step transition function and 'a' represents the first action? The second step is to train the goal function for a specific task. So I gather our policy takes the form of the composed function PATH(s, Tau(s, th^g), a; th^p), and the chain rule gives something close to their expression in 2.2: d/d{th^g} PATH(s, Tau(s, th^g), a; th^p) = {d/d{s'} PATH}(s, Tau(s, th^g), a) · d/d{th^g} Tau(s, th^g). What is confusing is that they define A(s, a, th^p, th^g, th^v) = sum_i gamma^i r_{t+i} + gamma^k V(s_{t+k}; th^v) - V(s_t; th^v). The left side contains th^p and th^g, but the right side does not. Should these parameters be taken out of the n-step advantage function A? The second alternative for training the goal function tau seems confusing. I get that tau is going to be constrained by whatever representation the PATH function was trained on and that this representation might affect the overall performance. I didn't get the contrast with method one. How do we treat the output of Tau as an action? Are you thinking of the gradient coming back through PATH as a reward signal? More detail here would be helpful.
EXPERIMENTS: Lavaworld: the authors show that pretraining the PATH function on longer 7-11 step policies leads to better performance when given a specific Lava world problem to solve. So the PATH function helps and longer paths are better. This seems reasonable. What is the upper bound on the size of PATH lengths you can train? Reachability: the authors show that different ways of abstracting the state s into a vector encoding affect the performance of the system. From a scientific point of view, this seems orthogonal to the point of the paper, though it is relevant if you were trying to build a system. Taxi: the authors train the PATH function on reachability and then show that it works for TAXI. This isn't too surprising. Both picking up the passenger (reachability) and dropping them off somewhere are essentially the same task: moving to a point. It is interesting that the Task function is able to encode the higher level structure of the TAXI problem's two phases. Another task you could try is to learn to perform the same task in two different environments. Perhaps the TAXI problem, but you have two different taxis that require different actions in order to execute the same path in state space. This would require a phi(s) function that is trained in a way that doesn't depend on the action a. ATARI 2600 games: I am not sure what state restoration is. Is this where you artificially return an agent to a state that would normally be hard to reach? The authors show that UA results in gains on several of the games. The authors also demonstrate that multiple agents with different policies can be used to collect training examples for the PATH function that improve its utility over training examples collected by a single agent policy. RELATED WORK: Good contrast to hierarchical learning: we don't have switching regimes here between high-level options. I don't understand why the authors say the PATH function can be viewed as an inverse? Oh - now I get it. Because it takes an extended n-step transition and generates an action.
iclr_2018_HkJ1rgbCb
Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited used in practical applications, such as drug design. To this end, we learn to identify molecular substructures -rationales -that are associated with the target chemical property (e.g., toxicity). The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task. We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to the rationale selection and prediction based on it, where the latter induces the reward function. We evaluate the approach on two benchmark toxicity datasets. We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales. Additionally, we validate the extracted rationales through comparison against those described in chemical literature and through synthetic experiments.
This paper presents an interesting approach to identify substructural features of molecular graphs contributing to the target task (e.g. predicting toxicity). The algorithm first builds two conv nets for molecular graphs, one for searching relevant substructures (policy improvement), and another for evaluating the contribution of selected substructures to the target task (policy evaluation). These two phases are iterated in a reinforcement learning manner as policy iterations. Both parts are based on conv nets for molecular graphs, and this framework is a kind of 'self-supervised' scheme compared to the standard situation where the environment provides rewards. The experimental validations demonstrate that this model can learn a competitively performing conv net dependent only on the highlighted substructures, and the paper also reports a case study on the inhibition assay for hERG proteins. Technically speaking, the proposed self-supervised scheme with two conv nets is very interesting. This demonstrates how we can perform progressive substructure selections over molecular graphs to highlight relevant substructures as well as maximizing the prediction performance. Given that conv nets for molecular graphs are not trivially interpretable, this would provide a useful approach to using conv nets for more explicit interpretations of how the task can be performed by neural nets. However, at the same time, I had one big question about the purpose and usage of this approach. As the paper states in the Introduction, the target problem is 'hard selection' of substructures, rather than the 'soft selection' that neural nets (with attention, for example) or neural-net fingerprints usually provide. Then, the problem becomes a combinatorial search problem, which has been long studied in the data mining and machine learning community. There exist many exact methods such as LEAP, CORK, and graphSig under the name of 'contrast/emerging/discriminative' pattern mining developed exactly for this task. Also, it is widely known that we can even perform a wrapper approach for supervised learning from graphs simultaneously with searching all relevant subgraphs, as seen in Kudo+ NIPS 2004, Tsuda ICML 2007, Saigo+ Machine Learning 2009, etc. It is unconvincing that the proposed neural-net approach fits this hard combinatorial task better than these existing (mostly exact) methods. In addition to the above point, several technical points below are also unclear. - Does the simple heuristic of adding 'selected or not' variables to the atom features work as intended? Because this is fed to the conv net, it seems we can ignore these elements of the features by tweaking the weight parameters accordingly. If the conv net performs the best when we use the entire structure, then learning might be forced to ignore the selection. Can we guarantee in some sense this would not happen? - Zeroing out the atom features also sounds quite simple and a bit groundless. Confusingly, the P network also has an attention mechanism, and it is a bit unclear to me what actually worked. - In the experiments, the baseline is based on LR, but this would not be fair because usually we cannot expect any linear relationship for molecular fingerprints. They are highly correlated due to the inclusion relationships between subgraphs. At least, a nonlinear baseline (e.g. random forest or something similar) should be presented for discussing the results.
Pros:
- An interesting self-supervised framework for highlighting relevant substructures for a given prediction task.
- The hard selection setting is encoded in the input graph featurization.

Cons:
- It is a bit unconvincing that identifying 'hard selections' is better suited to neural nets than to the many existing exact methods (which do not use neural networks). At least one of the typical ones should be compared against or discussed.
- I'm still not quite sure whether or not some of the heuristic parts work as intended.
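To make the featurization concern in the review concrete, here is a minimal sketch of the two hard-selection encodings being questioned: appending a selected/not flag to each atom's features versus zeroing out the features of unselected atoms. The array layout and function names are hypothetical and are not the authors' implementation.

```python
import numpy as np

def apply_hard_selection(atom_features, selected_mask, mode="append_flag"):
    """Illustrates the two featurization choices discussed in the review.

    atom_features: (num_atoms, num_features) array of per-atom descriptors.
    selected_mask: (num_atoms,) binary array, 1 if the atom is in the rationale.
    mode: "append_flag" adds a selected/not column; "zero_out" masks
          the features of unselected atoms.
    """
    mask = selected_mask.astype(atom_features.dtype).reshape(-1, 1)
    if mode == "append_flag":
        # The conv net could, in principle, learn weights that ignore this column.
        return np.concatenate([atom_features, mask], axis=1)
    elif mode == "zero_out":
        # Unselected atoms contribute all-zero feature vectors.
        return atom_features * mask
    raise ValueError(mode)

# Toy usage: 5 atoms, 8 features, atoms 0 and 3 selected as the rationale.
feats = np.random.randn(5, 8)
sel = np.array([1, 0, 0, 1, 0])
print(apply_hard_selection(feats, sel, "append_flag").shape)  # (5, 9)
print(apply_hard_selection(feats, sel, "zero_out").shape)     # (5, 8)
```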
iclr_2018_HJdXGy1RW
We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increasing depths. The numbers of convolution layers and parameters increase only linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on the benchmark datasets CIFAR10, CIFAR100, and SVHN. Given a sufficient amount of data, as in the SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high-performance deep convolutional neural networks with a simple network architecture. Moreover, through investigating the behavior and performance of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.
The paper presents a new CNN architecture, CrescendoNet. It does not have skip connections yet performs quite well. Overall, I think the contributions of this paper are too marginal for acceptance at a top-tier conference.

The architecture is competitive on SVHN and CIFAR-10 but not on CIFAR-100. The performance is not strong enough to warrant acceptance by itself. FractalNets and DiracNets (https://arxiv.org/pdf/1706.00388.pdf) have demonstrated that it is possible to train deep networks without skip connections and achieve high performance. While CrescendoNet seems to slightly outperform FractalNet in the experiments conducted, it is itself outperformed by DiracNet. Hence, CrescendoNet does not have the best performance among skip-connection-free networks.

You claim that FractalNet shows no ensemble behavior. This is clearly not true because FractalNet has ensembling directly built in, i.e. different paths in the network are explicitly averaged. If averaging paths leads to ensembling in CrescendoNet, it leads to ensembling in FractalNet. While the longest path in FractalNet is stronger than the other members of the ensemble, it is nevertheless an ensemble. Besides, as Veit showed, ResNet also shows ensemble behavior. Hence, using ensembling in deep networks is not a significant contribution.

The authors claim that "Through our analysis and experiments, we note that the implicit ensemble behavior of CrescendoNet leads to high performance". I don't think the experiments show that ensemble behavior leads to high performance. Just because a network performs averaging of different paths and individual paths perform worse than sets of paths doesn't imply that ensembling as a mechanism is in fact the cause of the performance of the entire architecture. Similarly, you say "On the other hand, the ensemble model can explain the performance improvement easily." Veit et al. only claimed that ensembling is a feature of ResNet; they did not claim that this was the cause of the performance of ResNet.

Path-wise training is not original enough, or indeed different enough from drop-path, to count as a major contribution.

You claim that the number of layers "increase exponentially" in FractalNet. This is misleading. The number of layers increases exponentially in the number of paths, but not in the depth of the network. In fact, the number of layers is linear in the depth of the network. Since depth is the meaningful quantity here, CrescendoNet does not have an advantage over FractalNet in terms of layer number. Also, it is always possible to simply add more paths to FractalNet if desired without increasing depth. Instead of using 1 long path, one can simply use 2, 3, 4, etc. While this is not explicitly mentioned in the FractalNet paper, it clearly would not break the design principle of FractalNet, which is to train a path of multiple layers by ensembling it with a path of fewer layers. CrescendoNets do not extend beyond this design principle.

You say that "First, path-wise training procedure significantly reduces the memory requirements for convolutional layers, which constitutes the major memory cost for training CNNs. For example, the higher bound of the memory required can be reduced to about 40% for a Crescendo block with 4 paths where interval = 1." This is misleading, as you need to store the weights of all convolutional layers to compute the forward pass, and the majority of the weights of all convolutional layers to compute the backward pass, no matter how many weights you intend to update.
In a response to a question I posed, you mentioned that what you meant was "we use about 40% memory for the gradient computation and storage". Fair enough, but "gradient computation and storage" is not mentioned in the paper. Also, the reduction to 40% does not apply, e.g., to vanilla SGD, because the computed gradient can be immediately added to the weights and does not need to be stored or combined with, e.g., a stored momentum term.

Finally, nowhere in the paper do you mention which nonlinearities you used, or whether you used any at all. In future revisions, this should be rectified.

While I can definitely imagine that your network architecture is well designed and a good choice for image classification tasks, there is a very saturated market of papers proposing various architectures for CIFAR-10 and related datasets. To be accepted to ICLR, either outstanding performance or truly novel design principles are required.
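For readers unfamiliar with the architecture under review, here is an illustrative PyTorch sketch of a block with independent convolutional paths of increasing depth whose outputs are averaged, which is how the abstract describes a Crescendo block. The channel widths, normalization, and averaging rule here are guesses, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CrescendoBlock(nn.Module):
    """Parallel convolutional paths of increasing depth whose outputs are averaged.

    Path i contains i + 1 conv layers, so depth and parameter count grow
    linearly with the number of paths. Averaging the independent paths is
    what suggests the implicit-ensemble reading discussed in the review.
    """
    def __init__(self, channels, num_paths=4):
        super().__init__()
        self.paths = nn.ModuleList()
        for depth in range(1, num_paths + 1):
            layers = []
            for _ in range(depth):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            self.paths.append(nn.Sequential(*layers))

    def forward(self, x):
        return torch.stack([p(x) for p in self.paths], dim=0).mean(dim=0)

# Toy usage: a 4-path block on a CIFAR-sized feature map.
block = CrescendoBlock(channels=32, num_paths=4)
out = block(torch.randn(2, 32, 32, 32))
print(out.shape)  # torch.Size([2, 32, 32, 32])
```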
iclr_2018_SyvCD-b0W
This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker. Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm. Neither prior domain knowledge about the data nor feature preprocessing is needed. We significantly reduce the time of AutoML with a naturally inspired algorithm, Parallel Hill Climbing (PHC). By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time. These pipelines can be used as is or as a starting point for human experts to build on. By focusing on the modelling process, Autostacker breaks the tradition of following fixed-order pipelines by exploring not only single-model pipelines but also innovative combinations and structures. As we show in the experiment section, Autostacker achieves significantly better performance in terms of both test accuracy and time cost compared with initial human trials and a recent popular AutoML system.
The authors introduce a simple hill-climbing approach to (very roughly) search the space of cascades of classifiers. They first reinvent the concept of cascades of classifiers as an extension of stacking (https://en.wikipedia.org/wiki/Cascading_classifiers). Cascading is like stacking but carries over all original model inputs to the next classifier. The authors cast this nicely into a network view with nodes that are classifiers and layers that use the outputs from previous layers. However, other than relating this line of work to the ICLR community, this interpretation of cascading is not put to any use.

The paper incorrectly claims that existing AutoML frameworks only allow using a specific single model. In fact, Auto-sklearn (Feurer et al., 2015) automatically constructs ensembles of up to 50 models, helping it to achieve more robust performance.

I have some questions about the hill-climbing approach:
- How is the "one change" implemented in the hill climber? Does this evaluate results for each of several single changes and pick the best one? Or does it simply change one classifier and continue? Or does it evaluate all possible individual changes and pick the best one? I note that the term "HillClimber" would suggest that some sort of improvement has to be made in each step, but the algorithm description does not show any evaluation step at this point. The hill climbing described in the text seems to make sense, but the pseudocode appears broken.

Section 4.2: I am surprised that there is only a comparison to TPOT, not one to Auto-sklearn, especially since Auto-sklearn constructs ensembles post hoc, which would make for an interesting comparison. As the maximum number of layers is 5, I assume that scaling is actually an issue in practice after all, and the hundreds of primitive models alluded to in the introduction are not a reality at this point.

The paper mentions guarantees twice:
- "This kind of guarantee of not being worse on average comes from the the characteristic of AutoStacked"
- "can be guaranteed to do better on average"
I am confident that this is a mistake / an error in choosing the right expression in English. I cannot see why there should be a guarantee of any sort. Empirically, Autostacker appears better than RandomForest, but that is not a big feat. The improvements vs. TPOT are more relevant.

One question: the data sets used in Olson et al. are very small. Does TPOT overfit on these? Since AutoStacker does not search as exhaustively, could this explain part of the performance difference? How many models are evaluated in total by each of the methods?

I am unsure about the domain of the HillClimber. Does it also search over which classifiers are used where in the pipeline, or only over their hyperparameters?

Minor issues:
- The authors systematically use citations wrongly, apparently never using citep but only having inline citations.
- Some parts of the paper feel unscientific, such as using phrases like "giant possible search space".
- There are also several English grammar mistakes (e.g., see the paragraph containing "for the discover") and typos.
- Why exactly would a small amount of data be more likely to be unbalanced?
- The data "cleaning" method of throwing out data with missing values is very unclean. I hope this has only been applied to the training set and that no test set data points have been dropped?
- Line 27 of Algorithm 1: sel_pip has not been defined here.

Overall, this is an interesting line of work, but it does not seem quite ready for publication.

Pros:
- AutoML is a topic of high importance to both academia and industry
- Good empirical results

Cons:
- Cascading is not new
- Unclear algorithm: what exactly does the HillClimber function do?
- Missing baseline comparison to Auto-sklearn
- Incorrect statements about guarantees
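To make the reviewer's question about the "one change" step concrete, here is a generic sketch of parallel hill climbing over pipeline configurations in which mutated candidates are kept only if they score well. This is one plausible reading of such an algorithm, not the procedure from the paper, and all names here are hypothetical.

```python
import random

def parallel_hill_climbing(init_candidates, mutate, evaluate, generations=10):
    """Generic parallel hill climbing over pipeline configurations.

    init_candidates: list of initial pipeline configurations.
    mutate: returns a copy of a configuration with one random change
            (e.g. swap a classifier or perturb a hyperparameter).
    evaluate: returns a validation score for a configuration.
    Each generation, every surviving candidate spawns mutated children, and
    only the best-scoring configurations are kept (the "improvement" step
    the review asks about).
    """
    population = list(init_candidates)
    for _ in range(generations):
        children = [mutate(c) for c in population for _ in range(3)]
        scored = sorted(population + children, key=evaluate, reverse=True)
        population = scored[:len(init_candidates)]  # keep the fittest
    return max(population, key=evaluate)

# Toy usage: "pipelines" are just integers, score is negative distance to 42.
best = parallel_hill_climbing(
    init_candidates=[random.randint(0, 100) for _ in range(4)],
    mutate=lambda c: c + random.choice([-3, -1, 1, 3]),
    evaluate=lambda c: -abs(c - 42),
)
print(best)
```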
iclr_2018_H1VGkIxRZ
ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in-and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach (Hendrycks & Gimpel, 2017) by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95%.
----- UPDATE -----
The authors addressed my concerns satisfactorily. Given this and the other reviews, I have bumped up my score from a 5 to a 6.
------------------

This paper introduces two modifications that allow neural networks to better distinguish between in- and out-of-distribution examples: (i) adding a high temperature to the softmax, and (ii) adding adversarial perturbations to the inputs. This is a novel use of existing methods.

Some roughly chronological comments follow:

In the abstract you don't mention that the result given is for CIFAR-10 mixed with TinyImageNet.

The paper is quite well written aside from some grammatical issues. In particular, articles are frequently missing from nouns. Some sentences need rewriting (e.g. in 4.1 "which is as well used by Hendrycks...", in 5.2 "performance becomes unchanged"). It is perhaps slightly unnecessary to give a name to your approach (ODIN), but in a world where there are hundreds of different kinds of GANs you could be forgiven.

I'm not convinced that the performance of the network for in-distribution images is unchanged, as this would require you to be able to isolate 100% of the in-distribution images. I'm curious as to what would happen to the overall accuracy if you ignored the results for in-distribution images that appear to be out-of-distribution (e.g. by simply counting them as incorrect classifications). Would there be a correlation between difficult-to-classify images and those that don't appear to be in distribution?

When you describe the method, it relies on a threshold delta which does not appear to be explicitly mentioned again.

In terms of experimentation, it would be interesting to see the reciprocal of the results between two datasets. For instance, how would a network trained on TinyImageNet cope with out-of-distribution images from CIFAR-10?

Section 4.5 felt out of place, as to me the discussion section flowed more naturally from the experimental results. This may just be a matter of taste. I did like the observations in 5.1 about class deviation, although then, what would happen if the out-of-distribution dataset had a similar class distribution to the in-distribution one? (This is, in part, addressed in the CIFAR 80/20 experiments in the appendices.)

This appears to be a borderline paper, as I am concerned that the method isn't sufficiently novel (although it is a novel use of existing methods).

Pros:
- Baseline performance is exceeded by a large margin
- Novel use of adversarial perturbation and temperature
- Interesting analysis

Cons:
- Doesn't introduce any novel methods of its own
- Could do with additional experiments (as mentioned above)
- Minor grammatical errors
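To make the mechanism under discussion concrete, here is a minimal sketch of the temperature-scaling-plus-perturbation score described in the abstract, assuming a standard PyTorch classifier. The temperature and epsilon values shown are illustrative magnitudes rather than the tuned settings from the paper, and the threshold delta the review mentions must still be chosen on validation data.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Temperature-scaled, input-perturbed maximum softmax score."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Perturb the input in the direction that increases the max softmax probability.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_perturbed = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max(dim=1).values  # compare against a threshold delta

# Usage: flag inputs whose score falls below a validated threshold as out-of-distribution.
# is_out_of_distribution = odin_score(net, images) < delta
```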
iclr_2018_SkA-IE06W
Published as a conference paper at ICLR 2018 WHEN IS A CONVOLUTIONAL FILTER EASY TO LEARN? We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input. We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches. To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions. Our theory also justifies the two-stage learning rate strategy in deep neural networks. While our focus is theoretical, we also present experiments that justify our theoretical findings.
This paper studies the problem of learning a single convolutional filter using SGD. The main result is: if the "patches" of the convolution are sufficiently aligned with each other, then SGD with a random initialization can recover the ground-truth parameter of a convolutional filter (single filter, ReLU, average pooling). The convergence rate, and what "sufficiently aligned" means, depend on some quantities related to the underlying data distribution. A major strength of the result is that it works for general continuous distributions and does not rely on the input distribution being Gaussian; the main weakness is that some of the distribution-dependent quantities are not very intuitive, and the alignment requirement might be very high.

Detailed comments:
1. It would be good to clarify what the angle requirement means on page 2. It says the angle between Z_i and Z_j is at most \rho; is this for any i, j? From the later part it seems that each Z_i should be \rho-close to the average, which would imply pairwise closeness (with some constant factor).
2. The paper first proves a result for a single neuron, which is a clean result. It would be interesting to see what the values of \gamma(\phi) and L(\phi) are for some distributions (e.g. Gaussian, uniform in the hypercube, etc.) to give more intuition.
3. The convergence rate depends on \gamma(\phi_0). From the initialization, \phi_0 is probably very close to \pi/2 (the closeness depends on dimension), which is also likely to make \gamma(\phi_0) depend on dimension (this is especially true for Gaussian input).
4. More precisely, \gamma(\phi_0) needs to be at least 6 L_{cross} for the result to work, and L_{cross} seems to be a problem-dependent constant that is not related to the dimension of the data. Also, \gamma(\phi_0) depends on \gamma_{avg}(\phi_0) and \rho; when \rho is reasonable (say a constant), \gamma(\phi_0) really needs to be a constant that is independent of dimension. On the other hand, in Theorem 3.4 we can see that the upper bound on \alpha (the quality of initialization) depends on the dimension.
5. Even assuming \rho is a constant strictly smaller than \pi/2 seems a bit strong. It is certainly plausible that nearby patches are highly correlated, but what is required here is that all patches are close to the average. Given an image it is probably not too hard to find an almost all-white patch and an almost all-dark patch such that they cannot both be within a good angle to the average.

Overall I feel the result is interesting but hard to interpret correctly. The details of the theorem do not really support the high-level claims very strongly. The paper would be much better if it went over several example distributions and showed explicitly what the guarantees are. The reviewer tried to do that for Gaussian input and, as mentioned above (esp. point 4), the result does not seem very impressive; maybe there are other distributions where this result works better?

After reading the response, I feel the contribution for the single-neuron case does not require too many assumptions and is itself a reasonable result. I am still not convinced by the convolution case (which is the main point of this paper): even though it does not require Gaussian input (a major plus), it still seems very far from a "general distribution". Overall this is a first step in an interesting direction, so even though it is currently a bit weak I think it is OK to be accepted.
I hope the revised version will clearly discuss the limitations of the approach and potential future directions as the response did.
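To make the single-neuron setting easier to picture, here is a toy experiment in the spirit of point 2 above: labels are generated by a ReLU teacher neuron and SGD is run from a random initialization while the angle to the ground truth is tracked. This is only an illustrative sketch for Gaussian input; it does not compute the paper's \gamma(\phi) or L(\phi) quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
w = rng.normal(size=d) * 0.1  # random initialization

def angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

lr = 0.05
for t in range(20001):
    x = rng.normal(size=d)                      # one fresh sample per step
    y = max(np.dot(w_true, x), 0.0)             # ReLU teacher label
    pred = max(np.dot(w, x), 0.0)
    grad = (pred - y) * x * (np.dot(w, x) > 0)  # subgradient of the squared loss
    w -= lr * grad
    if t % 5000 == 0:
        print(f"step {t:5d}  angle to w_true = {angle(w, w_true):.2f} deg")
```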
iclr_2018_HJcjQTJ0W
Massive data exist on user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services, but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while protecting data privacy simultaneously, we propose to leverage intermediate representations of the data, which is achieved by splitting the DNNs and deploying them separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained based on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependency of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on the characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated with the CIFAR-10 dataset.
1. Paper summary
This paper describes a technique using three neural networks to privatize data and make predictions: a feature extraction network, an image classification network, and an image reconstruction network. The idea is to learn a feature extraction network so that the image classification network performs well and the image reconstruction network performs poorly.

2. High-level comments (subjective)
I think the presentation of the paper is somewhat scattered: in Section 2 the authors introduce their network and their metrics for utility and privacy and then immediately do a sensitivity analysis. Section 3 continues with a sensitivity analysis, now considering the performance and storage of the method. Then 2.5 pages are spent on channel pruning. I would have liked the authors to spend more time justifying why we should trust their method as a privacy-preserving technique (described in detail below). The authors clearly performed an impressive amount of sensitivity experiments. Assuming the privacy claims are reasonable (which I have some doubts about below), this paper is clearly useful to any company wanting to do privacy-preserving classification. At the same time, I think the paper does not have a significant amount of machine learning novelty in it.

3. High-level technical comments
I have a few doubts about this method as a privacy-preserving technique:
- Nearly every privacy-preserving technique gives a guarantee, e.g., differential privacy guarantees a statistical notion of privacy and cryptographic methods guarantee a computational notion of privacy. In this work the authors provide a way to measure privacy, but there is no guarantee that if someone uses this method their data will be private, by some definition, even under certain assumptions.
- Another nice thing about differential privacy and cryptography is that they are impervious to different algorithms, because it is statistically hard or computationally hard to reveal sensitive information. Here there could be a better image reconstruction network that does a better job of reconstructing images than the ones used in the paper.
- It is not clear to me why PSNR is a useful way to measure privacy loss. I understand that it is a metric to compare two images based on the mean squared error, so a very private image should have a low PSNR while a not-private image should have a high PSNR, but I have no intuition about how small the PSNR should be to afford a useful amount of privacy. For instance, in nearly all of the images in Figures 21 and 22, I think it would be quite easy to guess the original images.

4. 1/2-sentence summary
While the authors did an extensive job evaluating different settings of their technique, I have serious doubts about it as a privacy-preserving method.
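For reference, here is the standard definition of the PSNR quantity the review questions as a privacy measure. This is the textbook formula, not the authors' code, and the images used below are random placeholders.

```python
import numpy as np

def psnr(original, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio between an original image and its
    reconstruction, the quantity used as the privacy-loss measure.
    Lower PSNR means the reconstruction is further from the original
    (i.e., nominally more private).
    """
    mse = np.mean((original.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no privacy at all
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy usage: a noisy "reconstruction" of a random 32x32 RGB image.
img = np.random.randint(0, 256, size=(32, 32, 3))
noisy = np.clip(img + np.random.normal(0, 25, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```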
iclr_2018_rJr4kfWCb
Lung cancer is the leading cause of cancer deaths in the world, and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method for automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules within patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility.
The authors compare a standard DL model (AlexNet) with a custom CNN-based solution on the well-known task of classifying lung tumours as benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better.

The paper is interesting, but it has a number of issues that prevent it from being accepted to the ICLR conference. First, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution only with AlexNet does not add much to the present landscape of methods for tackling this problem. Moreover, although the task is very well known and in the last few years gave rise to a steady flow of solutions, and was also the topic of a famous Kaggle competition, no discussion of that can be found in the manuscript. The novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified; moreover, the performance improvement w.r.t. AlexNet hardly supports the claim. The experimental setup consists of just a single training/test split, so no confidence intervals on the results can be given to show the stability of the solution. Sections 2.3 and 2.4 include only standard material that is unnecessary to mention given the target venue, and the references are limited and incomplete.

This given, I rate this manuscript as not suitable for ICLR 2018.
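As a concrete illustration of the evaluation protocol the review finds missing, here is a minimal sketch of repeated stratified evaluation with a confidence interval in scikit-learn. The classifier and synthetic data are placeholders, not the paper's model or dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in data; in practice this would be the nodule features and labels.
X, y = make_classification(n_samples=500, n_features=30, weights=[0.8, 0.2],
                           random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
print(f"accuracy: {scores.mean():.3f} +/- {half_width:.3f} (approx. 95% CI)")
```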
iclr_2018_r1saNM-RW
Despite their popularity, even efficient implementations of Support Vector Machines (SVMs) have proven to be computationally expensive to train at a large scale, especially in streaming settings. In this paper, we propose a coreset construction algorithm for efficiently generating compact representations of massive data sets to speed up SVM training. A coreset is a weighted subset of the original data points such that SVMs trained on the coreset are provably competitive with those trained on the original (massive) data set. We provide lower and upper bounds on the number of samples required to obtain accurate approximations to the SVM problem as a function of the complexity of the input data. Our analysis also establishes sufficient conditions on the existence of sufficiently compact and representative coresets for the SVM problem. We empirically evaluate the practical effectiveness of our algorithm against synthetic and real-world data sets.
The paper suggests an importance-sampling-based coreset construction for Support Vector Machines (SVMs). To understand the results, we need to understand coresets and importance sampling.

Coreset: In the context of SVMs, a coreset is a (weighted) subset of the given dataset such that for any linear separator, the cost of the separator with respect to the given dataset X is approximately (up to an error parameter \eps) the same as the cost with respect to the weighted subset. The main idea is that if one can find a small coreset, then finding the optimal separator (maximum margin etc.) over the coreset might be sufficient. Since the computation is done over a small subset of points, one hopes to gain in terms of running time.

Importance sampling: This is based on the theory developed in Feldman and Langberg, 2011 (and some previous works such as Langberg and Schulman, 2010, the reference to which is missing). The idea is to define a quantity called the sensitivity of a data point that captures how important this data point is in contributing to the cost function. A subset of data points is then sampled based on the sensitivities, and each sampled data point is given a weight proportional to the inverse of its sampling probability. Per the theory developed in these past works, sampling a subset of size proportional to the sum of sensitivities gives a coreset for the given problem.

So, the main contribution of the paper is to do all the sensitivity calculations for the SVM problem and then use the importance sampling theory to obtain bounds on the coreset size. One interesting point of this construction is that it involves solving the SVM problem on the given dataset, which may seem to defeat the purpose. However, the authors note that one only needs to compute the coreset of small batches of the given dataset and then use standard procedures (available in the streaming literature) to combine the coresets into a single coreset. This should give significant running time benefits. The paper also compares the results against the simple procedure where a small uniform sample from the dataset is used for computation.

Evaluation:
Significance: Coresets give significant running time benefits when working with very big datasets. Coreset construction in the context of SVMs is a relevant problem and should be considered significant.
Clarity: The paper is reasonably well written. The problem is well motivated and all the relevant issues are pointed out for the reader. The theoretical results are clearly stated as lemmas and theorems that one can follow without looking at the proofs.
Originality: The paper uses the previously developed theory of importance sampling. However, the sensitivity calculations in the SVM context are new to my knowledge. It is nice to know the bounds given in the paper and to understand the theoretical conditions under which we can obtain running time benefits using coresets.
Quality: The paper gives nice theoretical bounds in the context of SVMs. One aspect in which the paper is lacking is the empirical analysis. The paper compares the coreset construction only with simple uniform sampling. Since coreset construction is being sold as a fast alternative to previous methods for training SVMs, it would have been nice to see running time and cost comparisons with the other training methods discussed in Section 2.
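To make the sampling step concrete, here is a generic sketch of sensitivity-based coreset construction in the Feldman-Langberg framework. The SVM-specific sensitivity bounds, which are the paper's actual contribution, are assumed to be given and are replaced by placeholders here.

```python
import numpy as np

def importance_sampling_coreset(points, sensitivities, coreset_size, rng=None):
    """Generic sensitivity-based coreset construction.

    points: (n, d) array of data points.
    sensitivities: (n,) array of upper bounds on each point's sensitivity;
        computing these bounds for the SVM cost is not reproduced here.
    Returns the sampled points and their weights (inverse sampling
    probability, normalized by the coreset size).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    probs = sensitivities / sensitivities.sum()
    idx = rng.choice(len(points), size=coreset_size, replace=True, p=probs)
    weights = 1.0 / (coreset_size * probs[idx])
    return points[idx], weights

# Toy usage with made-up sensitivities (uniform sensitivities reduce to
# the uniform-sampling baseline mentioned in the review).
X = np.random.randn(10000, 5)
s = np.ones(len(X))
coreset, w = importance_sampling_coreset(X, s, coreset_size=200)
print(coreset.shape, w.sum())
```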
iclr_2018_H1srNebAZ
Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding. This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons. While being the core building block of deep neural networks, the way they encode information about the inputs and how such encodings emerge is still unknown. We report experiments providing strong evidence that hidden neurons behave like binary classifiers during training and testing. During training, analysis of the gradients reveals that a neuron separates two categories of inputs, which for layers close enough to the output remain impressively constant across training. During testing, we show that the fuzzy, binary partition described above embeds the core information used by the network for its prediction. These observations bring to light some of the core internal mechanics of deep neural networks, and have the potential to guide the next theoretical and practical developments.
--------------------
Review updates:
Rating 6 -> 7
Confidence 2 -> 4
The rebuttal and update addressed a number of my concerns, cleared up confusing sections, and moved the paper materially closer to being publication-worthy, thus I've increased my score.
--------------------

I want to love this paper. The results seem like they may be very important. However, a few parts were poorly explained, which led to this reviewer being unable to follow some of the jumps from experimental results to their conclusions. I would like to be able to give this paper the higher score it may deserve, but some parts first need to be further explained.

Unfortunately, the largest single confusion I had is on the first, most basic set of gradient results in Section 4.1. Without understanding this first result, it's difficult to decide to what extent the rest of the paper's results are to be believed. Fig 1 shows "the histograms of the average sign of partial derivatives of the loss with respect to activations, as collected over training for a random neuron in five different layers." Let's consider the top-left subplot of Fig 1, showing a heavily bimodal distribution (modes near -1 and +1). Is this plot made using data from a single neuron or from multiple neurons? For now let's assume it is for a single neuron, as the caption and text in 4.1 seem to suggest. If it is for a single neuron, then that neuron will have, for a single input example, a single scalar activation value and a single scalar gradient value. The sign of the gradient will either be +1 or -1. If we compute the sign for each input example and then aggregate over all training examples seen by this neuron over the course of training (or a subset for computational reasons), this will give us a list of signs. Let's collect these signs into a long list: [+1, +1, +1, -1, +1, +1, ...]. Now what do we do with this list? As far as I can tell, we can either average it (giving, say, .85 if the list has far more +1 values than -1 values) OR we can show a histogram of the list, which would just be two bars at -1 and +1. But we can't do both, indicating that some assumption above was incorrect. Which assumption in reading the text was incorrect?

Further in this direction, Section 4.1 claims "Zero partial derivatives are ignored to make the signal more clear." Are these zero partial derivatives of the post-relu or pre-relu activations? The text (Sec 3) points to activations as being post-relu, but in this case zero gradients should be a very small set (only occurring if all neurons in the next layer had zero pre-relu gradients, which is common for individual neurons but, I would think, not for all at once). Or does this mean the pre-relu gradient is zero, e.g. the common case where the gradient is zeroed because the pre-activation was negative and the relu at that point has zero slope? In this case we would be excluding a large set (about half!) of the gradient values, and it didn't seem from the context in the paper that this would be desirable.

It would be great if the above could be addressed. Below are some less important comments.

Sec 5.1: great results!

Fig 3: This figure studies "the first and last layers of each network". Is the last layer really the last linear layer, the one followed by a softmax? In this case there is no relu and the 0 pre-activation is not meaningful (softmax is shift invariant). Or is the layer shown (e.g. "stage3layer2") the penultimate layer?
Minor: in this figure, it would be great if the plots could be labeled with which networks/datasets they are from.

Sec 5.2 states "neuron partitions the inputs in two distinct but overlapping categories of quasi equal size." This experiment only shows that this is true in aggregate, not for specific neurons? I.e. the partition percent for each neuron could be sampled from U(45, 55) or from U(10, 90) and this experiment would not tell us which, correct? Perhaps this statement could be qualified.

Table 1: "52th percentile vs actual 53 percentile shown".

> Table 1: The more fuzzy, the higher the percentile rank of the threshold

This is true for the CIFAR net, but the opposite is true for ResNet, right?
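For reference, here is one plausible reading of how the Fig. 1 statistic could be computed, which may help locate where the reviewer's assumptions and the authors' procedure diverge: for every hidden neuron, average sign(dLoss/dActivation) over many training examples, yielding one number per neuron. The network, the data, and the choice to average per neuron are all assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
fc1, fc2 = nn.Linear(10, 32), nn.Linear(32, 3)
loss_fn = nn.CrossEntropyLoss()

sign_sum = torch.zeros(32)
n_batches = 100
for _ in range(n_batches):
    x = torch.randn(64, 10)
    y = torch.randint(0, 3, (64,))
    h = torch.relu(fc1(x))
    h.retain_grad()                    # keep dLoss/dActivation for hidden units
    loss = loss_fn(fc2(h), y)
    loss.backward()
    sign_sum += torch.sign(h.grad).mean(dim=0)  # average sign over the batch

avg_sign_per_neuron = sign_sum / n_batches      # one value in [-1, 1] per neuron
print(avg_sign_per_neuron[:5])
```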